CN104754186B - Apparatus control method - Google Patents


Info

Publication number
CN104754186B
CN104754186B (application CN201310754498.6A / CN201310754498A)
Authority
CN
China
Prior art keywords
track
acoustic image
control
audio
speaker
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201310754498.6A
Other languages
Chinese (zh)
Other versions
CN104754186A (en)
Inventor
周利鹤
李志雄
黄石锋
邓俊曦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Leafun Culture Science and Technology Co Ltd
Original Assignee
Guangzhou Leafun Culture Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Leafun Culture Science and Technology Co Ltd filed Critical Guangzhou Leafun Culture Science and Technology Co Ltd
Priority to CN201310754498.6A priority Critical patent/CN104754186B/en
Publication of CN104754186A publication Critical patent/CN104754186A/en
Application granted granted Critical
Publication of CN104754186B publication Critical patent/CN104754186B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Landscapes

  • Television Signal Processing For Recording (AREA)
  • Management Or Editing Of Information On Record Carriers (AREA)

Abstract

The present invention relates to performance-equipment control technology, and specifically to an apparatus control method. The method comprises: displaying a timeline on the display interface of an integrated control console; adding and/or deleting tracks used to control the corresponding performance equipment, the tracks including light tracks; editing track attributes; adding materials; editing material attributes; and having the integrated control console issue the corresponding control instructions according to each track's attributes and its materials' attributes. The invention addresses the technical problems of editing and synchronously controlling current show programs and of flexibly controlling acoustic-image movement effects.

Description

Apparatus control method
Technical field
The present invention relates to performance-equipment control technology, and specifically to an apparatus control method.
Background technique
In the process of arranging a show program, one conspicuous problem is the coordination and synchronized control among the disciplines (audio, video, lighting, machinery, and so on). In a large performance each discipline is relatively independent, and a sizeable crew is needed to guarantee the smooth arrangement and execution of the show. While each discipline's program is being arranged, most of the time is spent on coordination and synchronization between the disciplines, and far less time is truly devoted to the program itself.
Because the disciplines are relatively independent, their control methods differ greatly. For live audio-visual synchronized editing, video is controlled from the lighting console while audio is controlled by multitrack playback editing. Audio can easily seek to an arbitrary time and start playing, but video can only start from the beginning or from a frame number (an operator can manually cue it to the corresponding position, but it cannot chase a timecode), which leaves live performance control short of flexibility.
In addition, in existing film and stage venues the speaker positions of the professional loudspeaker system are fixed: sound is reinforced mainly through left and right channels on the two sides of the stage, or through left-center-right channels, so the acoustic image is essentially pinned to the center of the stage. Although a venue is fitted with many speakers at various positions besides the stage mains, the acoustic image of the loudspeaker system hardly ever changes during an entire performance.
Flexible control of program editing and synchronization, and of acoustic-image movement effects, is therefore a key technical problem in this field that urgently needs to be solved.
Summary of the invention
The technical problem to be solved by the present invention is to provide a show integrated control method that simplifies multi-discipline control in venues such as film and stage performance, and that allows the acoustic image of the sound-reinforcement system to be set flexibly and quickly.
In order to solve the above technical problems, the technical solution adopted by the present invention is as follows:
An apparatus control method, comprising:
displaying a timeline on the display interface of an integrated control console;
adding and/or deleting tracks used to control the corresponding performance equipment, the tracks including device tracks;
editing track attributes;
adding materials;
editing material attributes;
the integrated control console issuing the corresponding control instructions according to each track's attributes and its materials' attributes.
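The claimed steps can be pictured as a small data model: tracks aligned on a shared timeline, each carrying materials with attributes, from which the console derives control instructions. The sketch below is purely illustrative — the class and field names are hypothetical, not from the patent:

```python
from dataclasses import dataclass, field

@dataclass
class Material:
    """A clip placed on a track; times are seconds on the shared timeline."""
    name: str
    start_time: float   # when playback actually starts on the timeline
    end_time: float     # when playback actually ends

@dataclass
class Track:
    kind: str                               # e.g. "audio", "video", "light", "device"
    muted: bool = False
    locked: bool = False
    materials: list = field(default_factory=list)

def control_instructions(tracks, now):
    """Emit one instruction per material that should be active at time `now`."""
    cmds = []
    for t in tracks:
        if t.muted:
            continue   # track mute is the master switch: nothing is issued
        for m in t.materials:
            if m.start_time <= now < m.end_time:
                cmds.append((t.kind, m.name))
    return cmds

tracks = [
    Track("audio", materials=[Material("intro", 0.0, 12.5)]),
    Track("light", muted=True, materials=[Material("wash", 0.0, 60.0)]),
]
print(control_instructions(tracks, 5.0))   # [('audio', 'intro')]
```

The muted light track contributes nothing even though its material spans the query time, matching the track-mute semantics described later in the embodiment.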
Compared with the prior art, the beneficial effect is that the playback master control embodies the idea of "integrated show management". From a technical point of view the coupling between these units is very low: each can operate alone without affecting the others, and their only prominent connection is time — when each of them plays what. From the user's point of view, however, that time relationship is precisely what matters most. If the states of these units can be gathered in one place to be viewed and managed, the user is spared a great deal of unnecessary trouble, for example in coordinating synchronization between the units, and in letting the disciplines cross-reference and correct one another while editing and saving.
Detailed description of the invention
Fig. 1 is a schematic diagram of the show integrated control method of the embodiment.
Fig. 2 is a schematic diagram of the audio control part of the show integrated control method of the embodiment.
Fig. 3 is a schematic diagram of the operating method of the audio sub-tracks of the show integrated control method of the embodiment.
Fig. 4 is a schematic diagram of the video control part of the show integrated control method of the embodiment.
Fig. 5 is a schematic diagram of the lighting control part of the show integrated control method of the embodiment.
Fig. 6 is a schematic diagram of the device control part of the show integrated control method of the embodiment.
Fig. 7 is a schematic diagram of the principle of the show integrated control system of the embodiment.
Fig. 8 is a schematic diagram of the principle of the audio control module of the show integrated control system of the embodiment.
Fig. 9 is a schematic diagram of the principle of the video control module of the show integrated control system of the embodiment.
Fig. 10 is a schematic diagram of the principle of the lighting control module of the show integrated control system of the embodiment.
Fig. 11 is a schematic diagram of the principle of the device control module of the show integrated control system of the embodiment.
Fig. 12 is a schematic diagram of the multitrack playback editing module interface of the show integrated control method of the embodiment.
Fig. 13 is a schematic diagram of the principle of the audio control part of the show integrated control system of the embodiment.
Fig. 14 is a schematic diagram of the principle of the track matrix module of the show integrated control system of the embodiment.
Fig. 15 is a schematic diagram of the principle of the video control part of the show integrated control system of the embodiment.
Fig. 16 is a schematic diagram of the principle of the lighting control part of the show integrated control system of the embodiment.
Fig. 17 is a schematic diagram of the principle of the device control part of the show integrated control system of the embodiment.
Fig. 18 is a schematic diagram of the steps of the variable-track acoustic-image trajectory control method of the embodiment.
Fig. 19 is a schematic diagram of the steps of the variable-track acoustic-image trajectory data generation method of the embodiment.
Fig. 20 is a schematic diagram of the speaker distribution map and a variable-track acoustic-image trajectory of the embodiment.
Fig. 21 is a schematic diagram of the triangular speaker nodes of the embodiment.
Fig. 22 is a schematic diagram of the steps of the variable-domain acoustic-image trajectory data generation method of the embodiment.
Fig. 23 is a schematic diagram of the speaker distribution map and a variable-domain acoustic-image trajectory of the embodiment.
Fig. 24 is a schematic diagram of the steps of the fixed-point acoustic-image trajectory data generation method of the embodiment.
Fig. 25 is a schematic diagram of the steps of the speaker-link data generation method of the embodiment.
Fig. 26 is a schematic diagram of a speaker link of the embodiment.
Specific embodiment
All types of acoustic-image trajectory control of the present invention are further described below with reference to the accompanying drawings.
This embodiment provides a show integrated control method that simplifies multi-discipline control in venues such as film and stage performance, and that allows the acoustic image of the sound-reinforcement system to be set flexibly and quickly. The method achieves the centralized arrangement and control of materials from multiple disciplines through the multitrack playback editing module of the integrated control console. As shown in Fig. 1, the show integrated control method (the apparatus control method) comprises the following steps:
S101: display a timeline on the display interface of the integrated control console;
S102: add and/or delete tracks used to control the corresponding performance equipment;
S103: edit track attributes;
S104: add materials;
S105: edit material attributes;
S106: the integrated control console issues the corresponding control instructions according to each track's attributes and its materials' attributes.
As shown in Fig. 2 and Fig. 12, the method includes multitrack audio playback control (corresponding to the audio control module described below), which comprises the following steps:
S201: add audio tracks. Add, in the display interface, one or more audio tracks (regions) 1, 2 running parallel to and aligned with the timeline; each audio track corresponds to one output channel.
S202: edit the audio track attributes. The editable audio track attributes include track lock and track mute. The track mute attribute controls whether the audio materials on the track and on all of its sub-tracks are muted; it is the master control of the audio track. When the track lock attribute is set, apart from a few individual attributes such as mute and hiding sub-tracks, no other attribute, and no material position or material attribute on the audio track, can be modified.
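The mute and lock semantics can be captured in two small predicates — a sketch under the stated assumptions that mute is a strict master switch and that a locked track still permits editing mute and sub-track visibility (the attribute names here are hypothetical):

```python
def effective_mute(track_muted, sub_muted):
    """Track mute is the master control: a sub-track is silent if either flag is set."""
    return track_muted or sub_muted

def can_edit(track_locked, attribute):
    """On a locked track, only mute and sub-track visibility remain editable."""
    return (not track_locked) or attribute in {"mute", "hide_subtracks"}

print(effective_mute(True, False))            # True — parent mute silences the sub-track
print(can_edit(True, "material_position"))    # False — locked tracks reject material edits
print(can_edit(True, "mute"))                 # True — mute stays editable even when locked
```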
S203: add audio materials. Add one or more audio materials 111, 112, 113, 211, 212, 213, 214 to audio tracks 1 and 2; a clip representing each audio material is generated on its audio track, and the length of audio track it occupies matches the material's total duration. Before an audio material is added, the audio material list is first obtained from the audio server, and the material to be placed on the audio track is then chosen from that list. Once an audio material has been added to an audio track, an audio property file corresponding to it is generated; the integrated control console controls the instructions sent to the audio server by editing this property file, rather than by directly calling or editing the material's source file, which guarantees the safety of the source files and the stability of the console.
S204: edit the audio material attributes. The audio material attributes include head position, tail position, start time, end time, total duration, and play length. The head position is the timeline moment corresponding to the material's head (in the vertical direction), and the tail position is the timeline moment corresponding to its tail; the start time is the moment on the timeline at which the material actually begins playing, and the end time is the moment at which it actually stops. In general the start time may be delayed past the head position, and the end time advanced before the tail position. The total duration is the material's original length — the time difference between head and tail positions — while the play length is how long the material actually plays on the timeline, i.e. the difference between start time and end time. Adjusting the start and end times therefore trims the audio material, so that only the part the user wants to hear is played.
Moving an audio material horizontally along its track changes the head and tail positions, but not their relative distance on the timeline, i.e. the material's length does not change. Adjusting the material's start and end times changes its actual playing time on the timeline and its played length. Several audio materials can be placed on one audio track, meaning that within the period represented by the timeline they are played in order (through the corresponding output channel). Note that the position (time location) of any material on an audio track can be adjusted freely, but the materials must not overlap.
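The relation between head/tail positions and start/end times, and the no-overlap rule, can be expressed in a few lines — a minimal sketch with hypothetical function names, using plain tuples for clip spans:

```python
def play_window(head_pos, tail_pos, start_time, end_time):
    """Clamp the actual playback window inside the material's timeline span.

    head_pos/tail_pos: timeline moments of the material's head and tail;
    start_time may be delayed past the head, end_time advanced before the tail.
    Returns (start, end, total_duration, play_length).
    """
    start = max(head_pos, start_time)
    end = min(tail_pos, end_time)
    total = tail_pos - head_pos          # the material's full duration
    played = max(0.0, end - start)       # the trimmed play length
    return start, end, total, played

def overlaps(materials):
    """Materials on one audio track must not overlap in time."""
    spans = sorted(materials)            # list of (start, end) tuples
    return any(a_end > b_start for (_, a_end), (b_start, _) in zip(spans, spans[1:]))

print(play_window(10.0, 40.0, 15.0, 38.0))   # (15.0, 38.0, 30.0, 23.0)
print(overlaps([(0, 10), (10, 20)]))          # False — touching clips are allowed
```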
Further, because the integrated control console manipulates only the property files corresponding to audio materials, it can also cut and splice them. Cutting splits one audio material on a track into several materials, each of which gets its own property file, while the source file remains intact; the console then issues control commands from these new property files to call the source file for the corresponding playback and audio operations. Splicing, conversely, merges two audio materials into one by merging their property files into a single property file, through which the console directs the audio server to call the two audio source files.
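Cut and splice can both be performed on the property records alone, leaving source files untouched. The record layout below (`source`, `in`, `start`, `end`) is a hypothetical stand-in for the patent's property file, chosen only to make the idea concrete:

```python
def split_clip(clip, cut):
    """Split one property record into two; the audio source file is untouched.

    `clip` maps a source file to a playback window:
    {"source": path, "in": offset-in-file, "start": timeline-start, "end": timeline-end}
    """
    assert clip["start"] < cut < clip["end"]
    left = dict(clip, end=cut)
    # the right half starts later in the source file by the same amount
    right = dict(clip, start=cut, **{"in": clip["in"] + (cut - clip["start"])})
    return left, right

def splice_clips(left, right):
    """Merge two adjacent records for the same source back into one."""
    assert left["source"] == right["source"] and left["end"] == right["start"]
    return dict(left, end=right["end"])

clip = {"source": "drums.wav", "in": 0.0, "start": 10.0, "end": 30.0}
a, b = split_clip(clip, 18.0)
print(a["end"], b["start"], b["in"])        # 18.0 18.0 8.0
print(splice_clips(a, b) == clip)           # True — a round trip recovers the clip
```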
Further, groups of physical keys, each group corresponding to one audio track, can be provided on the integrated control console so that the attributes of audio materials can be adjusted by hand — for instance a play-time adjustment knob that nudges a material's position on its track (its timeline position) earlier or later.
S205: add audio sub-tracks 12, 13, 14, 15, 21, 22. Add one or more sub-tracks attached to a given audio track; each sub-track runs parallel to the timeline and corresponds to the output channel of its parent audio track.
Each audio track can carry attached sub-tracks of two types: acoustic-image sub-tracks and sound-effect sub-tracks. An acoustic-image sub-track applies acoustic-image trajectory processing to some or all of the audio materials on its parent audio track; a sound-effect sub-track applies audio effect processing to them. Within this step, the following further steps may be performed:
S301: add an acoustic-image sub-track and acoustic-image materials. Add one or more acoustic-image materials 121, 122 to the acoustic-image sub-track; a clip representing each material is generated on the sub-track, and the length of sub-track it occupies matches the material's total duration.
S302: edit the acoustic-image sub-track attributes. As with audio tracks, the editable attributes include track lock and track mute.
S303: edit the acoustic-image material attributes. As with audio materials, these include head position, tail position, start time, end time, total duration, and play length.
Through the acoustic-image material on the sub-track, acoustic-image trajectory processing can be applied — during the period between the material's start and end times — to the signal output by the channel corresponding to the sub-track's parent audio track. Adding acoustic-image materials of different types to the sub-track therefore applies different types of trajectory processing to the corresponding output channel; and adjusting each material's head position, tail position, start time, and end time adjusts when the trajectory processing begins and how long the trajectory effect lasts.
An acoustic-image material differs from an audio material in that the latter represents audio data, whereas acoustic-image trajectory data describes, over a period of set length, how the output level of every virtual speaker node in the speaker distribution map changes over time, so that the acoustic image formed by those node levels runs along a preset path or stays still. In other words, the trajectory data contains the output-level change data of all speaker nodes in the distribution map over the period of set length. The types of trajectory data are fixed-point, variable-track, and variable-domain acoustic-image trajectory data; the type of the trajectory data determines the type of the acoustic-image material, and the total duration of the acoustic-image movement it describes determines the time difference between the material's head and tail positions, i.e. the material's total duration. Acoustic-image trajectory processing means adjusting the actual output level of each physical speaker according to the trajectory data of its corresponding speaker node, so that the acoustic image of the physical speaker system runs along the set path, or stays still, during the period of set length.
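The core of trajectory data — per-speaker level curves that make a virtual image move — can be sketched with simple inverse-distance panning. This is illustrative only: the embodiment interpolates over triangles of speaker nodes (Fig. 21), which this sketch does not reproduce, and all names here are hypothetical:

```python
import math

def levels_at(image_pos, speakers, rolloff=1.0):
    """Per-speaker output levels for one acoustic-image position.

    Inverse-distance weighting, normalised so the levels sum to 1.
    """
    weights = []
    for (sx, sy) in speakers:
        d = math.hypot(sx - image_pos[0], sy - image_pos[1])
        weights.append(1.0 / (d + rolloff))
    total = sum(weights)
    return [w / total for w in weights]

def trajectory_data(path, speakers):
    """Sample a moving image path (t, x, y) into per-speaker level curves —
    the 'output level data that each speaker node changes over time'."""
    return [(t, levels_at((x, y), speakers)) for (t, x, y) in path]

speakers = [(0.0, 0.0), (10.0, 0.0)]          # two speakers, stage left and right
data = trajectory_data([(0.0, 0.0, 0.0), (1.0, 10.0, 0.0)], speakers)
print([round(l, 2) for l in data[0][1]])      # [0.92, 0.08] — image sits at the left speaker
```

As the path point moves from the left speaker to the right one, the level weighting flips, which is what drags the perceived image across the venue.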
S304: add sound-effect sub-tracks. The sound-effect sub-track types include volume-and-gain sub-tracks 13, 22 and EQ sub-tracks 14, 15, 21; each audio track may be given one volume-and-gain sub-track and one or more EQ sub-tracks. The volume-and-gain sub-track adjusts the signal level of the output channel corresponding to its parent audio track, and an EQ sub-track applies EQ processing to the signal output on that channel.
S305: edit the sound-effect sub-track attributes. Besides track lock, track mute, and the track identity, these include the effect-processing parameters matching the sub-track's type: a volume-and-gain sub-track carries output-level adjustment parameters, and an EQ sub-track carries EQ processing parameters. Modifying a sub-track's effect parameters adjusts the sound of the output channel corresponding to its parent audio track.
S206: save the data, or generate — from the attributes of the audio tracks and their sub-tracks and from the attributes of the audio and acoustic-image materials — the control instructions addressing each audio material's source file, and use those instructions to control the playback of the source files and their acoustic-image and effect processing.
The control instructions cover whether to call (play) a material's audio source file, the start and end times of its playback (measured on the timeline), and the source file's acoustic-image and effect processing; the specific instructions correspond to the attributes of each audio track and its attached sub-tracks, of the audio materials, and of the acoustic-image materials. That is, an audio track never directly calls or processes a material's source file — it handles only the property file corresponding to that source file, and the source file is controlled indirectly by editing the property file, adding or editing acoustic-image materials, and adjusting the attributes of the audio track and its sub-tracks.
For example, an audio material added to an audio track enters the playlist and is played when the track starts playing. Editing the track's mute attribute controls whether the track and its attached sub-tracks are muted (effective); setting the track lock attribute means that, apart from a few individual attributes such as mute and hiding sub-tracks, no other attribute, and no material position or material attribute on the track, can be modified (the locked state). See the description above for more detail.
As shown in Fig. 4 and Fig. 12, the show integrated control method of this embodiment may also add video playback control (corresponding to the video control module described below), which comprises the following steps:
S401: add a video track. Add, in the display interface, a video track 4 (region) parallel to and aligned with the timeline; the video track corresponds to one controlled device — in the present invention, a video server.
S402: edit the video track attributes. The editable attributes include track lock and track mute, and behave like the audio track attributes.
S403: add video materials. Add one or more video materials 41, 42, 43, 44 to the video track; a clip representing each material is generated on the track, and the length of track it occupies matches the material's total duration. Before a video material is added, the video material list is first obtained from the video server, and the material to be placed on the track is then chosen from that list. Once a video material has been added to the video track, a video property file corresponding to it is generated; the integrated control console controls the instructions sent to the video server by editing this property file, rather than by directly calling or editing the material's source file, which guarantees the safety of the source files and the stability of the console.
S404: edit the video material attributes, which include head position, tail position, start time, end time, total duration, and play length. They behave like the audio material attributes; video materials can likewise be moved horizontally, cut, and spliced, and a group of physical keys corresponding to the video track can be added to the console so that video material attributes can be adjusted by hand.
S405: save the data, or generate — from the video track attributes and the video material attributes — the control instructions addressing each video material's source file, and use those instructions to control the playback of the source files. As with the audio tracks, the specific instructions correspond to the attributes of the video track and its materials.
As shown in Fig. 5 and Fig. 12, the show integrated control method of this embodiment may also add lighting control (corresponding to the lighting control module described below), which comprises the following steps:
S501: add a light track. Add, in the display interface, a light track 3 (region) parallel to and aligned with the timeline; the light track corresponds to one controlled device — in the present invention, a lighting-network signal adapter (such as an Art-Net network card).
S502: edit the light track attributes. The editable attributes include track lock and track mute, and behave like the audio track attributes.
S503: add light materials. Add one or more light materials 31, 32, 33 to the light track; a clip representing each material is generated on the track, and the length of track it occupies matches the material's total duration. As with audio and video materials, the light track does not load the light material itself: it only generates a property file corresponding to the material's source file, and issues control instructions through the property file to control the output of the source file.
A light material is lighting-network control data of a certain duration — for example Art-Net data, which encapsulates DMX data. A light material can be generated as follows: after a light program has been arranged on a conventional lighting console, the integrated control console connects its lighting-network interface to the one on the conventional console and records the lighting control signal that console outputs; during recording the integrated console must stamp timecode onto the recorded signal, so that it can later be edited and controlled on the light track.
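Stamping timecode onto recorded DMX frames can look like the sketch below. The 4-byte millisecond header is a hypothetical container format invented for illustration — it is not Art-Net framing, which carries DMX inside its own packet structure:

```python
import struct

def timestamp_dmx(frames):
    """Attach a timecode to each recorded DMX frame.

    `frames` yields (seconds_since_start, bytes_of_512_channel_levels);
    the timestamped records can be laid on a light track and scrubbed
    like an audio material.
    """
    records = []
    for t, dmx in frames:
        assert len(dmx) == 512                      # one full DMX universe
        header = struct.pack(">I", int(t * 1000))   # milliseconds, big-endian
        records.append(header + dmx)
    return records

frames = [(0.0, bytes(512)), (0.04, bytes(512))]    # two frames, ~25 fps apart
recs = timestamp_dmx(frames)
print(len(recs), len(recs[0]))                       # 2 516
```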
S504: edit the light material attributes, which include head position, tail position, start time, end time, total duration, and play length. They behave like the audio material attributes; light materials can likewise be moved horizontally, cut, and spliced, and a group of physical keys corresponding to the light track can be added to the console so that light material attributes can be adjusted by hand.
S505: save the data, or generate — from the light track attributes and the light material attributes — the control instructions addressing each light material's source file, and use those instructions to control the output of the source files. As with the other tracks, the specific instructions correspond to the attributes of the light track and its materials.
As shown in Fig. 6 and Fig. 12, the show integrated control method of this embodiment may also add device control (corresponding to the device control module described below), which comprises the following steps:
S601: add device tracks. Add, in the display interface, one or more device tracks 5 (regions) parallel to the timeline; each device track corresponds to one controlled device, for example a mechanical device. Before a device track is added, it must be confirmed that the controlled device has established a connection with the integrated control console. The console and the controlled devices establish connections over TCP: for example, the console is configured as the TCP server and each controlled device as a TCP client, and after a controlled device's TCP client joins the network it actively connects to the console's TCP server.
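The console-as-server arrangement can be sketched with standard sockets. The one-line name announcement used for registration here is a hypothetical protocol, not something the patent specifies:

```python
import socket
import threading
import time

def run_console_server(host="127.0.0.1", port=0):
    """The integrated console listens; each controlled device connects in.

    Returns the listening socket plus a dict of registered device sockets.
    """
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, port))                    # port 0 = pick a free port
    srv.listen()
    devices = {}

    def accept_loop():
        conn, _ = srv.accept()
        name = conn.recv(64).decode().strip()  # device announces its name
        devices[name] = conn

    threading.Thread(target=accept_loop, daemon=True).start()
    return srv, devices

srv, devices = run_console_server()
port = srv.getsockname()[1]
dev = socket.create_connection(("127.0.0.1", port))  # a device acting as TCP client
dev.sendall(b"winch-1\n")
time.sleep(0.5)                                      # let the accept thread register it
print(sorted(devices))                               # ['winch-1']
```

In a real deployment the accept loop would run continuously and each registered socket would later carry the timed control instructions of S606.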
S602: edit the device track attributes. The editable attributes include track lock and track mute, and behave like the audio track attributes; if a device track is muted, none of its attached control sub-tracks performs any operation.
S603: add control sub-tracks. Add one or more control sub-tracks attached to a given device track; each control sub-track is parallel to the timeline and corresponds to the controlled device of its parent device track.
S604: add control materials. Add control materials of the type matching the control sub-track's type; a clip representing each control material is generated on the sub-track, and the length of sub-track it occupies matches the material's total duration.
The control sub-track types include TTL control sub-tracks, relay control sub-tracks, and network control sub-tracks. Correspondingly, the materials that can be added to a TTL control sub-track include TTL materials 511, 512, 513 (such as TTL high-level and TTL low-level control materials); the materials that can be added to a relay sub-track include relay materials 521, 522, 523, 524 (such as relay-open and relay-close control materials); and the materials that can be added to a network control sub-track include network materials 501, 502, 503 (such as TCP/IP, UDP, RS-232, and RS-485 protocol communication control materials). Adding the corresponding control material lets the corresponding control instruction be issued — a control material essentially is a control instruction.
S605: edit the control material attributes, which include head position, tail position, and total duration. Moving a control material horizontally along its sub-track changes its head and tail positions, but not their relative distance on the timeline, i.e. the material's length does not change. A control material's head position is the timeline moment at which its control instruction begins to be sent to the corresponding controlled device, and its tail position is the moment at which sending stops.
Further, an association can be set between control materials on the same control sub-track, so that if the control command of the material whose head position falls earlier on the timeline has not executed successfully, the integrated control console will not issue — or the controlled device will not execute — the control instruction of the associated material whose head position falls later; this applies, for example, to the opening/closing and raising/lowering control of a curtain.
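The association rule — hold back a later command when its predecessor did not succeed — amounts to a short dependency check when issuing materials in timeline order. The dict keys and the `send` callback are hypothetical names chosen for this sketch:

```python
def issue_chain(materials, send):
    """Issue control materials in head-position order; an association means a
    later command is skipped when its predecessor did not report success."""
    ok = True
    issued = []
    for m in sorted(materials, key=lambda m: m["start"]):
        if m.get("depends_on_previous") and not ok:
            continue            # predecessor failed: hold this command back
        ok = send(m["cmd"])     # send returns True on success
        issued.append(m["cmd"])
    return issued

mats = [
    {"start": 0.0, "cmd": "curtain_open"},
    {"start": 5.0, "cmd": "platform_raise", "depends_on_previous": True},
]
print(issue_chain(mats, send=lambda c: True))                 # both commands issued
print(issue_chain(mats, send=lambda c: c != "curtain_open"))  # open fails -> raise held back
```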
Further, a guard time of a set length can be provided before and after a control material in a control sub-track, i.e. within the guard time no further control material can be added to the control sub-track, or no control command can be issued.
S606: save the data, or generate control instructions according to the attributes of the control track and its control sub-tracks and the attributes of the control materials, and send the control instructions to the corresponding controlled devices.
In addition, the present embodiment also provides a performance integrated control system. As shown in Fig. 7, the system includes an integrated control platform 70 and, optionally, an audio server 76, a video server 77, a lighting control module 78 and a device control module 79. The integrated control platform 70 includes a multi-track editing and playback module 71, which can perform one or more of the audio control, video control, lighting control and device control of the above performance integrated control method; the specific implementation steps are not repeated here. The multi-track editing and playback module includes an audio control module 72 and, optionally, a video control module 73, a lighting control module 74 and a device control module 75.
As shown in Fig. 8, the audio control module 72 includes an audio track adding module 81, an audio track attribute editing module 82, an audio material adding module 83, an audio material attribute editing module 84, an audio sub-track adding module 85, and a save-data/output-audio-control-instruction module 86. The functions realized by these modules correspond one-to-one with the aforementioned steps S201 to S206 and are not repeated here; the same applies below.
Further, the audio playback control principle of the performance integrated control system is shown in Fig. 13. The integrated control system further includes a quick playback editing module, a physical input module and the multi-track playback editing module. The quick playback editing module is used for editing audio materials in real time and issuing the corresponding control instructions to the audio server 76 so that it plays the source files corresponding to the audio materials. The physical input module corresponds to the physical operation keys on the integrated control platform 71 and is used for real-time tuning control of audio sources input to the control platform from outside.
Correspondingly, the audio server is provided with a mixing matrix module, a trajectory matrix module, a 3x1 output mixing module and a physical output module. The mixing matrix module can receive the audio signals output by the audio source files in the audio server that are called, in the form of control commands, by the quick playback editing module and the multi-track playback editing module, as well as the audio signals output by the physical input module; similarly, the trajectory matrix module can also receive each of the above audio inputs. The mixing matrix performs mixing processing on each audio input and outputs the result to the output mixing module; the trajectory matrix module performs acoustic image trajectory processing on each audio input and outputs the result to the output mixing module. The output mixing module can receive the audio outputs from the mixing matrix module, the trajectory matrix module and the physical input module, and after 3x1 mixing processing outputs them through the physical output interfaces of the physical output module. Here, acoustic image trajectory processing refers to adjusting the output level of each speaker entity according to the acoustic image trajectory data, so that within a period of set length the acoustic image of the speaker entity system moves along a set path or remains stationary.
In the present embodiment, the source files of the audio materials are stored on the audio server outside the integrated control platform. The multi-track playback editing module never calls or processes the source files of the audio materials directly; it only handles the attribute files corresponding to the audio source files, and controls the audio source files indirectly by editing and adjusting the attribute files of the source files, adding/editing acoustic image materials, and editing the attributes of the audio tracks and their sub-tracks. The output channel corresponding to each audio track therefore outputs only control signals/instructions for the audio source files; the audio server that receives the control instructions then performs the various processing on the audio source files.
As shown in Fig. 14, the multi-track playback editing module receives a list of valid audio materials from the audio server 76 and does not process the audio source files directly. The audio source files are stored in the audio server, which, after receiving the corresponding control commands, calls the audio source files and performs the various audio-effect processing, e.g. entering the mixing matrix module for mixing processing, or entering the trajectory matrix module for trajectory processing. An acoustic image material is in fact also a control command; it can be stored in the integrated control platform 71 or uploaded to the audio server.
As shown in Fig. 9, the video control module 73 includes a video track adding module 91, a video track attribute editing module 92, a video material adding module 93, a video material attribute editing module 94, and a save-data/output-video-control-instruction module 95. The functions realized by these modules correspond one-to-one with the aforementioned steps S401 to S405.
Further, the video editing and playback control principle of the performance integrated control system is shown in Fig. 15. The integrated control platform does not execute the source files of the video materials directly; instead, it obtains the video material list and the corresponding attribute files and issues control instructions to the video server, and the video server then performs playback and effect operations on the source files of the video materials according to the control instructions.
As shown in Fig. 10, the lighting control module 74 includes a light track adding module 110, a light track attribute editing module 120, a light material adding module 130, a light material attribute editing module 140, and a save-data/output-lighting-control-instruction module 150. The functions realized by these modules correspond one-to-one with the aforementioned steps S501 to S505.
Further, the lighting control principle of the performance integrated control system is shown in Fig. 16. The integrated control platform is also provided with a light signal recording module for recording the light control signals output by the lighting console and for stamping time codes on the light control signals recorded during the recording process, so that they can be edited and controlled on the light track.
As shown in Fig. 11, the device control module 75 includes a device track adding module 151, a device track attribute editing module 152, a control sub-track adding module 153, a control material adding module 154, a control material attribute editing module 155, and a save-data/output-device-control-instruction module 156. The functions realized by these modules correspond one-to-one with the aforementioned steps S601 to S606.
Further, the device control principle of the performance integrated control system is shown in Fig. 17. The various device control signals output by the integrated control platform are output through the protocol interfaces on a device adapter to the corresponding controlled devices.
In addition, the integrated control platform may also include an acoustic image trajectory data generation module for producing (generating) acoustic image trajectory data (i.e. acoustic image materials). The acoustic image trajectory data obtained through this module is called by the multi-track playback editing module so as to control the acoustic image trajectory of the trajectory matrix module of the audio server. Further, the present embodiment provides a variable-track acoustic image trajectory control method. In this control method, a control host (e.g. the integrated control platform or the audio server) configures the output level value of each speaker node of an entity speaker system so that, within a set total duration, the acoustic image moves in a set manner or remains stationary. As shown in Fig. 18, the control method includes:
S101: generate acoustic image trajectory data;
S102: within the total duration corresponding to the acoustic image trajectory data, adjust the output level of each speaker entity according to the acoustic image trajectory data;
S103: within the total duration, superimpose the input level of the signal input to each speaker entity with the output level of the corresponding speaker entity, to obtain the level actually output by each speaker entity.
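As a non-limiting illustration of steps S101 to S103, the following sketch generates toy trajectory data and superimposes it on an input level. The linear sweep and all names are invented for illustration; levels are assumed to simply add in dB, with negative infinity representing a silenced node.

```python
# Minimal sketch of steps S101-S103: per-speaker-node level superposition.
# Assumption: levels are in dB and the trajectory gain adds to the input
# level; -inf represents a silenced node.
import math

def generate_trajectory(num_nodes, num_frames):
    """S101 (toy version): a gain in dB per node per frame.
    Here the acoustic image sweeps linearly across the node row."""
    data = []
    for f in range(num_frames):
        pos = f * (num_nodes - 1) / max(num_frames - 1, 1)
        frame = []
        for n in range(num_nodes):
            d = abs(n - pos)
            # nodes near the image position output level, others are silent
            frame.append(-6.0 * d if d < 2 else -math.inf)
        data.append(frame)
    return data

def actual_output(input_db, trajectory_frame):
    """S102+S103: superimpose the input level with each node's gain."""
    return [input_db + g for g in trajectory_frame]

traj = generate_trajectory(num_nodes=4, num_frames=5)
print(actual_output(-10.0, traj[0]))  # frame 0: image sits at node 0
```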
Acoustic image trajectory data refers to the output level data of each speaker node changing over time within a period of set length (i.e. the total duration of the acoustic image), such that in the virtual speaker distribution map on the integrated control platform, the acoustic image formed by the output levels of the virtual speaker nodes moves along a preset path or remains stationary. That is, the acoustic image trajectory data contains the output-level change data of all the speaker nodes in the speaker distribution map within the period of set length. For any given speaker node, its output level changes over time within the set period; it may also be zero, negative, or even negative infinity, with negative infinity preferred.
Each speaker node corresponds to one speaker entity in the entity speaker system, and each speaker entity includes one or more speakers located at the same position; i.e. each speaker node can correspond to one or more co-located speakers. In order for the entity speaker system to accurately reproduce the acoustic image path, the position distribution of the virtual speaker nodes in the speaker distribution map should correspond to the position distribution of the speaker entities of the entity speaker system; in particular, the relative positional relationships between the speaker nodes should correspond to the relative positional relationships between the speaker entities.
The level actually output by a speaker entity is the superposition of the level of the input signal with the output level, in the above acoustic image trajectory data, of the speaker node corresponding to that entity. The former is a characteristic of the input signal; the latter can be regarded as a characteristic of the speaker entity itself. At any moment, different input signals have different input levels, while for a given speaker entity there is only one output level. It can therefore be understood that acoustic image trajectory processing is processing of the output level of each speaker entity, so as to form the preset acoustic image path effect (including a stationary acoustic image).
The superposition of the input level and the output level of a speaker entity can be processed before the audio signal actually enters the speaker entity, or after it enters the speaker entity. This depends on how the links of the overall sound-reinforcement system are constituted, and on whether the speaker entity has a built-in audio signal processing module, such as a DSP unit.
The types of acoustic image trajectory data include: fixed-point acoustic image data, variable-track acoustic image trajectory data, and variable-domain acoustic image trajectory data. When simulating the generation of acoustic image trajectory data on the integrated control platform, in order to conveniently control the speed and course of the acoustic image, the present embodiment represents the moving path of the acoustic image by line segments connected in sequence between several discretely distributed acoustic image trajectory control points in the speaker distribution map; these discretely distributed control points determine both the moving path of the acoustic image and its overall moving time.
A fixed-point acoustic image refers to the situation in which, within a period of set length, one or more speaker nodes selected in the speaker distribution map continuously output level, while the output level values of the unselected speaker nodes are zero or negative infinity. Correspondingly, fixed-point acoustic image data refers to the output level data of each speaker node changing over time under these conditions. For a selected speaker node, its output level is continuous within the set time (it may also fluctuate up and down); for an unselected speaker node, its output level remains negative infinity within the set time.
A variable-track acoustic image refers to the situation in which, within a period of set length, each speaker node outputs level according to a certain rule in order to make the acoustic image move along a preset path. Correspondingly, variable-track acoustic image trajectory data refers to the output level data of each speaker node changing over time, within the period of set length, in order to make the acoustic image move along the preset path. The acoustic image's moving path need not be exactly accurate, and the duration of the acoustic image's movement will not be very long; it is only necessary to roughly construct a moving effect of the acoustic image that the audience can recognize.
A variable-domain acoustic image refers to the situation in which, within a period of set length, the output level of each speaker node changes according to a certain rule in order to make the acoustic image move through preset areas. Correspondingly, variable-domain acoustic image trajectory data refers to the output level data of each speaker node changing over time, within the time of set length, in order to make the acoustic image move through the preset areas.
As shown in Fig. 19, variable-track acoustic image trajectory data can be obtained by the following method:
S201: set speaker nodes: in the speaker distribution map 10, add or delete speaker nodes 11; see Fig. 20.
S202: modify speaker node attributes: the attributes of a speaker node include the speaker coordinates, speaker type, corresponding output channel, initialization level, speaker name, etc. A speaker node is represented in the speaker distribution map by a speaker icon, and its coordinate position can be changed by moving the icon. The speaker type refers to a full-range speaker or an ultra-low-frequency speaker; the specific types can be divided according to actual conditions. Each speaker node in the speaker distribution map is assigned an output channel, and each output channel corresponds to one speaker entity in the entity speaker system; each speaker entity includes one or more co-located speakers, i.e. each speaker node can correspond to one or more co-located speakers. In order to reproduce the acoustic image moving path designed in the speaker distribution map, the position distribution of the speaker entities should correspond to the position distribution of the speaker nodes in the map.
S203: divide triangular regions: as shown in Fig. 20, divide the speaker distribution map into multiple triangular regions according to the distribution of the speaker nodes, the three vertices of each triangular region being speaker nodes. The triangular regions do not overlap, no triangular region contains any other speaker node, and each speaker node corresponds to an output channel (or audio playback device);
Further, auxiliary speaker nodes can also be set to assist in determining the triangular regions; an auxiliary speaker node has no corresponding output channel and outputs no level;
S204: set acoustic image trajectory control points and the moving path: set, in the speaker distribution map, the moving path 12 of the acoustic image changing over time, and several acoustic image trajectory control points 14 located on this moving path. The moving path and the trajectory control points can be set by the following methods:
1. Point-by-point construction: successively determine the (coordinate) positions of several acoustic image trajectory control points in the speaker distribution map; the control points are connected in sequence to form the acoustic image moving path. The moment corresponding to the first determined control point is zero, and the moment corresponding to each subsequent control point is the time elapsed from determining the first control point to determining the current one. For example, the control points can be clicked on the speaker distribution map with a sign (e.g. a mouse pointer); the time elapsed from clicking to determine one control point to clicking to determine the next determines the time span between the two trajectory points, and the moment corresponding to each trajectory point is finally calculated;
2. Drag generation: drag a sign (e.g. a mouse pointer) along an arbitrary straight line, curve or broken line in the speaker distribution map so as to determine the acoustic image moving path. While the sign is dragged, starting from the initial position, an acoustic image trajectory control point is generated on the moving path at every time interval Ts. In the present embodiment Ts is 108 ms;
S205: edit acoustic image trajectory control point attributes: the attributes of a control point include its coordinate position, its corresponding moment, and the time required to reach the next control point. One or more of the following can be modified: the moment corresponding to a selected control point, the time required from the selected control point to the next one, and the total duration corresponding to the acoustic image moving path.
Assume the moment corresponding to acoustic image trajectory control point i is ti, the original time required for the acoustic image to move from control point i to the next trajectory point i+1 is ti', and the total duration corresponding to the acoustic image moving path is t. This means that the time required for the acoustic image to move from the initial position to control point i is ti, and the time required for the acoustic image to traverse the entire path is t.
If the moment corresponding to a certain acoustic image trajectory control point is modified, the moments corresponding to all control points before that point, and the total duration of the acoustic image moving path, must all be adjusted. Assume the original moment corresponding to control point i is ti and the modified moment is Ti; the original moment corresponding to any control point j before control point i is tj and its adjusted moment is Tj; the original total duration of the moving path is t and the modified total duration is T. Then Tj = tj + tj/ti*(Ti - ti), i.e. Tj = tj*Ti/ti, and T = t + (Ti - ti). The adjustment method used by the present invention is simple and reasonable, and the amount of calculation is very small.
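As a non-limiting sketch of this adjustment, the following function rescales the earlier control-point moments proportionally; shifting the later points by the added time (Ti - ti), so that their segment lengths are preserved and the total becomes T = t + (Ti - ti), is an interpretation of the text rather than an explicit formula in it.

```python
def modify_point_moment(times, i, new_ti):
    """Change the moment of control point i from ti to new_ti.
    Earlier points (j <= i): rescaled proportionally, Tj = tj*Ti/ti
    (equivalently Tj = tj + tj/ti*(Ti - ti)).
    Later points (assumption): shifted by (Ti - ti), preserving their
    segment lengths, so the total duration becomes T = t + (Ti - ti)."""
    ti = times[i]
    delta = new_ti - ti
    scale = new_ti / ti
    return [tj * scale if j <= i else tj + delta
            for j, tj in enumerate(times)]

# Control points at 0, 2, 4, 6 s; move point 2 from 4 s to 6 s.
print(modify_point_moment([0.0, 2.0, 4.0, 6.0], 2, 6.0))
# [0.0, 3.0, 6.0, 8.0]
```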
It can be understood that, after the moment corresponding to any acoustic image trajectory control point is modified, the increased or decreased time can be distributed, in identical duration proportion, among all the control points before that point (i.e. the foregoing manner), or it can be distributed, in duration proportion, among all the control points on the moving path. When the latter manner is used, assume the time to be added at control point i is ki; then the moment corresponding to each control point is modified to Ti = ti + ki*ti/t. That is, the time ki is not allocated entirely to control point i; each control point is allocated a portion of the time in the ratio of its own moment to the total duration of the moving path.
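The latter distribution manner can be sketched as follows; it amounts to a uniform rescale of every control-point moment by the factor (1 + k/t). Names are illustrative, and the total duration t is passed in explicitly.

```python
def distribute_increase(times, k, t):
    """Distribute an added time k over every control point on the path,
    each in the ratio of its moment ti to the path total duration t:
    Ti = ti + k*ti/t, i.e. a uniform rescale by (1 + k/t)."""
    return [ti + k * ti / t for ti in times]

# Total duration 8 s, add 2 s: every moment scales by 1.25.
print(distribute_increase([0.0, 2.0, 4.0, 8.0], 2.0, 8.0))
# [0.0, 2.5, 5.0, 10.0]
```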
If the time required from a certain acoustic image trajectory control point to the next one is adjusted, the moment corresponding to the next control point, and the total duration of the acoustic image moving path, must be adjusted. Assume the original moment corresponding to control point i is ti and the modified moment is Ti; the original time required for the acoustic image to move from control point i to the next trajectory point i+1 is ti', and the required time after modification is Ti'; the original total duration of the moving path is t and the modified total duration is T. Then Ti+1 = Ti + Ti' and T = t + (Ti - ti) + (Ti' - ti').
If the total duration corresponding to the acoustic image moving path is modified, the moment corresponding to each control point on the path, and the time required from each point to the next, are all adjusted. Assume the original moment corresponding to control point i is ti and the adjusted moment is Ti; the original time required for the acoustic image to move from control point i to the next trajectory point i+1 is ti' and the adjusted required time is Ti'; the original total duration of the moving path is t and the modified total duration is T. Then Ti = ti/t*(T - t) + ti and Ti' = ti'/t*(T - t) + ti'.
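These two formulas reduce to scaling every moment and every segment time by the same factor T/t, which the following sketch makes explicit (names are illustrative):

```python
def rescale_total_duration(times, seg_times, t, new_t):
    """Change the path total duration from t to T: every control-point
    moment and every segment time is scaled by the same factor,
    Ti = ti/t*(T - t) + ti = ti*T/t, and likewise Ti' = ti'*T/t."""
    f = new_t / t
    return [ti * f for ti in times], [s * f for s in seg_times]

moments, segs = rescale_total_duration([0.0, 2.0, 4.0], [2.0, 2.0], 4.0, 8.0)
print(moments, segs)  # [0.0, 4.0, 8.0] [4.0, 4.0]
```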
S206: record the variable-track acoustic image trajectory data: record the output level value of each speaker node at each moment while the acoustic image moves along the set moving path.
For a variable-track acoustic image, the output level values of the speaker nodes involved in generating the acoustic image can be calculated by the following method. As shown in Fig. 21, assume that acoustic image trajectory point i (not necessarily a trajectory control point) is located within a triangular region enclosed by three speaker nodes, and that the moment corresponding to trajectory point i is ti. At this moment the three speaker nodes at the vertices each output a level of a certain magnitude, while the output level values of the other speaker nodes in the speaker distribution map are zero or negative infinity, so as to guarantee that at moment ti the acoustic image in the speaker distribution map is located at trajectory point i. For the speaker node A at any vertex of the triangular region, the output level at moment ti is dBA1 = 10*lg(LA'/LA), where LA' is the distance from the acoustic image trajectory point to the straight line formed by the other two vertices of the triangular region, and LA is the distance from speaker node A to that same straight line;
Further, an initialization level value can also be set for each speaker node. Assume the initialization level of the above speaker node A is dBA; then at the above moment ti, the output level of speaker node A is dBA1' = dBA + 10*lg(LA'/LA). The output levels at moment t of the remaining speaker nodes with initialization levels set follow by analogy.
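A non-limiting sketch of the per-vertex level formula above, assuming plain point-to-line distances in the plane (the geometry helper and all names are illustrative):

```python
import math

def dist_point_to_line(p, a, b):
    """Perpendicular distance from point p to the line through a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    num = abs((by - ay) * px - (bx - ax) * py + bx * ay - by * ax)
    den = math.hypot(bx - ax, by - ay)
    return num / den

def vertex_level(image_pt, vertex, other1, other2, init_db=0.0):
    """Output level of the speaker node at `vertex` when the acoustic
    image is at `image_pt` inside the triangle:
    dB = init_db + 10*lg(LA'/LA), with LA' the image's distance to the
    opposite edge and LA the vertex's distance to the same edge."""
    l_img = dist_point_to_line(image_pt, other1, other2)
    l_vtx = dist_point_to_line(vertex, other1, other2)
    return init_db + 10 * math.log10(l_img / l_vtx)

# Image at the vertex itself -> LA' == LA -> 0 dB above the init level.
print(vertex_level((0.0, 1.0), (0.0, 1.0), (-1.0, 0.0), (1.0, 0.0)))  # 0.0
```

As the image approaches the opposite edge, LA' shrinks and the vertex's level falls toward negative infinity, which matches the silencing of nodes the image moves away from.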
Further, as shown in Fig. 20, if some acoustic image trajectory points (or part of the acoustic image moving path) do not fall within any triangular region formed by three speaker nodes (e.g. at the end of the motion trajectory), auxiliary speaker nodes 13 can be set so as to arrange new triangular regions, guaranteeing that all trajectory points fall within corresponding triangular regions. An auxiliary speaker node has no corresponding output channel and outputs no level; it is only used to assist in determining triangular regions;
Further, when recording the output level values of the speaker nodes, the recording can be continuous or at a certain frequency. The latter means recording the output level value of each speaker node once at every certain time interval. In the present embodiment, the output level values of the speaker nodes while the acoustic image moves along the set path are recorded at a frequency of 25 frames/second or 30 frames/second. Recording the output level data at a certain frequency reduces the data volume, speeds up the processing when acoustic image trajectory processing is performed on the input audio signal, and guarantees the real-time performance of the acoustic image moving effect.
As shown in Fig. 22, variable-domain acoustic image trajectory data can be obtained by the following method:
S501: set speaker nodes: in the speaker distribution map, add or delete speaker nodes.
S502: modify speaker node attributes: the attributes of a speaker node include the speaker coordinates, speaker type, corresponding output channel, initialization level, speaker name, etc. A speaker node is represented in the speaker distribution map by a speaker icon, and its coordinate position can be changed by moving the icon. The speaker type refers to a full-range speaker or an ultra-low-frequency speaker; the specific types can be divided according to actual conditions. Each speaker node in the speaker distribution map is assigned an output channel, and each output channel corresponds to one speaker entity in the entity speaker system; each speaker entity includes one or more co-located speakers, i.e. each speaker node can correspond to one or more co-located speakers. In order to reproduce the acoustic image moving path designed in the speaker distribution map, the position distribution of the speaker entities should correspond to the position distribution of the speaker nodes in the map.
S503: set the acoustic image moving path and divide acoustic image regions: set multiple acoustic image regions in the speaker distribution map, each containing several speaker nodes, and set a moving path that traverses each acoustic image region. That is, each acoustic image region is regarded as one "acoustic image point", and the acoustic image moves from one region to the next until it has traversed all the regions in sequence. Mutually non-overlapping acoustic image regions of arbitrary shape can be set anywhere in the speaker distribution map, or the regions can be set quickly in the following manner:
Set a straight-line acoustic image moving path in the speaker distribution map, and set several acoustic image regions along the moving path, the boundary of each region being approximately perpendicular to the moving direction of the acoustic image. These regions can be arranged side by side or at intervals, but in order to guarantee the continuity of the acoustic image's movement, the side-by-side arrangement is preferred. The total area of these regions is less than or equal to the area of the entire speaker distribution map. When dividing the acoustic image regions, either equal-width or unequal-width division can be used.
In a specific operation, the acoustic image moving path can be set and the acoustic image regions divided at the same time by dragging a sign (e.g. a mouse pointer). Specifically: the sign is dragged in the speaker distribution map from a certain start position along some direction to an end position, and several acoustic image regions are divided evenly according to the straight-line distance from the start position to the end position. The boundary of each region is perpendicular to the straight line from the start position to the end position, and the widths of the regions are equal. The time elapsed while the sign is dragged from the start position to the end position is the total moving duration of the acoustic image.
Assume the straight-line distance of the sign from the start position to the end position is R, the total duration used is t, and the number of evenly divided acoustic image regions is n; then n acoustic image regions of width R/n are automatically generated, and the time corresponding to each acoustic image region is t/n.
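The even division can be sketched as follows. Treating each region's "moment" as the time at which the image reaches that region's far boundary is an assumption made for the illustration; names are invented.

```python
def divide_regions(r, t, n):
    """Evenly divide a straight drag of length r and duration t into n
    acoustic image regions: each region has width r/n and duration t/n.
    Returns (start, end, moment) per region; the moment is taken here as
    the time at which the image reaches the region's far boundary."""
    w, d = r / n, t / n
    return [(k * w, (k + 1) * w, (k + 1) * d) for k in range(n)]

print(divide_regions(12.0, 6.0, 4))
# [(0.0, 3.0, 1.5), (3.0, 6.0, 3.0), (6.0, 9.0, 4.5), (9.0, 12.0, 6.0)]
```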
S504: edit the acoustic image region time attributes, including the moment corresponding to each acoustic image region, the time required from the current region to the next, and the total moving duration of the acoustic image. The editing of the acoustic image region attributes is similar to the editing of the variable-track acoustic image trajectory point attributes. If the moment corresponding to a certain region is modified, the moments corresponding to all regions before it, and the total moving duration, must be adjusted. If the time required from a certain region to the next is adjusted, the moment corresponding to the next region, and the total moving duration, must be adjusted. If the total moving duration is modified, the moment corresponding to each region on the moving path, and the time required from each region to the next, are all adjusted.
S505: record the variable-domain acoustic image trajectory data: record the output level value of each speaker node at each moment while the acoustic image moves through the acoustic image regions in sequence along the set moving path.
For a variable-domain acoustic image, the output level values of the speaker nodes involved in generating the acoustic image can be calculated by the following method.
As shown in Fig. 23, assume that the total moving duration of the acoustic image of a certain variable-domain trajectory is t, divided into 4 acoustic image regions of equal width; the acoustic image moves along a straight moving path from some acoustic image region 1 (region i) to the next region 2 (region i+1). The midpoint of the segment of the moving path lying within region 1 is acoustic image trajectory control point 1 (control point i), and the midpoint of the segment lying within region 2 is control point 2 (control point i+1). While the acoustic image trajectory point P moves from the current region 1 to the next region 2, the output level of each speaker node in region 1 is the domain-1 dB (domain-i dB), the output level of each speaker node in region 2 is the domain-2 dB (domain-(i+1) dB), and the output levels of the speaker nodes outside these two regions are zero or negative infinity.
Domain1 dB value = 10 · ln(η) ÷ 2.3025851
Domain2 dB value = 10 · ln(β) ÷ 2.3025851
where l12 is the distance from acoustic image trajectory control point 1 to acoustic image trajectory control point 2, l1P is the distance from acoustic image trajectory control point 1 to the acoustic image trajectory point P, and lP2 is the distance from the current acoustic image trajectory point P to acoustic image trajectory control point 2. From the above formulas it can be seen that each acoustic image trajectory point has output levels in two acoustic image regions; however, when the acoustic image trajectory point coincides with an acoustic image trajectory control point, only one acoustic image region has an output level. For example, when acoustic image trajectory point P moves to acoustic image trajectory control point 2, only acoustic image region 2 has an output level, and the output level of acoustic image region 1 is zero.
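The crossfade above can be sketched as follows. The ratio definitions appear earlier in the specification, so it is an assumption here that η = lP2/l12 and β = l1P/l12; note also that 2.3025851 ≈ ln 10, so 10·ln(x) ÷ 2.3025851 is simply 10·log₁₀(x):

```python
import math

def region_levels(l12, l1p, lp2):
    """Crossfade levels (dB) of the two regions adjacent to trajectory point P.

    Assumption: eta = lp2 / l12 and beta = l1p / l12 (the ratio definitions
    are given elsewhere in the specification).  Since 2.3025851 is ln(10),
    10 * ln(x) / 2.3025851 equals 10 * log10(x).
    """
    eta = lp2 / l12    # remaining fraction of the segment ahead of P
    beta = l1p / l12   # fraction of the segment already covered by P
    domain1_db = 10 * math.log(eta) / 2.3025851 if eta > 0 else float("-inf")
    domain2_db = 10 * math.log(beta) / 2.3025851 if beta > 0 else float("-inf")
    return domain1_db, domain2_db
```

At the segment midpoint both regions sit at about −3 dB, and when P reaches control point 2 the level of region 1 falls away entirely, matching the behavior described above.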
When recording the output level values of the speaker nodes of a variable-domain acoustic image track, the values may be recorded continuously or at a fixed frequency. The latter means recording the output level value of each speaker node once per fixed time interval. In the present embodiment, the output level values of the speaker nodes are recorded at a frequency of 25 frames/second or 30 frames/second while the acoustic image travels along the set path. Recording the output level data of the speaker nodes at a fixed frequency reduces the data volume, speeds up acoustic image trajectory processing of the input audio signal, and ensures that the acoustic image travel effect remains real-time.
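A minimal sketch of the fixed-frequency recording described above; `level_fn` is a hypothetical callback standing in for the per-moment level computation, and the 25/30 fps figures come from the embodiment:

```python
def record_track_data(level_fn, total_duration, fps=25):
    """Sample per-speaker output levels at a fixed frame rate.

    level_fn(t) is assumed to return a mapping {speaker_id: level_dB} at
    time t (seconds); fps of 25 or 30 matches the embodiment above.
    Returns a list of (timestamp, levels) frames covering [0, total_duration].
    """
    frames = []
    n = int(total_duration * fps)
    for k in range(n + 1):
        t = k / fps
        frames.append((t, level_fn(t)))
    return frames
```

For a 1-second travel at 25 fps this yields 26 frames (both endpoints included), illustrating how fixed-rate sampling bounds the data volume.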
As shown in Figure 24, the fixed-point acoustic image track data can be obtained by the following method:
S701: Set speaker nodes: in the speaker distribution map, add or delete speaker nodes.
S702: Modify speaker node attributes: the attributes of a speaker node include speaker coordinates, speaker type, corresponding output channel, initialization level, speaker name, etc.
S703: Set acoustic image trajectory points and the total duration: select one or more speaker nodes in the speaker distribution map; the selected speaker nodes serve as acoustic image trajectory points; set the dwell time of the acoustic image at each such speaker node.
S704: Record the fixed-point acoustic image track data: record the output level value of each speaker node at every moment within the above total duration.
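The dwell-time bookkeeping of steps S703–S704 might be sketched as follows. The helper name and the single-active-node simplification are assumptions for illustration, not the patent's implementation:

```python
def active_trajectory_point(trajectory, t):
    """Return the speaker node acting as the acoustic image trajectory point
    at time t (seconds), given dwell times per node.

    trajectory: list of (speaker_id, dwell_seconds) in playback order
    (hypothetical structure; assumes exactly one node is active at a time).
    Returns None once t exceeds the total duration.
    """
    elapsed = 0.0
    for speaker_id, dwell in trajectory:
        if t < elapsed + dwell:
            return speaker_id
        elapsed += dwell
    return None
```

Recording the fixed-point track data then amounts to evaluating this at each sampling instant over the total duration.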
In addition, the acoustic image track data of the present invention further includes speaker link data. A speaker link refers to an association operation performed on speaker nodes: when the active speaker node of the association outputs a level, the passive speaker nodes associated with it automatically output levels as well. The speaker link data are the output level differences of the passive speaker nodes relative to the active speaker node after the association operation has been applied to the selected speaker nodes. The speaker nodes to be linked should be relatively close to each other in spatial distribution.
As shown in Figure 25, the speaker link data can be obtained by the following method:
S801: Set speaker nodes: in the speaker distribution map, add or delete speaker nodes.
S802: Modify speaker node attributes: the attributes of a speaker node include speaker coordinates, speaker type, corresponding output channel, initialization level, speaker name, etc.
S803: Set the speaker node link relationship: link the selected ultra-low-frequency speaker node to multiple nearby full-range speaker nodes.
S804: Record the speaker link data: calculate and record the output level DerivedTrim of the ultra-low-frequency speaker, where DerivedTrim = 10 · log(Ratio) + DeriveddB and Ratio = Σ 10^((Trim-i + LinkTrim-i)/10). Here Trim-i is the output level value of full-range speaker node i itself, LinkTrim-i is the link level originally set between full-range speaker node i and the ultra-low-frequency speaker, DeriveddB is the initialization level value of the ultra-low-frequency speaker node, and DerivedTrim is the output level value of the ultra-low-frequency speaker node after it has been linked to the full-range speaker nodes. An ultra-low-frequency speaker node may be set to link to one or more full-range speaker nodes. After linking, when a full-range speaker node outputs a level, the ultra-low-frequency speaker node linked to it automatically outputs a level, so as to cooperate with the full-range speaker node in creating a particular sound effect. For the link of one ultra-low-frequency speaker node to one full-range speaker node, only factors such as distance, the nature of the sound source, and the required sound effect need be considered to set the output level at which the ultra-low-frequency speaker node automatically follows the playback of the full-range speaker node, i.e., the link level.
As shown in Figure 26, suppose the ultra-low-frequency speaker node 24 in the speaker distribution map is linked to three nearby full-range speaker nodes. The output level values of full-range speaker nodes 21, 22, and 23 themselves are Trim1, Trim2, and Trim3 respectively, and the link level values originally set between ultra-low-frequency speaker node 24 and full-range speaker nodes 21, 22, and 23 are LinkTrim1, LinkTrim2, and LinkTrim3 respectively. Let the overall level summation ratio be Ratio, let the initialization level value of ultra-low-frequency speaker node 24 itself be DeriveddB, and let the final output level value of ultra-low-frequency speaker node 24 be DerivedTrim. Then:
Ratio = 10^((Trim1+LinkTrim1)/10) + 10^((Trim2+LinkTrim2)/10) + 10^((Trim3+LinkTrim3)/10)
DerivedTrim = 10 · log(Ratio) + DeriveddB
When Ratio is greater than 1, the output level gain obtained by linking ultra-low-frequency speaker node 24 to these three full-range speaker nodes is taken as 0, i.e., its final output level value equals its initialization level value.
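The link-level computation can be sketched directly from the formulas above. Two assumptions are ours, not the patent's: that "log" denotes log₁₀, and that the clamp described in the preceding paragraph applies to the 10·log(Ratio) gain term:

```python
import math

def derived_trim(links, derived_db):
    """Linked subwoofer output level (dB) per the formulas above.

    links: list of (Trim_i, LinkTrim_i) pairs for the linked full-range nodes.
    Ratio = sum of 10**((Trim_i + LinkTrim_i)/10);
    DerivedTrim = 10*log10(Ratio) + DeriveddB, with the gain term clamped to
    0 dB when Ratio > 1 (assumption: this is the clamp the text describes).
    """
    ratio = sum(10 ** ((trim + link) / 10.0) for trim, link in links)
    gain = 10 * math.log10(ratio)
    if ratio > 1:
        gain = 0.0  # final level falls back to the initialization level
    return gain + derived_db
```

For a single link at Trim = −10 dB and LinkTrim = 0 dB the gain is −10 dB, so the subwoofer outputs 10 dB below its initialization level; with enough linked energy that Ratio exceeds 1, it simply outputs its initialization level.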

Claims (8)

1. An apparatus control method, characterized by comprising:
displaying a time axis on the display interface of an integrated control platform;
adding and/or deleting tracks for controlling corresponding performance devices, the tracks including device tracks;
editing track attributes;
adding materials;
editing material attributes;
the integrated control platform issuing corresponding control instructions according to each track attribute and its material attributes;
adding audio tracks: adding, on the display interface, one or more audio tracks parallel to and aligned with the time axis, each audio track corresponding to one output channel;
editing audio track attributes, the editable audio track attributes including track lock and track mute;
adding audio materials: adding one or more audio materials to an audio track and generating, in the audio track, an audio material icon corresponding to each audio material, the length of the audio track occupied by the audio material icon matching the total duration of the audio material;
editing audio material attributes, the audio material attributes including start position, end position, start time, end time, total duration, and playback duration;
adding audio sub-tracks: adding one or more audio sub-tracks corresponding to one of the audio tracks, each audio sub-track being parallel to the time axis and corresponding to the output channel of its corresponding audio track, the types of audio sub-track including acoustic image sub-tracks;
adding acoustic image sub-tracks and acoustic image materials: adding one or more acoustic image materials to an acoustic image sub-track and generating, in the acoustic image sub-track, an acoustic image material icon corresponding to each acoustic image material, the length of the acoustic image sub-track occupied by the acoustic image material icon matching the total duration corresponding to the acoustic image material;
editing acoustic image sub-track attributes;
editing acoustic image material attributes, the acoustic image material attributes including start position, end position, start time, end time, total duration, and playback duration;
the acoustic image materials being acoustic image track data, the acoustic image track data including track-change acoustic image track data;
the track-change acoustic image track data being obtained by the following method:
setting speaker nodes: adding or deleting speaker nodes in a speaker distribution map;
modifying speaker node attributes: the attributes of a speaker node including speaker coordinates, speaker type, corresponding output channel, initialization level, and speaker name;
dividing triangular regions: dividing the speaker distribution map into multiple triangular regions according to the distribution of the speaker nodes, the three vertices of each triangular region being speaker nodes;
setting acoustic image trajectory control points and a travel path: setting, in the speaker distribution map, the travel path along which the acoustic image changes over time, and several acoustic image trajectory control points located on the travel path;
editing acoustic image trajectory control point attributes: the attributes of an acoustic image trajectory control point including its coordinate position, the moment it corresponds to, and the time required to reach the next acoustic image trajectory control point;
editing the moment corresponding to an acoustic image trajectory control point: if the moment originally corresponding to acoustic image trajectory control point i is ti and the moment after modification is Ti, the moment originally corresponding to any acoustic image trajectory control point J before control point i is tj and the moment after adjustment is Tj, the original total duration of the acoustic image travel path is t, and the modified total duration is T, then Tj = tj/ti × Ti and T = t + (Ti − ti).
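The rescaling in claim 1 might be sketched as follows. The helper name is hypothetical; the assumption that control points after point i shift by the same offset follows from T = t + (Ti − ti):

```python
def rescale_control_points(moments, i, new_ti):
    """Rescale control-point moments when moments[i] moves from ti to new_ti.

    Per claim 1: earlier points scale proportionally (Tj = tj/ti * Ti) and
    the total duration grows by (Ti - ti).  Assumption: points after i are
    shifted by that same offset, which is what keeps T = t + (Ti - ti).
    moments: list of control-point moments in seconds, in path order.
    """
    ti = moments[i]
    delta = new_ti - ti
    scaled = [tj * new_ti / ti for tj in moments[:i]]       # Tj = tj/ti * Ti
    shifted = [tj + delta for tj in moments[i + 1:]]        # later points slide
    return scaled + [new_ti] + shifted
```

For example, moving the second of three control points from 2 s to 3 s stretches the point before it from 1 s to 1.5 s and pushes the point after it from 4 s to 5 s.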
2. The apparatus control method according to claim 1, wherein, when the device track attributes are edited, the editable attributes include track lock and track mute, the track mute attribute being used to control whether the device track takes effect, and the track lock being used to lock the device track.
3. The apparatus control method according to claim 1, wherein, when device tracks are added, one or more device tracks parallel to the time axis can be added, each device track corresponding to one controlled device.
4. The apparatus control method according to claim 3, wherein, before a device track is added, it is necessary to confirm whether the corresponding controlled device has established a connection with the integrated control platform.
5. The apparatus control method according to claim 1, wherein, when device tracks are added, one or more control sub-tracks corresponding to one of the device tracks are added, each control sub-track being parallel to the time axis and corresponding to the controlled device corresponding to its device track.
6. The apparatus control method according to claim 5, wherein, when materials are added, control materials of a type matching the type of the control sub-track can be added, and a control material icon corresponding to the control material is generated on the added control sub-track, the length of the control sub-track occupied by the control material icon matching the total duration of the control material.
7. The apparatus control method according to claim 6, wherein the types of control sub-track include TTL control sub-tracks, relay control sub-tracks, and network control sub-tracks; the control materials that may be added to a TTL control sub-track include TTL high-level control materials and TTL low-level control materials; the control materials that may be added to a relay control sub-track include relay-open control materials and relay-close control materials; and the control materials that may be added to a network control sub-track include TCP/IP communication control materials, UDP communication control materials, 232 communication control materials, 485 protocol communication control materials, etc.
8. The apparatus control method according to claim 1, wherein the control instructions are instructions for controlling the corresponding controlled devices or instructions for performing network data communication with the corresponding controlled devices.
CN201310754498.6A 2013-12-31 2013-12-31 Apparatus control method Active CN104754186B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310754498.6A CN104754186B (en) 2013-12-31 2013-12-31 Apparatus control method


Publications (2)

Publication Number Publication Date
CN104754186A CN104754186A (en) 2015-07-01
CN104754186B true CN104754186B (en) 2019-01-25

Family

ID=53593245

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310754498.6A Active CN104754186B (en) 2013-12-31 2013-12-31 Apparatus control method

Country Status (1)

Country Link
CN (1) CN104754186B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106937022B (en) * 2015-12-31 2019-12-13 上海励丰创意展示有限公司 multi-professional collaborative editing and control method for audio, video, light and machinery
CN106937023B (en) * 2015-12-31 2019-12-13 上海励丰创意展示有限公司 multi-professional collaborative editing and control method for film, television and stage
CN106937021B (en) * 2015-12-31 2019-12-13 上海励丰创意展示有限公司 performance integrated control method based on time axis multi-track playback technology
CN106547249B (en) * 2016-10-14 2019-03-01 广州励丰文化科技股份有限公司 A kind of mechanical arm console that speech detection is combined with local media and method
CN107026986B (en) * 2017-03-30 2019-11-29 维沃移动通信有限公司 A kind of processing method and mobile terminal of video background music

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1102273A2 (en) * 1999-11-16 2001-05-23 Nippon Columbia Co., Ltd. Digital audio disc recorder
CN101523498A (en) * 2006-08-17 2009-09-02 奥多比公司 Techniques for positioning audio and video clips
CN101916095A (en) * 2010-07-27 2010-12-15 北京水晶石数字科技有限公司 Rehearsal performance control method
CN102081946A (en) * 2010-11-30 2011-06-01 上海交通大学 On-line collaborative nolinear editing system



Similar Documents

Publication Publication Date Title
CN104754178B (en) audio control method
CN104754186B (en) Apparatus control method
CN106937022B (en) multi-professional collaborative editing and control method for audio, video, light and machinery
US10541003B2 (en) Performance content synchronization based on audio
US9142259B2 (en) Editing device, editing method, and program
CN108021714A (en) A kind of integrated contribution editing system and contribution edit methods
CN104750059B (en) Lamp light control method
CN104750058B (en) Panorama multi-channel audio control method
CN110139122A (en) System and method for media distribution and management
CN104750051B (en) Based on the panorama multi-channel audio control method for becoming the control of rail acoustic image
CN104754242B (en) Based on the panorama multi-channel audio control method for becoming the processing of rail acoustic image
CN104754244B (en) Panorama multi-channel audio control method based on variable domain audio-visual effects
CN104754243B (en) Panorama multi-channel audio control method based on the control of variable domain acoustic image
CN106937021B (en) performance integrated control method based on time axis multi-track playback technology
CN112967705A (en) Mixed sound song generation method, device, equipment and storage medium
CN106937023B (en) multi-professional collaborative editing and control method for film, television and stage
CN104751869B9 (en) Panoramic multi-channel audio control method based on orbital transfer sound image
Hamasaki et al. 5.1 and 22.2 multichannel sound productions using an integrated surround sound panning system
CN104750055B (en) Based on the panorama multi-channel audio control method for becoming rail audio-visual effects
CN104754241B (en) Panorama multi-channel audio control method based on variable domain acoustic image
CN106937204B (en) Panorama multichannel sound effect method for controlling trajectory
CN106937205B (en) Complicated sound effect method for controlling trajectory towards video display, stage
CN104754449B (en) Sound effect control method based on variable domain acoustic image
CN104754447B (en) Based on the link sound effect control method for becoming rail acoustic image
CN110308927A (en) It is laid out edition management system and management method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
EXSB Decision made by sipo to initiate substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant