CN106937023A - Multi-disciplinary collaborative editing and control method for film, television and stage - Google Patents
- Publication number
- CN106937023A CN106937023A CN201511030264.2A CN201511030264A CN106937023A CN 106937023 A CN106937023 A CN 106937023A CN 201511030264 A CN201511030264 A CN 201511030264A CN 106937023 A CN106937023 A CN 106937023A
- Authority
- CN
- China
- Prior art keywords
- track
- audio
- acoustic image
- sub
- attribute
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Television Signal Processing For Recording (AREA)
- Management Or Editing Of Information On Record Carriers (AREA)
Abstract
The present invention relates to performance-equipment control technology, and in particular to a multi-disciplinary collaborative editing and control method for film, television and stage. The method includes: displaying a time axis on the display interface of an integrated control console; adding and/or deleting tracks used to control the corresponding performance equipment, the tracks including lighting tracks; editing track attributes; adding materials; editing material attributes; and having the integrated control console send the corresponding control instructions according to each track attribute and its material attributes. The present invention solves the technical problems of multi-disciplinary playback editing and coordinated synchronous control of audio, video, lighting, machinery and special-effect equipment in current stage programmes.
Description
Technical field
The present invention relates to performance-equipment control technology, and in particular to a multi-disciplinary collaborative editing and control method for film, television and stage.
Background technology
A prominent problem in arranging a stage programme is the coordination and synchronous control between the individual disciplines (audio, video, lighting, machinery and so on). In a large-scale performance each discipline is relatively independent, and a rather large crew is needed to guarantee smooth arrangement and execution of the performance. While arranging each discipline's programme, most of the time is spent on coordination and synchronisation between the disciplines, often far more than the time actually devoted to the programme itself.

Because the disciplines are relatively independent, their control methods differ greatly. In live audio-video synchronous editing, video is controlled from a lighting console while audio playback is edited on a multi-track controller. Audio can easily seek to an arbitrary time and start playback there, but video can only start from the beginning (an operator can manually jog the frame position to the corresponding point, but playback cannot follow timecode), which lacks the flexibility needed for live performance control.

In addition, once the loudspeaker positions of an existing professional film/stage loudspeaker system are fixed, the left/right main channels on both sides of the stage, or the left-centre-right mains, essentially fix the sound image at the centre of the stage. Although a performance venue has, besides the stage mains, numerous loudspeakers installed at various positions, the sound image of the loudspeaker system hardly ever changes during a whole performance.

Therefore, the editing and synchronous control of current stage programmes and the flexible control of sound-image movement effects are key technical problems in the art that urgently need to be solved.
Summary of the invention
The technical problem solved by the present invention is to provide a multi-disciplinary collaborative editing and control method that simplifies the multi-disciplinary control of performances in venues such as film, television and stage, and that allows the sound image of the sound-reinforcement system to be set flexibly and quickly.

To solve the above technical problem, the technical solution adopted by the present invention is a multi-disciplinary collaborative editing and control method comprising: displaying a time axis on the display interface of an integrated control console; adding and/or deleting tracks used to control the corresponding performance equipment, the tracks including one or more of audio tracks, video tracks, lighting tracks and device tracks; editing track attributes; adding materials; editing material attributes; and having the integrated control console send the corresponding control instructions according to each track attribute and its material attributes.
Compared with the prior art, the beneficial effect is that playback master control embodies the idea of "integrated performance management". From a technical point of view the coupling between these units is very low: they can work independently without affecting one another, and the only prominent link between them is "time", i.e. what is played at which moment. From the user's point of view, however, this "time" relationship is exactly what matters most. Being able to view and manage the state of all these units in one place spares the user a great deal of unnecessary trouble, for example when coordinating synchronisation between the units, or when cross-referencing and correcting the different disciplines against each other while editing.
Brief description of the drawings
Fig. 1 is a schematic diagram of the multi-disciplinary collaborative editing and control method of the embodiment.
Fig. 2 is a schematic diagram of the audio-control part of the method of the embodiment.
Fig. 3 is a schematic diagram of the operation of the audio sub-tracks of the method of the embodiment.
Fig. 4 is a schematic diagram of the video-control part of the method of the embodiment.
Fig. 5 is a schematic diagram of the lighting-control part of the method of the embodiment.
Fig. 6 is a schematic diagram of the device-control part of the method of the embodiment.
Fig. 7 is a schematic diagram of the principle of the multi-disciplinary collaborative editing and control system of the embodiment.
Fig. 8 is a schematic diagram of the principle of the audio control module of the embodiment.
Fig. 9 is a schematic diagram of the principle of the video control module of the embodiment.
Fig. 10 is a schematic diagram of the principle of the lighting control module of the embodiment.
Fig. 11 is a schematic diagram of the principle of the device control module of the embodiment.
Fig. 12 is a schematic diagram of the interface of the multi-track playback editing module of the method of the embodiment.
Fig. 13 is a schematic diagram of the principle of the audio-control part of the system of the embodiment.
Fig. 14 is a schematic diagram of the principle of the track matrix module of the system of the embodiment.
Fig. 15 is a schematic diagram of the principle of the video-control part of the system of the embodiment.
Fig. 16 is a schematic diagram of the principle of the lighting-control part of the system of the embodiment.
Fig. 17 is a schematic diagram of the principle of the device-control part of the system of the embodiment.
Fig. 18 is a schematic diagram of the steps of the sound-image trajectory control method of the embodiment.
Fig. 19 is a schematic diagram of the steps of the sound-image trajectory data generation method of the embodiment.
Fig. 20 is a first schematic diagram of the loudspeaker distribution map and a variable-domain sound-image trajectory of the embodiment.
Fig. 21 is a second schematic diagram of the loudspeaker distribution map and a variable-domain sound-image trajectory of the embodiment.
Specific embodiments
The sound-image trajectory control of each type according to the present invention is further described below with reference to the accompanying drawings.

The present embodiment provides a multi-disciplinary collaborative editing and control method for film, television and stage that simplifies the multi-disciplinary control of performances in venues such as film, television and stage, and allows the sound image of the sound-reinforcement system to be set flexibly and quickly. Through the multi-track playback editing module of the integrated control console, the method realises centralised arrangement and control of the materials of multiple disciplines.

As shown in Fig. 1, the above multi-disciplinary collaborative editing and control method comprises the following steps:
S101: display a time axis on the display interface of the integrated control console;
S102: add and/or delete tracks used to control the corresponding performance equipment, the tracks including one or more of audio tracks, video tracks, lighting tracks and device tracks;
S103: edit the track attributes;
S104: add materials;
S105: edit the material attributes;
S106: the integrated control console sends the corresponding control instructions according to each track attribute and its material attributes.
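The timeline model of steps S101 to S106 — parallel tracks aligned to one shared time axis, each holding non-overlapping materials — can be sketched as follows. This is an illustrative reconstruction, not code from the patent; all class and field names are invented.

```python
from dataclasses import dataclass, field

@dataclass
class Material:
    """A clip placed on a track; times are seconds on the shared time axis."""
    name: str
    start: float   # start position on the time axis
    end: float     # end position on the time axis

@dataclass
class Track:
    """One controllable discipline (audio, video, light, device)."""
    kind: str                # "audio" | "video" | "light" | "device"
    locked: bool = False
    muted: bool = False
    materials: list = field(default_factory=list)

    def add_material(self, m: Material):
        if self.locked:
            raise PermissionError("locked track: materials cannot be modified")
        # materials on one track must not overlap in time (see S204)
        for other in self.materials:
            if m.start < other.end and other.start < m.end:
                raise ValueError("materials on a track may not overlap")
        self.materials.append(m)

timeline = [Track("audio"), Track("light")]
timeline[0].add_material(Material("song1", 0.0, 30.0))
timeline[0].add_material(Material("song2", 30.0, 55.0))
```

The console would walk such a structure at playback time and emit the control instruction of each material when the playhead reaches its start position.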
As shown in Fig. 2 and Fig. 12, the multi-disciplinary collaborative editing and control method may optionally include multi-track audio playback control (corresponding to the audio control module described below). In that case the method comprises the following steps:

S201: add audio tracks. One or more audio tracks (regions) 1, 2 are added on the display interface, parallel to and aligned with the time axis; each audio track corresponds to one output channel.

S202: edit the audio track attributes. The editable audio track attributes include track lock and track mute. The track mute attribute controls whether the audio materials and all sub-tracks on the track are muted; it is the master control of the audio track. When the track lock attribute is set, apart from a few individual attributes such as mute and adding hidden sub-tracks, the other attributes, and the positions and attributes of the materials on the audio track, cannot be modified.
S203: add audio materials. One or more audio materials 111, 112, 113, 211, 212, 213, 214 are added to audio tracks 1, 2, and an audio material item corresponding to each material is generated on the audio track; the length of audio track occupied by the material matches the material's total duration. Before adding an audio material, the audio material list is first obtained from the audio server, and the material to be added to the audio track is then selected from that list. Once an audio material has been added to an audio track, an audio attribute file corresponding to that material is generated. The integrated control console controls the instructions sent to the audio server by editing this audio attribute file, rather than by directly accessing or editing the source file of the audio material, which guarantees the security of the source files and the stability of the integrated control console.
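The attribute-file mechanism of S203 — the console edits a small descriptor while the source file on the audio server is never touched — might look like this minimal sketch. The JSON layout, field names and server path are assumptions, not taken from the patent.

```python
import json

def make_attribute_file(material_name, server_path, start, end):
    """Build the descriptor the console edits; the source file on the
    audio server is only ever referenced, never modified (assumed layout)."""
    return json.dumps({
        "material": material_name,
        "source": server_path,   # resolved by the audio server at play time
        "start": start,          # start position on the time axis
        "end": end,              # end position on the time axis
        "muted": False,
    })

# hypothetical material on a hypothetical server path
attr = make_attribute_file("applause", "/srv/audio/applause.wav", 12.0, 20.0)
loaded = json.loads(attr)
```

All later editing operations (mute, trim, cut, splice) then amount to rewriting such descriptors and resending the resulting instructions to the server.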
S204: edit the audio material attributes. The audio material attributes include start position, end position, start time, end time, total duration and playback length. The start position is the time-axis moment corresponding to the head of the audio material, and the end position is the time-axis moment corresponding to its tail. The start time is the moment on the time axis at which the material actually begins to play, and the end time is the moment at which it actually stops playing. In general, the start time may be delayed past the start position, and the end time may be brought forward before the end position. The total duration is the original length of the audio material, equal to the time difference between its start and end positions; the playback length is the time the material actually plays on the time axis, equal to the difference between its start time and end time. By adjusting the start time and end time, a trimming operation can be performed on the material, i.e. only the portion the user wishes to play is played.

By moving an audio material laterally within its audio track, the start and end positions can be changed, but their relative distance on the time axis does not change, i.e. the length of the material does not change. By adjusting the material's start time and end time, its actual playback moment and playback length on the time axis can be changed. Several audio materials can be placed in one audio track, meaning that within the period represented by the time axis they are played in turn (through the corresponding output channel). Note that the position (time location) of any audio material within its track can be adjusted freely, but the materials must not overlap one another.
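The relationship described in S204 between start/end position and start/end time amounts to clamping a playback window inside the material's extent. A sketch under that reading, with invented function and parameter names:

```python
def playback_window(head, tail, play_start=None, play_end=None):
    """Return (start, length) actually played on the time axis.
    head/tail: the material's start/end positions; play_start/play_end:
    optional trims. Per the text, the start time may only be delayed past
    the start position, and the end time only advanced before the end
    position, so both are clamped into [head, tail]."""
    start = head if play_start is None else max(head, play_start)
    end = tail if play_end is None else min(tail, play_end)
    if start >= end:
        raise ValueError("empty playback window")
    return start, end - start
```

With no trims the playback length equals the total duration; trimming shortens it without moving the material on its track.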
Further, because the integrated control console manipulates the attribute file corresponding to each audio material, it can also cut and splice audio materials. Cutting means dividing one audio material on a track into several materials, each with its own attribute file, while the source file remains intact; the console then sends control commands according to these new attribute files to have the source file played, and sound-processed, accordingly in sequence. Similarly, splicing means merging two audio materials into one: their attribute files are merged into a single attribute file, through which the audio server is instructed to call the two audio source files.
Further, groups of physical operation keys, one group per audio track, can be provided on the integrated control console so that audio material attributes can be adjusted by hand, for example knobs that nudge a material's position (time-axis position) within its track or adjust its pre/post playback time.
S205: add audio sub-tracks 12, 13, 14, 15, 21, 22. One or more sub-tracks corresponding to one of the audio tracks are added; each sub-track is parallel to the time axis and corresponds to the output channel of its parent audio track.

Each audio track can own attached sub-tracks, whose types include sound-image sub-tracks and sound-effect sub-tracks. A sound-image sub-track applies sound-image trajectory processing to some or all of the audio materials of its parent audio track; a sound-effect sub-track applies sound-effect processing to some or all of them. In this step, the following steps can further be performed:
S301: add a sound-image sub-track and sound-image materials. One or more sound-image materials 121, 122 are added to the sound-image sub-track, and a corresponding item is generated on the sub-track; the length of sub-track occupied by each sound-image material matches the material's total duration.

S302: edit the sound-image sub-track attributes. As with audio tracks, the editable sub-track attributes include track lock and track mute.

S303: edit the sound-image material attributes. As with audio materials, these include start position, end position, start time, end time, total duration and playback length.

Through the sound-image materials on a sound-image sub-track, sound-image trajectory processing can be applied, during the period between a material's start time and end time, to the signal output on the channel corresponding to the sub-track's parent audio track. Adding different types of sound-image material to the sub-track therefore applies different types of trajectory processing to the corresponding output channel; and by adjusting the start position, end position, start time and end time of each sound-image material, the moment at which trajectory processing begins and the duration of the trajectory effect can be adjusted.
The difference between a sound-image material and an audio material is that an audio material represents audio data, whereas sound-image trajectory data are, for a period of preset length, the output-level-versus-time data of every loudspeaker node required so that the sound image formed by the output levels of the virtual loudspeaker nodes in the loudspeaker distribution map runs along a preset path, or remains stationary. In other words, sound-image trajectory data contain the output-level variation of all loudspeaker nodes in the loudspeaker distribution map over the period of preset length. The types of trajectory data include fixed-point, variable-track and variable-domain sound-image trajectory data; the type of the trajectory data determines the type of the sound-image material, and the total duration of the sound-image motion corresponding to the trajectory data determines the time difference between the material's start and end positions, i.e. the material's total duration. Sound-image trajectory processing means adjusting, according to the trajectory data, the actual output level of each physical loudspeaker corresponding to each loudspeaker node, so that the sound image of the physical loudspeaker system runs along the set path, or remains stationary, within the period of preset length.
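One possible way to generate such per-node level data from a path is sketched below. The patent does not specify a panning law; the inverse-distance weighting, speaker map and all names here are purely illustrative assumptions.

```python
def pan_levels(source_xy, speakers, rolloff=1.0):
    """Level (0..1) for each loudspeaker node given a virtual source
    position, using a simple inverse-distance law (one of many choices)."""
    levels = {}
    for name, (sx, sy) in speakers.items():
        d = ((source_xy[0] - sx) ** 2 + (source_xy[1] - sy) ** 2) ** 0.5
        levels[name] = 1.0 / (1.0 + rolloff * d)
    peak = max(levels.values())
    return {n: v / peak for n, v in levels.items()}  # nearest node -> 1.0

def trajectory_data(path, speakers, steps):
    """Sample a polyline path into per-node level frames, i.e. the
    'output level changing over time' data the text describes."""
    frames = []
    for i in range(steps):
        t = i / (steps - 1)
        seg = min(int(t * (len(path) - 1)), len(path) - 2)
        u = t * (len(path) - 1) - seg  # position within segment
        x = path[seg][0] + u * (path[seg + 1][0] - path[seg][0])
        y = path[seg][1] + u * (path[seg + 1][1] - path[seg][1])
        frames.append(pan_levels((x, y), speakers))
    return frames
```

Feeding each frame's levels to the physical amplifiers in sequence would make the perceived image traverse the path; a fixed-point material would simply repeat one frame.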
S304: add sound-effect sub-tracks. The types of sound-effect sub-track include volume/gain sub-tracks 13, 22 and EQ sub-tracks 14, 15, 21; each audio track can be given one volume/gain sub-track and one or more EQ sub-tracks. The volume/gain sub-track adjusts the signal level of the output channel corresponding to its parent audio track; an EQ sub-track applies EQ processing to the signal output on that channel.

S305: edit the sound-effect sub-track attributes. Besides track lock, track mute and track identity, these include the sound-effect processing parameters corresponding to the sub-track type: for a volume/gain sub-track the parameter is the output-level adjustment, and for an EQ sub-track the EQ processing parameters. By changing the sound-effect parameters of a sub-track, the audio of the output channel of its parent audio track can be adjusted.
S206: save the data, or generate, from the attributes of the audio tracks and their sub-tracks and the attributes of the audio and sound-image materials, the control instructions for the source files corresponding to the audio materials, and perform playback control and sound-image and sound-effect processing control of the audio source files according to those instructions.

The control instructions include whether to call (play) the audio source file of a material, the start and end times of source-file playback (measured by the time-axis moment), and the sound-image and sound-effect processing of the source file; the specific instructions correspond to the attributes of each audio track and its attached sub-tracks and to the attributes of the audio and sound-image materials. That is, the audio track never directly calls or processes the source file of an audio material; it only manipulates the attribute file corresponding to the source file, and indirect control of the audio source file is achieved by editing the attribute file, adding/editing sound-image materials, and adjusting the attributes of the audio track and its sub-tracks.

For example, an audio material added to an audio track enters the playlist, and when the track starts playing, the material is played. By editing the track's mute attribute, the track and all its attached sub-tracks can be muted (the mute takes effect on all of them); by editing the track's lock attribute, all attributes other than a few individual ones such as mute and adding hidden sub-tracks, as well as the positions and attributes of the materials on the track, become unmodifiable (the locked state). A more detailed description is given above.
As shown in Fig. 4 and Fig. 12, the multi-disciplinary collaborative editing and control method of the present embodiment may optionally add video playback control (corresponding to the video control module described below), comprising the following steps:

S401: add a video track. A video track 4 (region) parallel to and aligned with the time axis is added on the display interface; the video track corresponds to one controlled device, in the present invention a video server.

S402: edit the video track attributes. The editable attributes include track lock and track mute, and are similar to the audio track attributes.

S403: add video materials. One or more video materials 41, 42, 43, 44 are added to the video track and corresponding items are generated on it; the length of video track occupied by each material matches the material's total duration. Before adding a video material, the video material list is first obtained from the video server, and the material to be added to the video track is then selected from that list. Once a video material has been added to the video track, a video attribute file corresponding to it is generated; the integrated control console controls the instructions sent to the video server by editing this attribute file, rather than by directly accessing or editing the material's source file, which guarantees the security of the source files and the stability of the console.

S404: edit the video material attributes, which include start position, end position, start time, end time, total duration and playback length. The video material attributes are similar to the audio material attributes; video materials can likewise be moved laterally, cut and spliced, and a group of physical operation keys corresponding to the video track can be added on the console so that video material attributes can be adjusted by hand.

S405: save the data, or generate, from the video track attributes and the video material attributes, the control instructions for the source files corresponding to the video materials, and perform playback control of the video source files according to those instructions. As with the audio tracks, the specific instructions correspond to the attributes of the video track and of the video materials.
As shown in Fig. 5 and Fig. 12, the multi-disciplinary collaborative editing and control method of the present embodiment may optionally add lighting control (corresponding to the lighting control module described below), comprising the following steps:

S501: add a lighting track. A lighting track 3 (region) parallel to and aligned with the time axis is added on the display interface; the lighting track corresponds to one controlled device, in the present invention a lighting network signal adapter (for example an Art-Net network card).

S502: edit the lighting track attributes. The editable attributes include track lock and track mute, and are similar to the audio track attributes.

S503: add lighting materials. One or more lighting materials 31, 32, 33 are added to the lighting track and corresponding items are generated on it; the length of lighting track occupied by each material matches the material's total duration. As with audio and video materials, the lighting track does not load the lighting material itself; it only generates an attribute file corresponding to the lighting material's source file, and sends, through the attribute file, the control instructions that govern the output of the source file.

A lighting material is lighting-network control data of a certain length of time, for example Art-Net data in which DMX data are packaged. A lighting material can be generated as follows: after a lighting programme has been arranged on a conventional lighting console, the integrated control console is connected through its lighting network interface to that of the conventional console, and the lighting control signal output by the lighting console is recorded; during recording, the integrated control console must stamp timecode onto the recorded lighting control signal so that it can be edited and controlled on the lighting track.
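Stamping timecode onto the recorded lighting frames, so that the recording becomes editable on the lighting track, might look like this sketch; the 30 fps rate and the hh:mm:ss:ff format are assumptions, not specified by the patent.

```python
def stamp_timecode(frames, fps=30):
    """Attach an hh:mm:ss:ff timecode to each recorded lighting frame
    (e.g. one captured Art-Net/DMX packet per frame, layout assumed)."""
    stamped = []
    for i, frame in enumerate(frames):
        s, f = divmod(i, fps)      # whole seconds, remaining frames
        m, s = divmod(s, 60)
        h, m = divmod(m, 60)
        stamped.append(("%02d:%02d:%02d:%02d" % (h, m, s, f), frame))
    return stamped
```

During playback the console would look up, for the current time-axis moment, the frame whose timecode matches and re-emit it on the lighting network interface.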
S504: edit the lighting material attributes, which include start position, end position, start time, end time, total duration and playback length. The lighting material attributes are similar to the audio material attributes; lighting materials can likewise be moved laterally, cut and spliced, and a group of physical operation keys corresponding to the lighting track can be added on the console so that lighting material attributes can be adjusted by hand.

S505: save the data, or generate, from the lighting track attributes and the lighting material attributes, the control instructions for the source files corresponding to the lighting materials, and perform output control of the lighting source files according to those instructions. As with the other tracks, the specific instructions correspond to the attributes of the lighting track and of the lighting materials.
As shown in Fig. 6 and Fig. 12, the multi-disciplinary collaborative editing and control method of the present embodiment may optionally add device control (corresponding to the device control module described below), comprising the following steps:

S601: add device tracks. One or more device tracks 5 (regions) parallel to the time axis are added on the display interface; each device track corresponds to one controlled device, for example a mechanical device. Before adding a device track, it must be confirmed that the controlled device has established a connection with the integrated control console. The connection can be established over TCP, for example by configuring the integrated control console as a TCP server and each controlled device as a TCP client; after joining the network, the TCP client of each controlled device actively connects to the TCP server of the console.
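The TCP topology of S601 (console as server, devices as clients that connect as soon as they join the network) can be sketched with Python's standard socket module. The hello message and the device name "hoist-1" are invented for illustration.

```python
import socket
import threading

def run_console_server(host="127.0.0.1", port=0):
    """Console side: accept one device registration and return its
    greeting. port=0 lets the OS pick a free port for the sketch."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, port))
    srv.listen()
    result = []

    def accept_one():
        conn, _ = srv.accept()
        result.append(conn.recv(64))  # e.g. a device hello/registration
        conn.close()

    t = threading.Thread(target=accept_one)
    t.start()
    # a controlled device registering itself with the console
    dev = socket.create_connection(srv.getsockname())
    dev.sendall(b"HELLO hoist-1")
    dev.close()
    t.join()
    srv.close()
    return result[0]
```

In a real deployment the accept loop would run continuously, and each accepted connection would map to one device track on the timeline.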
S602: edit the device track attributes. The editable attributes include track lock and track mute, and are similar to the audio track attributes; if a device track is muted, none of its attached control sub-tracks performs any operation.
S603: add control sub-tracks. One or more control sub-tracks corresponding to one of the device tracks are added; each control sub-track is parallel to the time axis, and each corresponds to the controlled device of its parent device track.

S604: add control materials. Control materials of the appropriate type are added according to the type of the control sub-track, and corresponding items are generated on the sub-track; the length of control sub-track occupied by each material matches the material's total duration.

The types of control sub-track include TTL control sub-tracks, relay control sub-tracks and network control sub-tracks. Correspondingly, the control materials that can be added to a TTL control sub-track include TTL materials 511, 512, 513 (such as TTL high-level and TTL low-level control materials); those that can be added to a relay sub-track include relay materials 521, 522, 523, 524 (such as relay-open and relay-close control materials); and those that can be added to a network control sub-track include network materials 501, 502, 503 (such as TCP/IP communication, UDP communication, RS-232 communication and RS-485 protocol communication control materials). By adding the corresponding control materials, the corresponding control instructions can be sent; a control material is, in essence, a control instruction.
S605: edit the control material attributes, which include start position, end position and total duration. By moving a control material laterally within its control sub-track, its start and end positions can be changed, but their relative distance on the time axis does not change, i.e. the length of the control material does not change. The start position of a control material is the time-axis moment at which the control instruction corresponding to the material starts to be issued to the corresponding controlled device, and the end position is the time-axis moment at which issuing the instruction stops.
Further, an association relation can be set between control materials in the same control sub-track, such that if the control instruction corresponding to an earlier control material (one whose start position corresponds to an earlier time-axis moment) is not sent (by the integrated control platform) or is not executed successfully (by the controlled device), the control instruction corresponding to an associated control material whose start position corresponds to a later time-axis moment will not be sent. This is useful, for example, for curtain opening/closing and lifting control.
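The association rule above can be sketched as a simple gated dispatcher: commands are sent in start-time order, and a command associated to an earlier one is only sent if the earlier one succeeded. The dict fields (`"start"`, `"after"`, `"name"`) and the callback shape are illustrative assumptions, not the patent's data format:

```python
def dispatch(commands, send):
    """Send timeline commands in start-time order, honouring associations.

    Each command is a dict with 'start' (time-axis moment, seconds),
    'name', and an optional 'after' naming an earlier command in the
    same sub-track that must have succeeded first.  `send` returns
    True on success.
    """
    succeeded = set()
    for cmd in sorted(commands, key=lambda c: c["start"]):
        dep = cmd.get("after")
        if dep is not None and dep not in succeeded:
            continue  # earlier command failed or was skipped: do not send
        if send(cmd):
            succeeded.add(cmd["name"])
    return succeeded
```

With this sketch, a "lift" command associated to a failed "curtain open" command is simply never dispatched.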
Further, a guard time of a certain duration can be set before and after a control material of a control sub-track; within the guard time, no control material can be added to the control sub-track and no control instruction can be sent.
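A minimal sketch of the guard-time check when placing a new control material, assuming a symmetric guard window in seconds (the patent leaves the guard duration and placement rules open):

```python
def can_place(existing, start, end, guard=1.0):
    """Return True if a new control material [start, end] keeps at least
    `guard` seconds clear of every existing material on the sub-track.
    Materials are (start, end) pairs in time-axis seconds; the guard
    value is an assumed example.
    """
    for s, e in existing:
        # reject any overlap with an existing material or its guard window
        if start < e + guard and end > s - guard:
            return False
    return True
```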
S606: Save the data, or generate control instructions according to the attributes of the control track and its control sub-tracks and the attributes of the control materials, and send the control instructions to the corresponding controlled devices.
In addition, the present embodiment also provides a multi-professional editing and control system (an integrated performance control system). As shown in Fig. 7, the system includes an integrated control platform 70 and, optionally, an audio server 76, a video server 77, a lighting control module 78 and a device control module 79. The integrated control platform 70 includes a multi-track playback editing module 71, which can execute one or more of the audio control, video control, lighting control and device control in the above integrated performance control method; the concrete implementation steps are not repeated here. The multi-track playback editing control module includes an audio control module 72 and, optionally, a video control module 73, a lighting control module 74 and a device control module 75.
As shown in Fig. 8, the audio control module 72 includes an audio track adding module 81, an audio track attribute editing module 82, an audio material adding module 83, an audio material attribute editing module 84, an audio sub-track adding module 85, and a save data/output audio control instruction module 86. The functions realized by these modules correspond one-to-one to the aforementioned steps S201 to S206 respectively and are not repeated here; the same applies below.
Further, the audio playback control principle of the multi-professional editing and control system is shown in Fig. 13. The integrated control platform also includes a quick playback editing module, a physical input module and the multi-track playback editing module. The quick playback editing module is used to edit audio materials in real time and to send corresponding control instructions to the audio server 76 to play the source files corresponding to the audio materials. The physical input module corresponds to the physical operation keys on the integrated control platform and is used for real-time tuning control of external sound sources input to the control platform.
Accordingly, the audio server is provided with a mixing matrix module, a track matrix module, a 3x1 output mixing module and a physical output module. The mixing matrix module can receive the audio signals produced by playing the audio source files in the audio server that are called, in the form of control commands, by the quick playback editing module and the multi-track playback editing module, as well as the audio signals output by the physical input module; similarly, the track matrix module can also receive each of the above audio inputs. The mixing matrix is used to mix each audio input and output the result to the output mixing module; the track matrix module is used to perform acoustic-image trajectory processing on each audio input and output the result to the output mixing module. The output mixing module can receive the audio outputs of the mixing matrix module, the track matrix module and the physical input module and, after 3x1 mixing, output them through each physical output interface of the physical output module. Here, acoustic-image trajectory processing means adjusting the level output to each speaker entity according to the acoustic-image trajectory data, so that the acoustic image of the physical speaker system moves along a set path, or remains stationary, within a time period of preset length.
In the present embodiment, the source files of the audio materials are stored on the audio server outside the integrated control platform. The multi-track playback editing module does not directly call or process the source files of the audio materials; it only processes the attribute files corresponding to the audio source files. By editing and adjusting the attribute file of a source file, adding/editing acoustic-image materials, and editing the attributes of the audio tracks and their sub-tracks, indirect control of the audio source files is realized. The output of each audio track is therefore only a control signal/instruction for the audio source file of the corresponding output channel; the audio server that receives the control instruction then performs the various processing of the audio source file.
As shown in Fig. 14, the multi-track playback editing module receives a list of valid audio materials from the audio server 76 and does not process the audio source files directly. The audio source files are stored in the audio server; after a corresponding control command is received, the audio source file is called and various audio processing is performed, for example mixing in the mixing matrix module and trajectory processing in the track matrix module. An acoustic-image material is actually also a control command and can be stored either in the integrated control platform or uploaded to the audio server.
As shown in Fig. 9, the video control module 73 includes a video track adding module 91, a video track attribute editing module 92, a video material adding module 93, a video material attribute editing module 94, and a save data/output video control instruction module 95. The functions realized by these modules correspond one-to-one to the aforementioned steps S401 to S405 respectively.
Further, the video editing and playback control principle of the multi-professional editing and control system is shown in Fig. 15. The integrated control platform does not execute the source files of the video materials directly; instead, it obtains the video material list and the corresponding attribute files and sends control instructions to the video server, and the video server then performs playback and effect operations on the source files of the video materials according to the control instructions.
As shown in Fig. 10, the lighting control module 74 includes a light track adding module 110, a light track attribute editing module 120, a light material adding module 130, a light material attribute editing module 140, and a save data/output lighting control instruction module 150. The functions realized by these modules correspond one-to-one to the aforementioned steps S501 to S505 respectively.
Further, the lighting control principle of the multi-professional editing and control system is shown in Fig. 16. The integrated control platform is additionally provided with a light signal recording module for recording the light control signals output by a lighting console; time codes are stamped onto the recorded light control signals during recording, so that editing control can be performed on the light track.
As shown in Fig. 11, the device control module 75 includes a device track adding module 151, a device track attribute editing module 152, a control sub-track adding module 153, a control material adding module 154, a control material attribute editing module 155, and a save data/output control instruction module 156. The functions realized by these modules correspond one-to-one to the aforementioned steps S601 to S606 respectively.
Further, the device control principle of the multi-professional editing and control system is shown in Fig. 17. The various device control signals output by the integrated control platform are output to the corresponding controlled devices through the respective protocol interfaces on a device adapter.
In addition, the integrated control platform can also include an acoustic-image trajectory data generation module for producing (generating) acoustic-image trajectory data (i.e. acoustic-image materials). The acoustic-image trajectory data obtained through this module can be called by the multi-track playback editing module, so as to control the track matrix module of the audio server to control the acoustic-image trajectory. Further, the present embodiment provides an acoustic-image trajectory control method in which a control host (e.g. the integrated control platform or the audio server) configures the output level values of each speaker node of a physical speaker system, so that within a set total duration the acoustic image moves in a set manner or remains static. As shown in Fig. 18, the control method includes:
S181: Generate acoustic-image trajectory data;
S182: Within the total duration corresponding to the acoustic-image trajectory data, adjust the output level of each speaker entity according to the acoustic-image trajectory data;
S183: Within the total duration, superimpose the input level of the signal input to each speaker entity and the output level of the corresponding speaker entity, to obtain the level actually output by each speaker entity.
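Since both quantities are levels in dB, the superposition in step S183 amounts to a per-node sum, with negative infinity standing in for a muted node. A minimal sketch of steps S182/S183 under that reading (function names are illustrative):

```python
def actual_output_level(input_db, node_db):
    """Superpose the input-signal level and the speaker node's trajectory
    output level (step S183).  Both are in dB, so superposition is a sum;
    float('-inf') models a muted node.
    """
    return input_db + node_db

def render_moment(input_db, node_levels):
    # node_levels: per-node trajectory output level (dB) at one moment (S182)
    return [actual_output_level(input_db, db) for db in node_levels]
```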
Acoustic-image trajectory data are the output level data of each speaker node over time, within a time period of preset length (i.e. the total duration of the acoustic image), such that the acoustic image formed by the output levels of the virtual speaker nodes in the virtual speaker distribution map on the integrated control platform moves along a preset path or remains stationary. In other words, the acoustic-image trajectory data contain the output level variation data, within the preset time period, of all speaker nodes in the speaker distribution map. For each speaker node, the output level varies over time within the set time period, and may also be zero or negative, even negative infinity; negative infinity is preferred.
Each speaker node corresponds to one speaker entity in the physical speaker system, and each speaker entity includes one or more speakers at the same location; i.e. each speaker node can correspond to one or more co-located speakers. In order that the physical speaker system can accurately reproduce the acoustic-image path, the position distribution of the virtual speaker nodes in the speaker distribution map should correspond to the position distribution of the speaker entities of the physical speaker system; in particular, the relative position relations between the speaker nodes should correspond to the relative position relations between the speaker entities.
The level actually output by a speaker entity is the superposition of the level of the input signal and the output level, in the above acoustic-image trajectory data, of the speaker node corresponding to that speaker entity. The former is a characteristic of the input signal; the latter can be regarded as a characteristic of the speaker entity itself. At any one moment, different input signals have different input levels, while for the same speaker entity there is only one output level. It can therefore be understood that acoustic-image trajectory processing is a processing of the output level of each speaker entity, so as to form a preset acoustic-image path effect (including the case where the acoustic image is stationary).
The superposition of the input level and the output level of a speaker entity can actually be performed before the audio signal enters the speaker entity, or performed after it enters the speaker entity; this depends on the link composition of the whole sound-reinforcement system and on whether the speaker entity has a built-in audio signal processing module such as a DSP unit.
The types of acoustic-image trajectory data include: fixed-point acoustic-image data, variable-track acoustic-image trajectory data and variable-domain acoustic-image trajectory data. When simulating the generation of acoustic-image trajectory data on the integrated control platform, in order to conveniently control the speed and progress of the acoustic image, the embodiment of the present invention represents the travel path of the acoustic image by line segments sequentially connecting several discretely distributed acoustic-image trajectory control points in the speaker distribution map; that is, several discretely distributed acoustic-image trajectory control points determine the travel path of the acoustic image and its overall travel time.
A fixed-point acoustic image refers to the situation in which, within a time period of preset length, one or more speaker nodes selected in the speaker distribution map constantly output level, while the output level value of the unselected speaker nodes is zero or negative infinity. Correspondingly, fixed-point acoustic-image data are the output level data of each speaker node over time when, within the time period of preset length, the one or more selected speaker nodes constantly output level and the unselected speaker nodes output no level, or output a level value of zero or negative infinity. For a selected speaker node, its output level is continuous within the set time (and may fluctuate up and down); for an unselected speaker node, its output level remains negative infinity within the set time.
A variable-track acoustic image refers to the situation in which, within a time period of preset length, each speaker node outputs level according to a certain rule so that the acoustic image moves along a preset path. Correspondingly, variable-track acoustic-image trajectory data are the output level data of each speaker node over time that, within the time period of preset length, make the acoustic image move along the preset path. The travel path of the acoustic image need not be exactly accurate, and the duration of the acoustic-image movement (travel) will not be very long; it is only necessary that the audience can roughly recognize the acoustic-image travel effect.
A variable-domain acoustic image refers to the situation in which, within a time period of preset length, the output level of each speaker node changes according to a certain rule so that the acoustic image moves through a preset area. Correspondingly, variable-domain acoustic-image trajectory data are the output level data of each speaker node over time that, within the time period of preset length, make the acoustic image move through the preset area.
As shown in Fig. 19, the variable-domain acoustic-image trajectory data of the present embodiment can be obtained by the following method:
S1901: Set speaker nodes: add or delete speaker nodes in the speaker distribution map 10.
S1902: Modify speaker node attributes. The attributes of a speaker node include speaker coordinates, speaker type, corresponding output channel, initialization level, speaker name, etc. A speaker node is represented by a speaker icon in the speaker distribution map, and its coordinate position can be changed by moving the speaker icon. The speaker type refers to full-range speaker or subwoofer; specific types can be divided according to actual needs. Each speaker node in the speaker distribution map is assigned an output channel, and each output channel corresponds to one speaker entity in the physical speaker system; each speaker entity includes one or more speakers at the same location, i.e. each speaker node can correspond to one or more co-located speakers. In order to reproduce the acoustic-image travel path designed in the speaker distribution map, the position distribution of the speaker entities should correspond to the position distribution of the speaker nodes in the speaker distribution map.
S1903: Divide the acoustic-image regions and set the acoustic-image travel paths:
A certain point in the speaker distribution map is selected as the center S0, and a speaker node is added at the center S0. Several concentric circular regions are then divided with the center S0 as the circle center; the concentric circle with the largest diameter can completely or partially cover the speaker nodes in the speaker distribution map. The area enclosed by the concentric circle with the smallest diameter is set as acoustic-image region Z1, and the regions between neighboring concentric circles are set, from inside to outside, as acoustic-image regions Z2, Z3, Z4 ... ZN (N is a natural number greater than or equal to 2); that is, the region between the smallest-diameter concentric circle and the second-smallest-diameter concentric circle is set as acoustic-image region Z2, the region between the second-smallest-diameter and third-smallest-diameter concentric circles is set as acoustic-image region Z3, and so on (refer to Fig. 20).
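The concentric-region division above amounts to mapping a point's distance from the center S0 to a region index. A short illustrative helper (not taken from the patent text):

```python
import math

def zone_of(point, center, radii):
    """Return the acoustic-image region index (1 => Z1) for a point in the
    speaker distribution map, given the radii of the concentric circles
    around center S0, or None if the point lies outside the largest circle.
    """
    r = math.dist(point, center)
    for i, radius in enumerate(sorted(radii), start=1):
        if r <= radius:
            return i
    return None
```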
Multiple travel paths radiating outward from the center S0 are set; each travel path is set as a straight-line segment traversing each acoustic-image region. These travel paths can completely or partially cover all or some of the speaker nodes in the above concentric circular regions, and preferably cover all the speaker nodes in the concentric circular regions.
The end points of these travel paths are speaker nodes within or outside the above concentric circular regions. The start point of each travel path is the center S0, and the end point differs according to the distribution of the speaker nodes along the travel path direction: (1) if there is a speaker node outside the largest-diameter concentric circle along the travel path direction, the end point of the travel path is the speaker node, along that direction, that lies outside the largest-diameter concentric circle and is closest to it; (2) if there is no speaker node outside the largest-diameter concentric circle along the travel path direction, the end point of the travel path is the speaker node, along that direction, that is farthest from the center S0.
Each travel path passes through at least one speaker node, and may pass through two or more speaker nodes.
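Rules (1) and (2) for choosing a travel path's end point can be sketched as follows, representing each speaker node on the path simply by its distance from the center S0 (an illustrative simplification of the geometry):

```python
def path_end_point(node_distances, r_max):
    """Pick a travel path's end point: among the speaker nodes on the
    path's direction (given as distances from S0), prefer the node just
    outside the largest circle (radius r_max); otherwise take the node
    farthest from the center S0.
    """
    outside = [d for d in node_distances if d > r_max]
    if outside:
        return min(outside)      # rule (1): closest node beyond the largest circle
    return max(node_distances)   # rule (2): farthest node from the center
```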
Referring to Fig. 20, the speaker distribution map is provided with four concentric circles centered on S0; these four concentric circles are divided into 4 acoustic-image regions Z1, Z2, Z3, Z4, and 13 acoustic-image travel paths are set, namely S1, S2, S3, S4, S5, S6, S7, S8, S9, S10, S11, S12, S13. Every speaker node in the concentric circular regions (the region enclosed by the largest-diameter concentric circle) has a corresponding travel path passing through it. Travel paths S2, S3, S5, S8, S9, S10, S11, S12, S13 each pass through exactly one speaker node, while travel paths S1, S4, S7 each pass through two speaker nodes.
Referring to Fig. 21, further, when selecting a point as the center, a speaker node can be selected directly as the center. The speaker distribution map of Fig. 21 is provided with 5 concentric circles centered on speaker node S0', divided into 5 acoustic-image regions Z1', Z2', Z3', Z4', Z5', and several acoustic-image travel paths. Travel paths S1', S2', S3' each have 3 speaker nodes; at the current moment, the acoustic-image trajectory points P41, P42, P43 on travel paths S1', S2', S3' are all located in acoustic-image region Z4'.
S1904: Edit the acoustic-image region time attributes, including the moment corresponding to each acoustic-image region, the time required from the current acoustic-image region to the next acoustic-image region, and the total duration of the acoustic-image travel. Editing the acoustic-image region attributes is similar to editing the trajectory-point attributes of a variable-track acoustic image. If the moment corresponding to a certain acoustic-image region is modified, the moments corresponding to all the acoustic-image regions preceding that region, and the total duration of the acoustic-image travel, all need to be adjusted. If the time required from a certain acoustic-image region to the next acoustic-image region is adjusted, the moment corresponding to the next acoustic-image region and the total travel duration need to be adjusted. If the total travel duration is modified, the moment corresponding to each acoustic-image region on the travel path, and the time required from each region to the next, all need to be adjusted.
S1905: Record the variable-domain acoustic-image trajectory data: record the output level value of each speaker node at each moment as the acoustic image travels along the set travel paths, passing through each acoustic-image region in turn.
The output level values of the relevant speaker nodes at a certain moment can be calculated by the following method. Assume that the total travel duration of the acoustic image is set to T, and that the acoustic-image trajectory point on each travel path moves from its start point to its end point, or from its end point to its start point, within the total duration T; the acoustic-image moving speeds on the travel paths may be the same or different.
At a certain moment t, the current acoustic-image trajectory point Pj on a certain travel path P is moving from an acoustic-image region Zi to the next acoustic-image region Zi+1. On the route of travel path P, the speaker nodes adjacent to trajectory point Pj on its inner and outer sides are speaker node k and speaker node k+1, whose output levels are dBm and dBm+1 respectively; the output levels of the speaker nodes on travel path P other than these two are zero or negative infinity. Speaker node k and speaker node k+1 are speaker nodes on the travel path, where speaker node k is the adjacent speaker node on the inner side of trajectory point Pj (the side near the center S0), and speaker node k+1 is the adjacent speaker node on the outer side of the trajectory point (the side away from the center S0).
Then, at the current moment t, as the acoustic-image trajectory point Pj on travel path P moves from the current acoustic-image region Zi to the next acoustic-image region Zi+1:

output level of speaker node k: dBm = 10·logₑ(ηk) ÷ 2.3025851

output level of speaker node k+1: dBm+1 = 10·logₑ(ηk+1) ÷ 2.3025851

where 2.3025851 ≈ logₑ10 (so each expression equals 10·log₁₀ of its ratio), and the ratios ηk, ηk+1 are determined from the following distances: l12 is the distance from speaker node k to speaker node k+1, l1P is the distance from speaker node k to the current trajectory point Pj, and lP2 is the distance from the current trajectory point Pj to speaker node k+1. From the above formulas it can be seen that for each acoustic-image trajectory point on a travel path two speaker nodes output level, except when the trajectory point is located exactly at a speaker node, in which case only one of the two speaker nodes outputs level.
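The translation garbles the ratio η, so the sketch below ASSUMES ηk = lP2/l12 and ηk+1 = l1P/l12 (a linear-distance crossfade). This assumption reproduces the stated behaviour: a trajectory point sitting exactly on a node leaves only that node outputting level. Note that 10·logₑ(x) ÷ 2.3025851 = 10·log₁₀(x):

```python
import math

def node_levels(l12, l1p, lp2):
    """Crossfade levels (dB) for the two nodes adjacent to trajectory
    point Pj.  The ratios eta_k = lp2/l12 and eta_k1 = l1p/l12 are an
    ASSUMED reconstruction of the patent's garbled formula.
    """
    def to_db(eta):
        # 10*ln(eta)/2.3025851 == 10*log10(eta); eta = 0 => muted node
        return 10 * math.log(eta) / 2.3025851 if eta > 0 else float("-inf")
    return to_db(lp2 / l12), to_db(l1p / l12)
```

With Pj exactly on node k (l1P = 0, lP2 = l12), node k outputs 0 dB and node k+1 is muted, matching the text.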
Referring to Fig. 20, the speaker distribution map is provided with four concentric circles centered on S0, and a speaker node is provided at the center S0. These concentric circles are divided into 4 acoustic-image regions Z1, Z2, Z3, Z4, and 13 acoustic-image travel paths S1, S2, S3, S4, S5, S6, S7, S8, S9, S10, S11, S12, S13 are provided, wherein travel paths S1, S4, S7, S10 each have 3 speaker nodes, while each of the remaining travel paths has only 2 speaker nodes.
At the current moment, the current acoustic-image trajectory points P31, P32, P33, P34 of travel paths S1, S2, S3, S4 are moving in acoustic-image region Z4. If the acoustic-image moving speed on each travel path is the same, these current trajectory points are located on a circle centered on S0. At this moment, on each travel path only the speaker nodes adjacent to the current trajectory point on its inner and outer sides output level; the output levels of the remaining speaker nodes are zero or negative infinity. Taking the current trajectory point P31 as an example, its adjacent inner speaker node is the speaker node in the upper right of acoustic-image region Z2 in Fig. 20 (speaker nodes in Fig. 20 are represented by small circles), and its adjacent outer speaker node is the speaker node in the upper right outside acoustic-image region Z4, which is also the end point of travel path S1. At this moment, only the two aforementioned speaker nodes on travel path S1 have level output, and the output level of the speaker node located at the center S0 is zero or negative infinity.
When recording the output level value of each speaker node at each moment within the total duration T of a variable-domain acoustic-image trajectory, the recording can be continuous, or can be performed at a certain frequency. The latter means that the output level values of the speaker nodes are recorded once per certain time interval. In the present embodiment, the output level values of the speaker nodes as the acoustic image travels along the set paths are recorded at a frequency of 25 frames/second or 30 frames/second. Recording the output level data of the speaker nodes at a certain frequency can reduce the data volume, speed up processing when acoustic-image trajectory processing is performed on the input audio signals, and ensure the real-time performance of the acoustic-image travel effect.
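Frame-rate recording can be sketched as sampling a level function at fixed intervals; `level_fn` and the returned layout are illustrative assumptions:

```python
def record_levels(level_fn, total_duration, fps=25):
    """Sample each speaker node's output level at a fixed frame rate
    (e.g. 25 or 30 frames/second) instead of continuously, reducing
    the data volume of the trajectory.  `level_fn(t)` returns the
    per-node level list at time t.
    """
    n_frames = int(total_duration * fps) + 1  # include t = 0
    return [level_fn(i / fps) for i in range(n_frames)]
```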
Claims (10)
1. A multi-professional collaborative editing and control method for film, television and stage, characterized by including:
displaying a time axis on the display interface of an integrated control platform;
adding and/or deleting tracks for controlling corresponding performance devices, the tracks including one or more of audio tracks, video tracks, light tracks and device tracks;
editing track attributes;
adding materials;
editing material attributes;
the integrated control platform sending corresponding control instructions according to each track attribute and its material attributes.
2. The multi-professional collaborative editing and control method for film, television and stage according to claim 1, characterized in that, when the track attributes are edited, the editable attributes include track lock and track mute; the track mute attribute is used to control whether the track takes effect, and the track lock attribute is used to lock the track.
3. The multi-professional collaborative editing and control method for film, television and stage according to claim 1, characterized in that, when a material is added, the material can be added to the selected track, and a material icon corresponding to the material is generated in the track; the track length occupied by the material icon matches the total duration of the material.
4. The multi-professional collaborative editing and control method for film, television and stage according to claim 3, characterized in that, when the material attributes are edited, the attributes of the material can be edited; the material attributes include start position, end position, start time, end time, total duration and playback time length.
5. The multi-professional collaborative editing and control method for film, television and stage according to claim 2, characterized in that the material is a light material, and the generation method of the light material is: the integrated control platform records the light control signal output by a lighting console and stamps a time code onto the recorded light control signal during recording, so as to form the light material; the total duration of the light material is the difference between the time code at the start-recording signal and the time code at the stop-recording signal.
6. The multi-professional collaborative editing and control method for film, television and stage according to claim 1, characterized in that adding and/or deleting an audio track includes:
adding an audio track: adding on the display interface one or more audio tracks parallel to and aligned with the time axis, each audio track corresponding to one output channel;
editing the audio track attributes;
adding audio materials: adding one or more audio materials in the audio track, and generating in the audio track audio material icons corresponding to the audio materials, the length of audio track occupied by an audio material icon matching the total duration of the audio material;
editing the audio material attributes, the audio material attributes including start position, end position, start time, end time, total duration and playback time length.
7. The multi-professional collaborative editing and control method for film, television and stage according to claim 6, characterized in that the method also includes:
adding audio sub-tracks: adding therein one or more audio sub-tracks corresponding to the audio track, each audio sub-track parallel to the time axis, the audio sub-track corresponding to the output channel of its corresponding audio track, the types of audio sub-track including acoustic-image sub-tracks and sound-effect sub-tracks.
8. The multi-professional collaborative editing and control method for film, television and stage according to claim 7, characterized in that the method further comprises:
adding an acoustic-image sub-track and acoustic-image material: adding one or more acoustic-image materials to the acoustic-image sub-track, and generating in the acoustic-image sub-track an acoustic-image material icon corresponding to each acoustic-image material, the length of acoustic-image sub-track occupied by the icon matching the total duration of that acoustic-image material;
editing acoustic-image sub-track attributes;
editing acoustic-image material attributes, the acoustic-image material attributes likewise including start position, end position, start time, end time, total duration, and playback duration.
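An acoustic-image material is, in effect, a sound-image position that varies over time. A minimal sketch of evaluating such a trajectory by linear interpolation between keyframes (the keyframe format and the normalised -1.0 left to 1.0 right pan range are assumptions for illustration, not from the patent):

```python
from bisect import bisect_right

def pan_at(keyframes, t):
    """Interpolate a sound-image (pan) position at time t.

    keyframes: list of (time, position) pairs sorted by time;
    position is an assumed normalised pan value (-1.0 left .. 1.0 right).
    """
    times = [k[0] for k in keyframes]
    i = bisect_right(times, t)
    if i == 0:                       # before the first keyframe
        return keyframes[0][1]
    if i == len(keyframes):         # after the last keyframe
        return keyframes[-1][1]
    (t0, p0), (t1, p1) = keyframes[i - 1], keyframes[i]
    return p0 + (p1 - p0) * (t - t0) / (t1 - t0)

# a two-keyframe trajectory sweeping left to right over 4 seconds
traj = [(0.0, -1.0), (4.0, 1.0)]
```

Editing the acoustic-image material's start/end times would then simply shift or rescale the keyframe times.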
9. The multi-professional collaborative editing and control method for film, television and stage according to claim 7, characterized in that the method further comprises:
adding a sound-effect sub-track;
editing the attributes of the sound-effect sub-track, the attributes of the sound-effect sub-track including audio-effect processing parameters, such that the audio on the output channel corresponding to the sub-track's audio track can be adjusted by modifying the sub-track's sound-effect parameters.
10. The audio control method according to claim 9, characterized in that:
the types of sound-effect sub-track include a volume/gain sub-track and EQ sub-tracks; each audio track may be assigned one volume/gain sub-track and one or more EQ sub-tracks, the volume/gain sub-track being used to adjust the signal level of the output channel corresponding to its audio track, and the EQ sub-tracks being used to apply EQ audio-effect processing to the output signal of that output channel;
before audio material is added, an audio material list is first obtained from the audio server, and audio material is then selected from that list and added to the audio track.
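The volume/gain and EQ sub-tracks of claims 9 and 10 amount to a per-channel processing chain: one gain stage, followed by zero or more EQ stages applied in order. A minimal sketch under those assumptions (the function names and the dB-to-linear conversion convention are illustrative, not from the patent):

```python
def gain_stage(gain_db):
    """Volume/gain sub-track: one per audio track, scales the channel's signal level."""
    factor = 10 ** (gain_db / 20.0)  # convert dB to a linear amplitude factor
    return lambda samples: [s * factor for s in samples]

def process_channel(samples, gain, eq_stages):
    """Apply the track's single gain stage, then each EQ sub-track in order."""
    out = gain(samples)
    for eq in eq_stages:
        out = eq(out)
    return out

# +20 dB gain is a linear factor of 10; no EQ stages attached
boosted = process_channel([1.0, -0.5], gain_stage(20.0), [])
```

Each EQ sub-track would be a further stage in `eq_stages`; here any callable from samples to samples stands in for a real EQ filter.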
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201511030264.2A CN106937023B (en) | 2015-12-31 | 2015-12-31 | multi-professional collaborative editing and control method for film, television and stage |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106937023A true CN106937023A (en) | 2017-07-07 |
CN106937023B CN106937023B (en) | 2019-12-13 |
Family
ID=59444734
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201511030264.2A Active CN106937023B (en) | 2015-12-31 | 2015-12-31 | multi-professional collaborative editing and control method for film, television and stage |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106937023B (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111432259A (en) * | 2020-03-13 | 2020-07-17 | 阿特摩斯科技(深圳)有限公司 | Large-scale performance control system based on time code synchronization |
CN111654737A (en) * | 2020-06-24 | 2020-09-11 | 西安诺瓦星云科技股份有限公司 | Program synchronization management method and device |
CN114745582A (en) * | 2022-03-10 | 2022-07-12 | 冯志强 | Sound-light-electricity linkage control system |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2012013858A1 (en) * | 2010-07-30 | 2012-02-02 | Nokia Corporation | Method and apparatus for determining and equalizing one or more segments of a media track |
CN104754186A (en) * | 2013-12-31 | 2015-07-01 | 广州励丰文化科技股份有限公司 | Device control method |
CN104750059A (en) * | 2013-12-31 | 2015-07-01 | 广州励丰文化科技股份有限公司 | Light control method |
CN104754243A (en) * | 2013-12-31 | 2015-07-01 | 广州励丰文化科技股份有限公司 | Panoramic multichannel audio frequency control method based on variable domain acoustic image control |
CN104754242A (en) * | 2013-12-31 | 2015-07-01 | 广州励丰文化科技股份有限公司 | Variable rail sound image processing-based panoramic multi-channel audio control method |
CN104754178A (en) * | 2013-12-31 | 2015-07-01 | 广州励丰文化科技股份有限公司 | Voice frequency control method |
CN104750058A (en) * | 2013-12-31 | 2015-07-01 | 广州励丰文化科技股份有限公司 | Panoramic multichannel audio frequency control method |
CN104754244A (en) * | 2013-12-31 | 2015-07-01 | 广州励丰文化科技股份有限公司 | Full-scene multi-channel audio control method based on variable domain acoustic image performing effect |
Also Published As
Publication number | Publication date |
---|---|
CN106937023B (en) | 2019-12-13 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106937022A (en) | Audio, video, light, mechanical multi-specialized collaborative editing and control method | |
CN106937021A (en) | Performance integrated control method based on many rail playback technologies of time shaft | |
CN104754178B (en) | audio control method | |
CN105122817B (en) | System and method for media distribution and management | |
CN108021714A (en) | A kind of integrated contribution editing system and contribution edit methods | |
US20130083036A1 (en) | Method of rendering a set of correlated events and computerized system thereof | |
CN104754186B (en) | Apparatus control method | |
CN106937023A (en) | Towards video display, the multi-specialized collaborative editing of stage and control method | |
JP7234935B2 (en) | Information processing device, information processing method and program | |
CN104750059B (en) | Lamp light control method | |
CN104750058B (en) | Panorama multi-channel audio control method | |
CN100418085C (en) | Information processing device and method | |
CN104750051B (en) | Based on the panorama multi-channel audio control method for becoming the control of rail acoustic image | |
CN104754242B (en) | Based on the panorama multi-channel audio control method for becoming the processing of rail acoustic image | |
CN104754244B (en) | Panorama multi-channel audio control method based on variable domain audio-visual effects | |
CN104754243B (en) | Panorama multi-channel audio control method based on the control of variable domain acoustic image | |
CN104751869B (en) | Based on the panorama multi-channel audio control method for becoming the control of rail acoustic image | |
CN106937205A (en) | Towards video display, the complicated sound effect method for controlling trajectory of stage | |
CN106937204A (en) | Panorama multichannel sound effect method for controlling trajectory | |
JP7503257B2 (en) | Content collection and distribution system | |
CN104750055B (en) | Based on the panorama multi-channel audio control method for becoming rail audio-visual effects | |
CN104754241B (en) | Panorama multi-channel audio control method based on variable domain acoustic image | |
CN106851331A (en) | Easily broadcast processing method and system | |
Sexton | Immersive audio: optimizing creative impact without increasing production costs | |
Jot et al. | Scene description model and rendering engine for interactive virtual acoustics |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||