CN106937023B - Multi-professional collaborative editing and control method for film, television and stage

Info

Publication number: CN106937023B
Application number: CN201511030264.2A
Authority: CN (China)
Prior art keywords: track, audio, sound, sound image, tracks
Legal status: Active
Other languages: Chinese (zh)
Other versions: CN106937023A
Inventors: 肖建敏, 翁世峰, 冯嘉明, 陈智杰, 何飞, 白连东
Current and original assignee: Shanghai Li Feng Creative Exhibition Co Ltd
Application filed by Shanghai Li Feng Creative Exhibition Co Ltd; priority to CN201511030264.2A; publication of CN106937023A; application granted; publication of CN106937023B

Classifications

    • H: Electricity
    • H04: Electric communication technique
    • H04N: Pictorial communication, e.g. television
    • H04N5/00: Details of television systems
    • H04N5/222: Studio circuitry; studio devices; studio equipment

Abstract

The invention relates to performance equipment control technology, and in particular to a multi-professional collaborative editing and control method for film, television and stage. The method comprises the following steps: displaying a time axis on a display interface of the integrated console; adding and/or deleting tracks for controlling corresponding performance equipment, wherein the tracks comprise light tracks; editing track attributes; adding materials; editing material attributes; and the integrated console sending out corresponding control instructions according to the track attributes and the material attributes of the tracks. The invention solves the technical problem of multi-professional playback editing and cooperative synchronous control of the audio, video, lighting, mechanical and special-effect equipment of current performance programs.

Description

Multi-professional collaborative editing and control method for film, television and stage
Technical Field
The invention relates to performance equipment control technology, and in particular to a multi-professional collaborative editing and control method for film, television and stage.
Background
One of the more prominent problems in the arrangement of performance programs is coordination and synchronization control among the various specialties (audio, video, lighting, mechanical, etc.). In a large performance each specialty is relatively independent, and a fairly large team is needed to ensure smooth arrangement and performance. In the process of arranging the programs, much of the time is spent coordinating and synchronizing the specialties, quite possibly more time than is actually spent on the programs themselves.
Since each specialty is relatively independent, the control modes differ considerably. If the audio and video of a venue are to be edited synchronously, the video is controlled from the lighting console and the audio by multi-track playback editing; the audio can easily be positioned to any moment and played back from there, but the video can only start from the beginning (an operator can manually adjust the frame number to the corresponding position, but the video cannot start following a time code). This control method is not flexible enough for live performance control.
In addition, once the loudspeaker positions of a conventional professional film, television and stage sound box system are fixed, the sound image is only roughly set at the central position of the stage, via the left and right channel main loudspeakers on the two sides of the stage or the left, middle and right main loudspeakers.
Therefore, solving the problems of editing and synchronous control of performance programs and of flexible control of the sound image running effect is a key technical problem urgently awaiting a solution in this technical field.
Disclosure of Invention
The invention aims to provide a multi-professional collaborative editing and control method that simplifies the multi-professional control of performances on occasions such as film, television and stage, and that allows the sound image of a sound reinforcement system to be set flexibly and quickly.
In order to solve the above technical problems, the invention adopts the following technical scheme: a multi-professional collaborative editing and control method comprising the following steps: displaying a time axis on a display interface of the integrated console; adding and/or deleting tracks for controlling corresponding performance equipment, wherein the tracks comprise one or more of audio tracks, video tracks, light tracks and device tracks; editing track attributes; adding materials; editing material attributes; and the integrated console sending out corresponding control instructions according to the track attributes and the material attributes of the tracks.
Compared with the prior art, the beneficial effects are as follows: the playback master control embodies the concept of 'integrated performance management'. From a technical point of view, the coupling between the several units is very low: they can work independently without influencing one another, and the only prominent connection is 'time', i.e. when, and what, is played out. From the user's perspective, this relationship of 'time' is what interests them most. If the states of the several units can be viewed and managed collectively, the user is relieved of many unnecessary troubles, such as coordinating synchronization among the units, and cross-referencing and comparative correction between specialties during program editing.
Drawings
Fig. 1 is a schematic diagram of the multi-professional collaborative editing and control method according to the embodiment.
Fig. 2 is a method diagram of the audio control portion of the multi-professional collaborative editing and control method according to the embodiment.
Fig. 3 is a schematic diagram illustrating the operation method of an audio sub-track in the multi-professional collaborative editing and control method according to the embodiment.
Fig. 4 is a method diagram of the video control portion of the multi-professional collaborative editing and control method according to the embodiment.
Fig. 5 is a method diagram of the light control part of the multi-professional collaborative editing and control method according to the embodiment.
Fig. 6 is a method diagram of the device control part of the multi-professional collaborative editing and control method according to the embodiment.
Fig. 7 is a schematic diagram of the multi-professional collaborative editing and control system according to the embodiment.
Fig. 8 is a schematic diagram of the audio control module for multi-professional collaborative editing and control according to the embodiment.
Fig. 9 is a schematic diagram of the video control module for multi-professional collaborative editing and control according to the embodiment.
Fig. 10 is a schematic diagram of the light control module for multi-professional collaborative editing and control according to the embodiment.
Fig. 11 is a schematic diagram of the device control module for multi-professional collaborative editing and control according to the embodiment.
Fig. 12 is a schematic interface diagram of the multi-track playback editing module of the multi-professional collaborative editing and control method according to the embodiment.
Fig. 13 is a schematic diagram of the audio control portion of the multi-professional collaborative editing and control system according to the embodiment.
Fig. 14 is a schematic diagram of the trajectory matrix module of the multi-professional collaborative editing and control system according to the embodiment.
Fig. 15 is a schematic diagram of the video control portion of the multi-professional collaborative editing and control system according to the embodiment.
Fig. 16 is a schematic diagram of the light control portion of the multi-professional collaborative editing and control system according to the embodiment.
Fig. 17 is a schematic diagram of the device control section of the multi-professional collaborative editing and control system according to the embodiment.
Fig. 18 is a step diagram of the sound image trajectory control method of the embodiment.
Fig. 19 is a schematic diagram of the steps of the sound image trajectory data generating method according to the embodiment.
Fig. 20 is a first schematic diagram of the sound box distribution map and the variable-domain sound image trajectory according to the embodiment.
Fig. 21 is a second schematic diagram of the sound box distribution map and the variable-domain sound image trajectory according to the embodiment.
Detailed Description
The following describes various embodiments of the present invention, including the sound image trajectory control, with reference to the drawings.
This embodiment provides a multi-professional collaborative editing and control method for film, television and stage, which simplifies the multi-professional control of performances on such occasions and allows the sound image of a sound reinforcement system to be set flexibly and quickly. The method realizes the centralized arrangement and control of the materials of multiple specialties through the multi-track playback editing module of an integrated console.
As shown in fig. 1, the film, television and stage oriented multi-professional collaborative editing and control method includes the following steps:
S101: displaying a time axis on a display interface of the integrated console;
S102: adding and/or deleting tracks for controlling corresponding performance equipment, wherein the tracks comprise one or more of audio tracks, video tracks, light tracks and device tracks;
S103: editing track attributes;
S104: adding materials;
S105: editing material attributes;
S106: and the integrated console sends out corresponding control instructions according to the attributes of the tracks and the material attributes of the tracks.
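The track and material organization of steps S101 to S106 can be pictured with a small data model. The following Python sketch is illustrative only (all names are hypothetical, not from the patent): a timeline holds typed tracks, tracks hold materials, and the console scans the timeline and emits a control instruction for whatever is active.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Material:
    name: str
    start_time: float   # actual playback start on the time axis (s)
    end_time: float     # actual playback end on the time axis (s)

@dataclass
class Track:
    kind: str           # "audio" | "video" | "light" | "device"
    muted: bool = False
    locked: bool = False
    materials: List[Material] = field(default_factory=list)

@dataclass
class Timeline:
    tracks: List[Track] = field(default_factory=list)

    def instructions_at(self, t: float) -> List[Tuple[str, str, float]]:
        """Control instructions for every material active at time-axis moment t."""
        out = []
        for track in self.tracks:
            if track.muted:
                continue
            for m in track.materials:
                if m.start_time <= t < m.end_time:
                    out.append((track.kind, m.name, t - m.start_time))
        return out

tl = Timeline([Track("audio", materials=[Material("opening.wav", 0.0, 12.5)])])
print(tl.instructions_at(3.0))   # [('audio', 'opening.wav', 3.0)]
```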
As shown in fig. 2 and fig. 12, the multi-professional collaborative editing and control method may optionally include multi-track audio playback control (corresponding to the audio control module described later), in which case the method includes the following steps:
S201: adding audio tracks: add one or more audio tracks (areas) 1, 2 on the display interface, parallel to and aligned with the time axis, each audio track corresponding to an output channel.
S202: editing audio track attributes, which include track locking and track muting. The track mute attribute controls whether the audio materials and all sub-tracks on the track are muted; it is the master control of the audio track. When the track locking attribute is enabled, apart from a few individual attributes such as muting and adding or hiding sub-tracks, no other attribute of the track, and no material position or material attribute within it, can be modified.
S203: adding audio material: add one or more audio materials 111, 112, 113, 211, 212, 213, 214 to the audio tracks 1, 2; for each, a corresponding audio material item is generated in the audio track, and the length of audio track it occupies matches the total duration of the audio material. Before adding audio materials, an audio material list is obtained from the audio server, and materials are then selected from this list and added to an audio track. When an audio material is added to an audio track, an audio attribute file corresponding to it is generated; the integrated console controls the instructions sent to the audio server by editing this attribute file rather than by directly calling or editing the source file of the audio material, which ensures the safety of the source files and the stability of the integrated console.
S204: editing the audio material attributes, which include a start position, an end position, a start time, an end time, a total duration and a play length. The start position is the time-axis moment vertically aligned with the leading edge of the audio material, and the end position is the time-axis moment vertically aligned with its trailing edge; the start time is the actual playback start of the audio material on the time axis, and the end time is its actual playback end on the time axis. In general, the start time may be delayed relative to the start position, and the end time advanced relative to the end position. The total duration is the original length of the audio material, i.e. the time difference between the start position and the end position; the play length is the length of time the audio material actually plays on the time axis, i.e. the time difference between the start time and the end time. A trimming operation on the audio material can thus be realized by adjusting the start time and end time, so that only the part the user wants to hear is played.
The start position and end position can be changed by adjusting (laterally moving) the position of the audio material within the audio track, but their spacing on the time axis does not change; that is, the length of the audio material does not change. The actual playback time of the audio material, and its length on the time axis, can be changed by adjusting its start time and end time. Several audio materials may be placed in one audio track, meaning that they are played in sequence (through the corresponding output channel) over the period indicated by the time axis. Note that the (temporal) positions of the audio materials within any audio track can be adjusted freely, but the audio materials must not overlap.
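The four timing attributes and the trimming rule described above can be summarized in a short sketch (hypothetical names; a sketch of the stated relationships, not the console's actual file format):

```python
from dataclasses import dataclass

@dataclass
class MaterialTiming:
    start_pos: float    # time-axis moment of the material's leading edge
    end_pos: float      # time-axis moment of the material's trailing edge
    start_time: float   # actual playback start; may be delayed past start_pos
    end_time: float     # actual playback end; may be advanced before end_pos

    @property
    def total_duration(self) -> float:      # original length of the material
        return self.end_pos - self.start_pos

    @property
    def play_length(self) -> float:         # trimmed length actually played
        return self.end_time - self.start_time

    def move(self, delta: float) -> None:
        """Lateral move along the time axis: all four moments shift together,
        so total_duration (the material's length) is unchanged."""
        self.start_pos += delta; self.end_pos += delta
        self.start_time += delta; self.end_time += delta

m = MaterialTiming(10.0, 40.0, 12.0, 35.0)
print(m.total_duration, m.play_length)      # 30.0 23.0
```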
Furthermore, since the integrated console only manipulates the attribute files corresponding to the audio materials, it can also perform cutting and splicing operations on them. The cutting operation divides one audio material on the audio track into several audio materials; each part has its own attribute file, the source file remains intact, and the integrated console issues control commands from the new attribute files to call the source file in turn for the corresponding playback and sound effect operations. Similarly, the splicing operation combines two audio materials into one, merging their attribute files into a single attribute file through which the console directs the audio server to call the two audio source files.
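A sketch of the cut and splice operations on attribute files (illustrative names; the source file itself is never touched):

```python
from dataclasses import dataclass
from typing import List

@dataclass
class AttributeRecord:
    source: str          # path of the audio source file (left intact)
    start_time: float    # playback start on the time axis
    end_time: float      # playback end on the time axis

def cut(rec: AttributeRecord, t: float):
    """Divide one audio material into two at moment t: each half gets its own
    attribute record, both still pointing at the same intact source file."""
    assert rec.start_time < t < rec.end_time
    return (AttributeRecord(rec.source, rec.start_time, t),
            AttributeRecord(rec.source, t, rec.end_time))

def splice(a: AttributeRecord, b: AttributeRecord) -> List[AttributeRecord]:
    """Combine two audio materials into one attribute file; on playback the
    console issues commands that call the two source files in sequence."""
    assert a.end_time <= b.start_time
    return [a, b]        # one merged attribute file holding both segments
```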
Furthermore, several groups of physical operation keys, each group corresponding to one of the audio tracks, can be arranged on the integrated console, so that the attributes of the audio materials can be adjusted manually. For example, a material playback time adjustment knob can move an audio material back and forth (along the time axis) within its audio track.
S205: adding audio sub-tracks 12, 13, 14, 15, 21, 22: add one or more audio sub-tracks corresponding to one of the audio tracks; each audio sub-track is parallel to the time axis and corresponds to the output channel of its parent audio track.
Each audio track may have attached audio sub-tracks; the types of audio sub-track include the sound image sub-track and the sound effect sub-track. The sound image sub-track applies sound image trajectory processing to some or all of the audio materials of the audio track, and the sound effect sub-track applies sound effect processing to some or all of them. This step may further include the following steps:
S301: adding a sound image sub-track and sound image materials: add one or more sound image materials 121, 122 to the sound image sub-track; a corresponding sound image material item is generated in the sub-track, and the length of sound image sub-track it occupies matches the total duration of the sound image material.
S302: editing the sound image sub-track attributes; the editable attributes include track locking and track muting.
S303: editing the sound image material attributes; like those of audio material, these include a start position, an end position, a start time, an end time, a total duration and a play length.
Through the sound image material on a sound image sub-track, sound image trajectory processing can be applied to the signal of the output channel of the parent audio track during the period between the start time and end time of that sound image material. Thus, by adding different types of sound image material to the sound image sub-track, different types of sound image trajectory processing can be applied to the signal of the corresponding output channel; and by adjusting the start position, end position, start time and end time of each sound image material, the moment at which the trajectory processing starts and the time the trajectory effect lasts can be adjusted.
Sound image material is distinguished from audio material in that audio material represents audio data, whereas sound image material represents sound image trajectory data. Sound image trajectory data are the time-varying output level data of each sound box node required to make the sound image formed by the output levels of the virtual sound box nodes in the sound box distribution map run along a preset path, or remain stationary, within a time period of set length. That is, the sound image trajectory data contain the output level change data of all the sound box nodes in the distribution map over that period. The types of sound image trajectory data are fixed-point, variable-track and variable-domain trajectory data; the type of the trajectory data determines the type of the sound image material, and the total duration of the sound image motion corresponding to the trajectory data determines the time difference between the start position and end position of the sound image material, i.e. the material's total duration. Sound image trajectory processing means adjusting the actual output level of the sound box entity corresponding to each sound box node according to the trajectory data, so that the sound image of the physical sound box system runs along a set path, or remains stationary, within the time period of set length.
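In other words, sound image trajectory data amounts to a level-versus-time table per sound box node. A minimal sketch of such a structure (hypothetical layout, assumed here purely for illustration):

```python
from dataclasses import dataclass
from typing import Dict, List, Tuple

NEG_INF = float("-inf")        # "no output" level of an unselected node

@dataclass
class TrajectoryData:
    kind: str                  # "fixed-point" | "variable-track" | "variable-domain"
    total_duration: float      # total duration of the sound image motion (s)
    # levels[node_id] -> samples of (time, output level in dB) over the duration
    levels: Dict[str, List[Tuple[float, float]]]

    def level_at(self, node_id: str, t: float) -> float:
        """Output level of one sound box node at moment t (nearest sample)."""
        samples = self.levels.get(node_id)
        if not samples:
            return NEG_INF
        return min(samples, key=lambda s: abs(s[0] - t))[1]

traj = TrajectoryData("variable-domain", 10.0,
                      {"k": [(0.0, 0.0), (10.0, NEG_INF)]})
print(traj.level_at("k", 0.2))   # 0.0 (nearest recorded sample)
```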
S304: adding sound effect sub-tracks; the types of sound effect sub-track include volume and gain sub-tracks 13, 22 and EQ sub-tracks 14, 15, 21: each audio track has one volume and gain sub-track and may have one or more EQ sub-tracks. The volume and gain sub-track adjusts the signal level of the output channel corresponding to the audio track, and the EQ sub-tracks apply EQ sound effect processing to the signal of that output channel.
S305: editing the sound effect sub-track attributes, which include track locking, track muting and a track identification, together with the sound effect processing parameters corresponding to the sub-track type. For example, the volume and gain sub-track carries an output level adjustment parameter, and an EQ sub-track carries EQ processing parameters. The sound effect of the output channel of the parent audio track can be adjusted by modifying the sound effect parameters of the sound effect sub-track.
S206: store the data, or generate control instructions for the source files corresponding to the audio materials according to the attributes of the audio track and its sub-tracks and the attributes of the audio materials and sound image materials, and perform playback control and sound image and sound effect processing control on the audio source files according to these instructions.
The control instructions determine whether the audio source file of an audio material is called (played), the start and end times of its playback (in time-axis time), and the sound image and sound effect processing applied to it; the specific instructions correspond to the attributes of each audio track and its attached sub-tracks, and to the attributes of the audio and sound image materials. In other words, the audio track does not directly call or process the source file of the audio material; it only manipulates the attribute file corresponding to the audio source file, and achieves indirect control of the source file by editing and adjusting its attribute file, by adding/editing sound image materials, and through the attributes of the audio track and its sub-tracks.
For example, audio material added to an audio track enters a playlist, and when the audio track starts playing, the audio material is played. By editing the audio track attributes the mute attribute can be controlled, i.e. whether the audio track and its attached sub-tracks are muted (take effect); by editing the track locking attribute the locked state can be controlled, in which, apart from individual attributes such as muting and adding or hiding sub-tracks, no other attributes, material positions or material attributes in the audio track can be modified. Refer to the preceding description for more details.
As shown in fig. 4 and fig. 12, the multi-professional collaborative editing and control method of this embodiment may optionally add video playback control (corresponding to the video control module described later), which specifically includes the following steps:
S401: adding a video track: add (on the display interface) a video track 4 (area) parallel to and aligned with the time axis; the video track corresponds to a controlled device, in the invention a video server.
S402: editing video track attributes; the editable attributes include track locking and track muting, and are similar to the audio track attributes.
S403: adding video material: add one or more video materials 41, 42, 43, 44 to the video track; a corresponding video material item is generated in the track, and the length of video track it occupies matches the total duration of the video material. Before adding video material, a video material list is obtained from the video server, and the video material is then selected from this list and added to the video track. When a video material is added to the video track, a video attribute file corresponding to it is generated; the integrated console controls the instructions sent to the video server by editing this attribute file rather than by directly calling or editing the source file of the video material, ensuring the safety of the source files and the stability of the integrated console.
S404: editing the video material attributes, which include a start position, an end position, a start time, an end time, a total duration and a play length, and are similar to the audio material attributes. The video material can likewise be moved laterally, cut and spliced, and a group of physical operation keys corresponding to the video track can be added on the integrated console so that the attributes of the video material can be adjusted manually.
S405: store the data, or generate control instructions for the source file corresponding to the video material according to the video track attributes and the attributes of the video material, and perform playback and effect processing control on the video source file according to these instructions. As with the audio track, the specific control commands correspond to the attributes of the video track and the video materials.
As shown in fig. 5 and fig. 12, the multi-professional collaborative editing and control method of this embodiment may optionally add lighting control (corresponding to the light control module described later), which specifically includes the following steps:
S501: adding a light track: add (on the display interface) a light track 3 (area) parallel to and aligned with the time axis; the light track corresponds to a controlled device, in the invention a lighting network signal adapter (such as an Artnet network card).
S502: editing the light track attributes; the editable attributes include track locking and track muting, and are similar to the audio track attributes.
S503: adding light materials: add one or more light materials 31, 32, 33 to the light track; a corresponding light material item is generated in the track, and the length of light track it occupies matches the total duration of the light material. As with audio and video material, the light track does not load the light material itself; it only generates the attribute file corresponding to the light material source file and issues control instructions through that attribute file to control the output of the source file.
A light material is lighting network control data of a certain duration, such as Artnet data in which DMX data is packaged. Light materials can be generated as follows: after a conventional lighting console has arranged the lighting program, the integrated console is connected via its lighting network interface to the lighting network interface of the console and records the light control signals it outputs; during recording, the integrated console must stamp a time code onto the recorded light control signals so that they can be edited on the light track.
S504: editing the light material attributes, which include a start position, an end position, a start time, an end time, a total duration and a play length, and are similar to the audio material attributes. The light materials can likewise be moved laterally, cut and spliced, and a group of physical operation keys corresponding to the light track can be added on the integrated console so that the attributes of the light materials can be adjusted manually.
S505: store the data, or generate control instructions for the source file corresponding to the light material according to the light track attributes and the attributes of the light material, and perform playback and effect processing control on the light source file according to these instructions. As with the audio track, the specific control commands correspond to the attributes of the light track and the light materials.
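A sketch of the recording step described for S503: lighting-network frames (e.g. Artnet packets carrying DMX data) are captured and stamped with a time code so the result can be edited on the light track. Here `recv_frame` is a hypothetical capture function, not a real API:

```python
import time
from typing import Callable, List, Tuple

def record_light_material(recv_frame: Callable[[], bytes],
                          duration: float) -> List[Tuple[float, bytes]]:
    """Record the light control signals output by a lighting console for
    `duration` seconds, stamping each captured frame with a time code."""
    frames: List[Tuple[float, bytes]] = []
    t0 = time.monotonic()
    while True:
        now = time.monotonic() - t0
        if now >= duration:
            break
        frames.append((now, recv_frame()))   # (time code, raw network frame)
    return frames    # the light material: time-coded control data
```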
As shown in fig. 6 and fig. 12, the multi-professional collaborative editing and control method of this embodiment may optionally add device control (corresponding to the device control module described later), which specifically includes the following steps:
S601: adding device tracks: add (on the display interface) one or more device tracks 5 (areas) parallel to the time axis; each device track corresponds to a controlled device, e.g. a mechanical device. Before adding a device track it must be confirmed that the controlled device has established a connection with the integrated console. The integrated console and the controlled devices can establish connections over TCP: for example, the integrated console is set up as a TCP server and each controlled device as a TCP client, and the TCP client of a controlled device actively connects to the TCP server of the integrated console once it joins the network.
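A minimal sketch of this connection scheme (port and framing are illustrative assumptions): the integrated console listens as a TCP server, and each controlled device connects as a client when it joins the network.

```python
import socket
import threading

def run_console_server(port: int = 9000) -> dict:
    """Integrated console side: accept connections from controlled devices."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", port))
    srv.listen()
    clients = {}                        # addr -> connection, used by device tracks

    def accept_loop():
        while True:
            conn, addr = srv.accept()   # a device has actively connected
            clients[addr] = conn

    threading.Thread(target=accept_loop, daemon=True).start()
    return clients

def connect_device(console_ip: str, port: int = 9000) -> socket.socket:
    """Controlled device side: actively connect to the console after joining."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.connect((console_ip, port))
    return s
```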
S602: editing device track attributes; the editable attributes include track locking and track muting, and are similar to the audio track attributes. If a device track is muted, none of its attached control sub-tracks performs any operation.
S603: adding control sub-tracks: add one or more control sub-tracks corresponding to one of the device tracks; each control sub-track is parallel to and aligned with the time axis, and corresponds to the controlled device of its parent device track.
S604: adding control materials: add control materials of the type matching the type of the control sub-track; a corresponding control material item is generated on the control sub-track, and the length of control sub-track it occupies matches the total duration of the control material.
The types of control sub-track include the TTL control sub-track, the relay control sub-track and the network control sub-track. Accordingly, the control materials that can be added to a TTL control sub-track include TTL materials 511, 512, 513 (e.g. TTL high-level and TTL low-level control materials); those that can be added to a relay sub-track include relay materials 521, 522, 523, 524 (e.g. relay-on and relay-off control materials); and those that can be added to a network control sub-track include network materials 501, 502, 503 (e.g. TCP/IP, UDP, 232 and 485-protocol communication control materials). Adding the corresponding control material causes the corresponding control instruction to be sent; a control material is, in effect, a control instruction.
S605: editing the control material attributes, which include a start position, an end position and a total duration. The start and end positions can be changed by adjusting (laterally moving) the position of the control material within its control sub-track, but their spacing on the time axis does not change; that is, the length of the control material does not change. The start position of a control material is the time-axis moment at which the corresponding control instruction starts being sent to the corresponding controlled device, and its end position is the moment at which sending stops.
Furthermore, an association relationship may be set between control materials in the same control sub-track, so that if the control command of the material whose start position is earlier on the time axis is not executed successfully, the control command of the associated material whose start position is later is not issued by the integrated console (or not executed by the controlled device); an example is the opening/closing and lifting control of a screen.
Furthermore, a protection time of a certain length can be set before and after each control material on a control sub-track; that is, within the protection time no control material can be added and no control command is sent.
S606: store the data, or generate control instructions according to the attributes of the device track, its control sub-tracks and the control materials, and transmit them to the corresponding controlled devices.
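The association and protection-time rules can be sketched as two checks (hypothetical structure; the guard length is an assumed parameter):

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ControlMaterial:
    command: bytes
    start_pos: float                         # moment sending of the command starts
    end_pos: float                           # moment sending stops
    guard: float = 0.5                       # protection time before/after (s)
    depends_on: Optional["ControlMaterial"] = None
    succeeded: bool = False

def may_issue(mat: ControlMaterial) -> bool:
    """A later associated command is withheld if the earlier one did not succeed."""
    return mat.depends_on is None or mat.depends_on.succeeded

def may_place(track: List[ControlMaterial], new: ControlMaterial) -> bool:
    """No material may be placed inside another material's protection time."""
    return all(new.start_pos >= m.end_pos + m.guard or
               new.end_pos <= m.start_pos - m.guard
               for m in track)
```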
In addition, this embodiment also provides a multi-professional collaborative editing and control system (performance integrated control system). As shown in fig. 7, the system includes an integrated console 70 and optionally includes an audio server 76, a video server 77, a light control module 78 and a device control module 79. The integrated console 70 includes a multi-track playback editing module 71, which can execute one or more of the audio control, video control, light control and device control of the method above; the specific implementation steps are not repeated here. The multi-track playback editing module includes an audio control module 72, and optionally a video control module 73, a light control module 74 and a device control module 75.
As shown in fig. 8, the audio control module 72 includes an audio track adding module 81, an audio track attribute editing module 82, an audio material adding module 83, an audio material attribute editing module 84, an audio sub-track adding module 85 and a data retention/audio output control instruction module 86; the functions of these modules correspond one-to-one to the foregoing steps S201 to S206 and are not repeated here. The same applies hereinafter.
Further, the audio playback control principle of the multi-professional collaborative editing and control system is shown in fig. 13. Alongside the multi-track playback editing module, the integrated console further includes a fast playback editing module and a physical input module: the fast playback editing module edits audio material in real time and sends the corresponding control instructions to the audio server 76 to play the source file of the material, while the physical input module corresponds to the physical operation keys on the integrated console 70 and performs real-time tuning control of sound sources fed into the console from outside.
Correspondingly, the audio server is provided with a mixing matrix module, a trajectory matrix module, a 3x1 output mixing module and a physical output module. The mixing matrix module receives the audio signals of the audio source files called in the audio server by the control commands of the fast playback editing module and the multi-track playback editing module, together with the audio signals of the physical input module; the trajectory matrix module receives the same audio inputs. The mixing matrix applies mixing processing to each audio input and passes the result to the output mixing module; the trajectory matrix module applies sound image trajectory processing to each audio input and likewise passes it on. The output mixing module receives the audio from the mixing matrix module, the trajectory matrix module and the physical input module, applies 3x1 mixing, and outputs the result through the physical output interfaces of the physical output module. Sound image trajectory processing means adjusting the level output to each sound box entity according to the sound image trajectory data, so that the sound image of the physical sound box system runs along a set path, or remains stationary, within a time period of set length.
In this embodiment, the source files of the audio materials are stored in an audio server outside the integrated console. The multi-track playback editing module does not directly call or process the source file of an audio material; it only manipulates the attribute file corresponding to the audio source file, achieving indirect control of the source file by editing and adjusting its attribute file, adding/editing sound image materials, and setting the attributes of the audio track and its sub-tracks. The output channel corresponding to each audio track therefore outputs only control signals/instructions for the audio source file, and the audio server that receives the control instructions executes the various processing of the source file.
As shown in fig. 14, the multi-track playback editing module receives a list of the available audio materials from the audio server 76 and does not process audio source files directly; the source files are stored in the audio server, which, after receiving the corresponding control commands, calls them for the various kinds of sound effect processing, for example sending them into the mixing matrix module for mixing or into the trajectory matrix module for trajectory processing. The sound image material, which in fact is also a control command, may be stored on the integrated console 70 or uploaded to the audio server.
As shown in fig. 9, the video control module 73 includes a video track adding module 91, a video track attribute editing module 92, a video material adding module 93, a video material attribute editing module 94 and a data retention/video output control instruction module 95; the functions of these modules correspond one-to-one to the foregoing steps S401 to S405.
Further, the video editing and playback control principle of the system is shown in fig. 15: the integrated console does not directly execute the source file of a video material; it sends control instructions to the video server based on the acquired video material list and the corresponding attribute files, and the video server then performs the playback and effect operations on the video source file according to those instructions.
As shown in fig. 10, the light control module 74 includes a light track adding module 110, a light track attribute editing module 120, a light material adding module 130, a light material attribute editing module 140 and a data retention/light output control instruction module 150; the functions of these modules correspond one-to-one to the foregoing steps S501 to S505.
Further, the light control principle of the system is shown in fig. 16: the integrated console is additionally provided with a light signal recording module that records the light control signals output by a lighting console and stamps a time code onto them during recording, so that they can be edited on the light track.
As shown in fig. 11, the device control module 75 includes a device track adding module 151, a device track attribute editing module 152, a control sub-track adding module 153, a control material adding module 154, a control material attribute editing module 155 and a data retention/device output control instruction module 156; the functions of these modules correspond one-to-one to the foregoing steps S601 to S606.
Further, the device control principle of the system is shown in fig. 17: the various device control signals output by the integrated console are delivered to the corresponding controlled equipment through the various protocol interfaces on the device adapter.
In addition, the integrated console may further include a sound image trajectory data generation module for creating sound image trajectory data (i.e. sound image material); the trajectory data produced by this module can be called by the multi-track playback editing module, thereby directing the trajectory matrix module of the audio server to carry out sound image trajectory control. Further, this embodiment provides a sound image trajectory control method in which the control host (e.g. the integrated console or the audio server) sets the output level values of each sound box node of a physical sound box system so that the sound image moves, or stays still, in a set manner within a set total duration. As shown in fig. 18, the control method includes:
S181: generating the sound image trajectory data;
S182: adjusting the output level of each sound box entity according to the sound image trajectory data within the total duration corresponding to the trajectory data;
S183: superimposing, within the total duration, the input level of the signal fed to each sound box entity and the output level of the corresponding entity, to obtain the actual output level of each sound box entity.
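Since both quantities are levels expressed in dB, the superposition of S183 is a simple addition per sound box entity; a one-line sketch (names hypothetical):

```python
def actual_output_level(input_level_db: float, node_level_db: float) -> float:
    """S183: actual level of a sound box entity = level of the input signal
    + trajectory output level of its sound box node (both in dB).
    A node level of -inf silences the box regardless of the input."""
    return input_level_db + node_level_db

assert actual_output_level(-6.0, -3.0) == -9.0
assert actual_output_level(-6.0, float("-inf")) == float("-inf")
```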
The sound image trajectory data are the time-varying output level data of the sound box nodes required to make the sound image formed by the output levels of the virtual sound box nodes in the virtual sound box distribution map on the integrated console run along a preset path, or remain stationary, within a time period of set length (i.e. the total duration of the sound image). That is, the trajectory data contain the output level change data of all the sound box nodes in the distribution map over that period. For each sound box node the output level changes with time over the set period; the level may also be zero, a negative number, or even negative infinity, with negative infinity preferred.
Each sound box node corresponds to a sound box entity in the physical sound box system, and each sound box entity comprises one or more sound boxes at the same position; that is, each node may correspond to one or more sound boxes located at the same place. For the physical sound box system to reproduce the sound image trajectory route accurately, the position distribution of the virtual sound box nodes in the distribution map should correspond to the position distribution of the sound box entities, and in particular the relative position relationships between nodes should correspond to those between entities.
The level actually output by a sound box entity is obtained by superimposing the level of the input signal and the output level, in the trajectory data, of the sound box node corresponding to that entity. The former is a property of the input signal; the latter can be regarded as a property of the sound box entity. At any moment different input signals have different input levels, while a given sound box entity has only one output level. Sound image trajectory processing can thus be understood as output level processing applied per sound box entity to form a preset sound image trajectory effect (including a stationary sound image).
The superposition of the input level and the output level of a sound box entity can be processed before the audio signal is actually fed to the entity, or after the signal enters it; this depends on the link structure of the whole sound reinforcement system, and on whether an audio signal processing module, such as a DSP unit, is arranged inside the sound box entity.
The types of sound image trajectory data include fixed-point sound image data, variable-track sound image trajectory data and variable-domain sound image trajectory data. To make it easy to control the speed and progress of the sound image when trajectory data are generated on the integrated console in simulation, the running path of the sound image is represented by line segments connecting, in sequence, a number of sound image trajectory control points distributed over the sound box distribution map; that is, the running path and the total running time of the sound image are determined by a number of discretely distributed trajectory control points.
A fixed-point sound image is the situation in which one or more selected sound box nodes in the distribution map output level continuously during a period of set length while the output level values of the unselected nodes are zero or negative infinity. Accordingly, fixed-point sound image data are the time-varying output level data of each node in which, during the set period, the selected nodes output level continuously and the unselected nodes output none (output level zero or negative infinity). For a selected node, the output level is continuous during the set time (it may fluctuate up and down); for an unselected node, it remains at negative infinity.
A variable-track sound image is the situation in which, during a period of set length, each sound box node outputs level according to a certain rule so that the sound image runs along a preset path. Accordingly, variable-track trajectory data are the time-varying output level data of each node required to make the sound image run along the preset path during that period. The running path need not be very precise, and the duration of the sound image motion is not long; it is only necessary to create approximately a running effect recognizable by the listener.
A variable-domain sound image is the situation in which, during a period of set length, the output level of each sound box node changes according to a certain rule so that the sound image runs across a preset region. Accordingly, variable-domain trajectory data are the time-varying output level data of each node required to make the sound image run across the preset region during the set time.
As shown in fig. 19, the variable-domain sound image trajectory data of this embodiment can be obtained as follows:
S1901: setting sound box nodes: add or delete sound box nodes on the sound box distribution map 10.
S1902: modifying the sound box node attributes: the attributes of a sound box node include its coordinates, sound box type, corresponding output channel, initialization level, sound box name, etc. Sound box nodes are represented by sound box icons on the distribution map, and a node's coordinates can be changed by moving its icon. The sound box type means full-range or subwoofer; the specific types can be divided according to actual needs. Every node in the distribution map is assigned an output channel; each output channel corresponds to one sound box entity of the physical system, and each entity comprises one or more sound boxes at the same position. That is, each node may correspond to one or more sound boxes located at the same place. For the sound image running path designed in the distribution map to be reproduced, the position distribution of the sound box entities should correspond to that of the nodes in the map.
S1903: dividing the sound image regions and setting the sound image running paths:
A point on the sound box distribution map is selected as the center S0 and a sound box node is added there; then several concentric circular regions are drawn around S0, the concentric circle of largest diameter covering all or part of the sound box nodes on the map. The region enclosed by the smallest circle is set as sound image region Z1, and the annular regions between adjacent circles are set, from the inside outward, as sound image regions Z2, Z3, Z4 … ZN (N a natural number equal to or greater than 2); that is, the region between the smallest circle and the next smallest is Z2, the region between the second and third smallest is Z3, and so on (see fig. 20).
A number of running paths radiating outward from the center S0 are set, each a straight line segment traversing the sound image regions. These paths may cover all or some of the sound box nodes in the concentric circular region, preferably all of them.
The end points of these running paths are sound box nodes inside or outside the concentric circular region; the starting point of every path is the center S0, and the end point depends on the distribution of sound box nodes along the path direction: (1) if there is a sound box node beyond the circle of largest diameter in the path direction, the end point is the node beyond that circle whose distance from it is smallest; (2) if there is no node beyond the circle of largest diameter in the path direction, the end point is the node farthest from the center S0 in that direction.
Every running path has at least one sound box node and may have two or more.
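The end-point rule of S1903 can be sketched as follows (coordinates and helper names are illustrative; the nodes are assumed to be given already filtered to one path direction):

```python
import math
from typing import List, Tuple

Point = Tuple[float, float]

def path_end_node(center: Point, nodes_on_path: List[Point],
                  r_max: float) -> Point:
    """End point of one radial running path starting at the center S0."""
    def dist(p: Point) -> float:
        return math.hypot(p[0] - center[0], p[1] - center[1])
    outside = [n for n in nodes_on_path if dist(n) > r_max]
    if outside:
        # rule (1): the node beyond the largest circle nearest to that circle
        return min(outside, key=dist)
    # rule (2): no node beyond the largest circle -> farthest node from S0
    return max(nodes_on_path, key=dist)

print(path_end_node((0, 0), [(1, 0), (3, 0), (6, 0)], r_max=4.0))  # (6, 0)
```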
Referring to fig. 20, the sound box distribution map is provided with four concentric circles centered on S0, dividing it into 4 sound image regions Z1, Z2, Z3 and Z4, and 13 sound image running paths S1, S2, S3, S4, S5, S6, S7, S8, S9, S10, S11, S12 and S13 are set; every sound box node in the concentric circular region (the region covered by the circle of largest diameter) has a corresponding running path. Paths S2, S3, S5, S8, S9, S10, S11, S12 and S13 pass through only one sound box node each, while paths S1, S4 and S7 pass through two.
Referring to fig. 21, when a point is selected as the center, an existing sound box node may also be chosen directly. The sound box distribution map of fig. 21 is provided with 5 concentric circles centered on sound box node S0', dividing it into 5 sound image regions Z1', Z2', Z3', Z4' and Z5', with a number of sound image running paths. Paths S1', S2' and S3' each have 3 sound box nodes, and at the current moment the sound image trajectory points P41, P42 and P43 on paths S1', S2' and S3' are all located in sound image region Z4'.
S1904: editing the time attributes of the sound image regions, including the moment corresponding to each region, the time from the current region to the next, and the total running time of the sound image. Editing the region attributes is similar to editing the attributes of variable-track trajectory points. If the moment corresponding to one region is modified, the moments of all the regions before it and the total running time must be adjusted; if the time required from one region to the next is adjusted, the moment of the next region and the total running time must be adjusted; and if the total running time is modified, the moment of each region on the running path and the time required from each region to the next are adjusted.
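One simple way to honor the last rule (total running time modified, all region times adjusted) is proportional rescaling; this is an assumption made here for illustration, since the text does not state the adjustment formula:

```python
from typing import List

def rescale_region_times(region_times: List[float],
                         new_total: float) -> List[float]:
    """region_times: the moment each sound image region is reached, in order,
    with the last entry equal to the old total running time."""
    old_total = region_times[-1]
    return [t * new_total / old_total for t in region_times]

print(rescale_region_times([2.0, 5.0, 10.0], 20.0))   # [4.0, 10.0, 20.0]
```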
S1905: recording the variable-domain sound image trajectory data: record the output level values of all the sound box nodes at every moment while the sound image runs in turn through the sound image regions along the set running paths.
The output level value of a relevant sound box node at a given moment can be calculated as follows.
Suppose the total running time of the sound image is T. Within T the sound image trajectory point on each running path moves from the start point to the end point, or from the end point to the start point; the sound image moving speeds on the different paths may be the same or different.
At a moment t, the current sound image trajectory point Pj on a sound image running path P is moving from a sound image region Zi toward the next region Zi+1. On path P, the output levels of the two sound box nodes adjacent to Pj on its inner and outer sides, node k and node k+1, are dBm and dBm+1 respectively, and the output levels of all other nodes on path P are zero or negative infinity. Nodes k and k+1 both lie on the running path: node k is the node on the inner side of trajectory point Pj (the side toward the center S0), and node k+1 is the node on the outer side (the side away from S0). Then, at the moment t, while Pj runs from region Zi to region Zi+1:
Output level of sound box node k: dBm = 10 · ln(η) ÷ 2.3025851 = 10 · log10(η)
Output level of sound box node k+1: dBm+1 = 10 · ln(β) ÷ 2.3025851 = 10 · log10(β)
where l12 is the distance from node k to node k+1, l1p is the distance from node k to the current trajectory point Pj, and lp2 is the distance from Pj to node k+1. It follows from the above formulas that every sound image trajectory point on a running path has two sound box node output levels, except that when the trajectory point lies exactly at a sound box node only that node outputs level.
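A sketch of this crossfade between the two adjacent nodes. The text gives only dB = 10 · ln(x) ÷ 2.3025851 (i.e. 10 · log10(x)); the ratios η = lp2 ÷ l12 and β = l1p ÷ l12 are an assumption made here for illustration, chosen so that a trajectory point sitting exactly on a node leaves only that node sounding:

```python
import math

def pan_levels(l12: float, l1p: float, lp2: float):
    """Output levels (dB) of inner node k and outer node k+1 for trajectory
    point Pj. ASSUMPTION: eta = lp2/l12 and beta = l1p/l12 (linear pan)."""
    def to_db(x: float) -> float:
        return 10 * math.log(x) / 2.3025851 if x > 0 else float("-inf")
    return to_db(lp2 / l12), to_db(l1p / l12)   # (node k, node k+1)

# Pj exactly at node k: l1p = 0, lp2 = l12 -> node k at 0 dB, node k+1 silent.
print(pan_levels(4.0, 0.0, 4.0))    # (0.0, -inf)
# Pj at the midpoint: both nodes at 10*log10(0.5), about -3 dB.
print(pan_levels(4.0, 2.0, 2.0))    # (-3.01..., -3.01...)
```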
Referring to fig. 20, the sound box distribution map is provided with 4 concentric circles centered on the center S0, and a sound box node is placed at the center S0. The concentric circles divide the map into 4 sound image regions Z1, Z2, Z3 and Z4, with 13 sound image running paths S1, S2, S3, S4, S5, S6, S7, S8, S9, S10, S11, S12 and S13. The sound image running paths S1, S4, S7 and S10 each have 3 sound box nodes, while the remaining sound image running paths each have only 2 sound box nodes.
At the current time, the current sound image trajectory points P31, P32, P33 and P34 on the sound image running paths S1, S2, S3 and S4 have run into the sound image region Z4. If the sound image moving speed on each sound image running path is the same, these current sound image track points lie on a circle centered on S0. At this moment, on each sound image running path, only the sound box nodes on the inner and outer sides of the current sound image track point have output levels; the output levels of the other sound box nodes are zero or negative infinity. Taking the current sound image trajectory point P31 as an example, the adjacent inner sound box node is the one located at the upper right in the sound image region Z2 in fig. 20 (sound box nodes in fig. 20 are indicated by small circles), and the adjacent outer sound box node is the one located at the upper right outside the sound image region Z4, which is also the end point of the sound image running path S1. At this moment, only these two sound box nodes on the sound image running path S1 have level outputs, and the output level of the sound box node located at the center S0 is zero or negative infinity.
When recording the output level values of the sound box nodes at each moment over the total time T of a variable-domain sound image track, the values can be recorded continuously or at a certain frequency. The latter means that the output level value of each sound box node is recorded once every fixed interval. In the present embodiment, the output level values of the sound box nodes as the sound image runs along the set path are recorded at 25 frames/second or 30 frames/second. Recording the output level data of each sound box node at a fixed frequency reduces the data volume, speeds up the processing of the input audio signal during sound image track processing, and ensures the real-time performance of the sound image running effect.
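As a rough illustration of fixed-frequency recording: the sketch below is hypothetical (sample_levels is not a name from the patent, and level_at stands in for whatever computes one sound box node's level at time t, for instance along the lines of node_levels above).

# Hypothetical sketch: record one levels snapshot per frame at a fixed
# frame rate instead of recording continuously.
def sample_levels(level_at, node_ids, total_time: float, fps: int = 25):
    frames = int(total_time * fps) + 1
    track = []
    for f in range(frames):
        t = f / fps  # timestamp of this frame
        track.append({node: level_at(node, t) for node in node_ids})
    return track  # one {node: level-in-dB} snapshot per frame

At 25 frames/second, for example, a 60-second variable-domain track yields 1501 snapshots per sound box layout rather than a continuous stream, which is where the reduction in data volume comes from.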

Claims (10)

1. A multi-professional collaborative editing and control method for film, television and stage, characterized by comprising the following steps:
Displaying a time axis on a display interface of the integrated console;
Adding and/or deleting tracks for controlling corresponding performance equipment, wherein the tracks comprise one or more of audio tracks, video tracks, light tracks and device tracks;
Editing track attributes;
Adding materials;
Editing material attributes;
The integrated console sends out corresponding control instructions according to the attributes of the tracks and the material attributes of the tracks;
The control method further comprises the following steps:
Generating variable-domain sound image trajectory data; the variable-domain sound image trajectory data is obtained by the following method:
Setting sound box nodes: adding or deleting sound box nodes on a sound box distribution map;
Modifying the sound box node attributes: the attributes of the sound box nodes comprise sound box coordinates, sound box types, corresponding output channels and initialization levels;
Dividing sound image regions and setting sound image running paths;
Editing the time attributes of the sound image regions, wherein the time attributes comprise the time corresponding to each sound image region, the time required from the current sound image region to the next sound image region, and the total running time of the sound image;
Recording variable-domain sound image track data: recording the output level values of all sound box nodes at every time point while the sound image runs through the sound image regions in sequence along a set running path; when dividing the sound image regions and setting the sound image running paths:
selecting a certain point on the sound box distribution map as the center S0, adding a sound box node at the center S0, and dividing a number of concentric circle regions centered on S0 so as to cover the sound box nodes on the sound box distribution map; the region enclosed by the concentric circle with the smallest diameter is set as sound image region Z1, and the regions between adjacent concentric circles are set, from the inside outward, as sound image regions Z2, Z3, Z4 … ZN, where N is a natural number greater than or equal to 2;
setting a plurality of outward-radiating running paths starting from the center S0, each running path being a straight line segment traversing every sound image region, the running paths covering all or some of the sound box nodes in the concentric circle regions;
In the process of recording the variable-domain sound image track data, the sound image track point on each running path moves from its starting point to its end point, or from its end point to its starting point, within the total sound image running time T, and the sound image moving speeds on the different running paths may be the same or different;
suppose that at a certain time t, the current sound image track point Pj on a certain sound image running path P is moving from a certain sound image region Zi toward the next sound image region Zi+1; on the route of the sound image running path P, the output levels of the adjacent sound box nodes K and K+1 on the inner and outer sides of the sound image track point Pj are dBm and dBm+1 respectively, and the output levels of the sound box nodes other than these two on the sound image running path P are zero or negative infinity; sound box node K and sound box node K+1 are both sound box nodes located on the sound image running path, sound box node K being the sound box node on the inner side of the sound image track point Pj and sound box node K+1 being the sound box node on the outer side of the sound image track point Pj;
Then, at the current time t, while the sound image track point Pj on the sound image running path P runs from the current sound image region Zi to the next sound image region Zi+1:
the output level of sound box node K is dBm = 10 · loge(η) ÷ 2.3025851;
the output level of sound box node K+1 is dBm+1 = 10 · loge(β) ÷ 2.3025851;
wherein l12 is the distance from sound box node K to sound box node K+1, l1P is the distance from sound box node K to the current sound image track point Pj, and lp2 is the distance from the current sound image track point Pj to sound box node K+1.
2. The multi-professional collaborative editing and control method for film, television and stage according to claim 1, wherein, when editing the track attributes, the editable attributes include track lock and track mute; the track mute attribute controls whether the track takes effect, and the track lock attribute locks the track.
3. The multi-professional collaborative editing and control method for film, television and stage according to claim 1, wherein, when material is added, material can be added to a selected track and a material icon corresponding to the material is generated in the track, the length of track occupied by the material icon matching the total duration of the material.
4. The multi-professional collaborative editing and control method for film, television and stage according to claim 3, wherein, when editing the material attributes, the editable material attributes include a start position, an end position, a start time, an end time, a total duration and a play time length.
5. The multi-professional collaborative editing and control method for film, television and stage according to claim 2, wherein the material is lighting material, generated as follows: the integrated console records the light control signals output by the lighting console, and time codes are stamped on the recorded light control signals during recording to form the lighting material, the total duration of the lighting material being the difference between the time code when recording starts and the time code when recording stops.
6. The multi-professional collaborative editing and control method for film, television and stage according to claim 1, wherein adding and/or deleting audio tracks comprises:
Adding audio tracks: adding, on the display interface, one or more audio tracks parallel to and aligned with the time axis, each audio track corresponding to an output channel;
Editing the audio track attributes;
Adding audio material: adding one or more audio materials to an audio track and generating, in the audio track, audio material icons corresponding to the audio materials, the length of audio track occupied by each audio material icon matching the total duration of the audio material;
Editing the audio material attributes, wherein the audio material attributes comprise a start position, an end position, a start time, an end time, a total duration and a play time length.
7. The multi-professional collaborative editing and control method for film, television and stage according to claim 6, wherein the method further comprises:
Adding audio sub-tracks: adding one or more audio sub-tracks corresponding to one of the audio tracks, each audio sub-track being parallel to the time axis and corresponding to the output channel of its parent audio track, the types of audio sub-track comprising sound image sub-tracks and sound effect sub-tracks.
8. The multi-professional collaborative editing and control method for film, television and stage according to claim 7, wherein the method further comprises:
Adding a sound image sub-track and sound image material: adding one or more sound image materials to the sound image sub-track and generating, in the sound image sub-track, sound image material icons corresponding to the sound image materials, the length of sound image sub-track occupied by each sound image material icon matching the total duration of the sound image material;
Editing the sound image sub-track attributes;
Editing the sound image material attributes, wherein the sound image material attributes likewise comprise a start position, an end position, a start time, an end time, a total duration and a play time length.
9. The multi-professional collaborative editing and control method for film, television and stage according to claim 7, wherein the method further comprises:
Adding a sound effect sub-track;
Editing the sound effect sub-track attributes, wherein the sound effect sub-track attributes comprise sound effect processing parameters; the sound effect of the output channel corresponding to the audio track to which the sound effect sub-track belongs can be adjusted by modifying the sound effect parameters of the sound effect sub-track.
10. The multi-professional collaborative editing and control method for film, television and stage according to claim 9, characterized in that:
The types of sound effect sub-track comprise a volume and gain sub-track and EQ sub-tracks; each audio track can be provided with one volume and gain sub-track and one or more EQ sub-tracks, the volume and gain sub-track being used to adjust the signal level of the output channel corresponding to the audio track, and the EQ sub-tracks being used to apply EQ sound effect processing to the output signals of the output channel corresponding to the audio track;
Before audio material is added, an audio material list is obtained from an audio server, and audio material is then selected from the audio material list and added to an audio track.
CN201511030264.2A 2015-12-31 2015-12-31 multi-professional collaborative editing and control method for film, television and stage Active CN106937023B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201511030264.2A CN106937023B (en) 2015-12-31 2015-12-31 multi-professional collaborative editing and control method for film, television and stage

Publications (2)

Publication Number Publication Date
CN106937023A CN106937023A (en) 2017-07-07
CN106937023B true CN106937023B (en) 2019-12-13

Family

ID=59444734

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201511030264.2A Active CN106937023B (en) 2015-12-31 2015-12-31 multi-professional collaborative editing and control method for film, television and stage

Country Status (1)

Country Link
CN (1) CN106937023B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111432259B (en) * 2020-03-13 2022-04-19 阿特摩斯科技(深圳)有限公司 Large-scale performance control system based on time code synchronization
CN111654737B (en) * 2020-06-24 2022-07-12 北京嗨动视觉科技有限公司 Program synchronization management method and device
CN114745582A (en) * 2022-03-10 2022-07-12 冯志强 Sound-light-electricity linkage control system

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2012013858A1 (en) * 2010-07-30 2012-02-02 Nokia Corporation Method and apparatus for determining and equalizing one or more segments of a media track
CN104754244A (en) * 2013-12-31 2015-07-01 广州励丰文化科技股份有限公司 Full-scene multi-channel audio control method based on variable domain acoustic image performing effect
CN104750058A (en) * 2013-12-31 2015-07-01 广州励丰文化科技股份有限公司 Panoramic multichannel audio frequency control method
CN104754242A (en) * 2013-12-31 2015-07-01 广州励丰文化科技股份有限公司 Variable rail sound image processing-based panoramic multi-channel audio control method
CN104754243A (en) * 2013-12-31 2015-07-01 广州励丰文化科技股份有限公司 Panoramic multichannel audio frequency control method based on variable domain acoustic image control
CN104750059A (en) * 2013-12-31 2015-07-01 广州励丰文化科技股份有限公司 Light control method
CN104754178A (en) * 2013-12-31 2015-07-01 广州励丰文化科技股份有限公司 Voice frequency control method
CN104754186A (en) * 2013-12-31 2015-07-01 广州励丰文化科技股份有限公司 Device control method

Similar Documents

Publication Publication Date Title
CN106937022B (en) multi-professional collaborative editing and control method for audio, video, light and machinery
CN106937021B (en) performance integrated control method based on time axis multi-track playback technology
CN104754178B (en) audio control method
US9952739B2 (en) Modular audio control surface
CN106937023B (en) multi-professional collaborative editing and control method for film, television and stage
KR20110020619A (en) Method for play synchronization and device using the same
CN110225224B (en) Virtual image guiding and broadcasting method, device and system
US9094742B2 (en) Event drivable N X M programmably interconnecting sound mixing device and method for use thereof
CN104754186B (en) Apparatus control method
CN101164648A (en) Robot theater
CN101594470A (en) Dynamic recording and broadcasting system of television news studio
CN104750059B (en) Lamp light control method
CN104750058B (en) Panorama multi-channel audio control method
CN104754244B (en) Panorama multi-channel audio control method based on variable domain audio-visual effects
CN104754242B (en) Based on the panorama multi-channel audio control method for becoming the processing of rail acoustic image
CN104754243B (en) Panorama multi-channel audio control method based on the control of variable domain acoustic image
CN104750051B (en) Based on the panorama multi-channel audio control method for becoming the control of rail acoustic image
WO2022030259A1 (en) Signal processing device and method, and program
CN104751869B9 (en) Panoramic multi-channel audio control method based on orbital transfer sound image
CN106937205B (en) Complicated sound effect method for controlling trajectory towards video display, stage
CN106937204B (en) Panorama multichannel sound effect method for controlling trajectory
US11528307B2 (en) Near real-time collaboration for media production
US9904505B1 (en) Systems and methods for processing and recording audio with integrated script mode
CN104754241B (en) Panorama multi-channel audio control method based on variable domain acoustic image
CN104750055A (en) Variable rail sound image effect-based panoramic multi-channel audio control method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant