WO2021190039A1 - Processing method and device for audio signals that can be disassembled and re-edited - Google Patents


Info

Publication number
WO2021190039A1
Authority
WO
WIPO (PCT)
Prior art keywords
audio
auxiliary data
input
editing
audio track
Prior art date
Application number
PCT/CN2020/140722
Other languages
English (en)
French (fr)
Inventor
潘兴德
黄旭
谭敏强
Original Assignee
全景声科技南京有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 全景声科技南京有限公司
Publication of WO2021190039A1 publication Critical patent/WO2021190039A1/zh


Classifications

    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L19/00Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
    • G10L19/04Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis using predictive techniques
    • G10L19/16Vocoder architecture
    • G10L19/167Audio streaming, i.e. formatting and decoding of an encoded audio signal representation into a data stream for transmission or storage purposes

Definitions

  • the present disclosure relates to the technical fields of digital signal processing and audio production, and in particular to a processing method and device for disassembling and re-editing audio signals.
  • panoramic sound, also known as three-dimensional sound, is the most realistic way of presenting and expressing sound; whether in nature, the arts or audiovisual entertainment, panoramic sound is the future development trend.
  • panoramic sound is sometimes called three-dimensional sound or immersive sound, and panoramic sound signals are generally divided into audio data and auxiliary data.
  • audio data can be mono or multi-channel audio signals, such as mono, stereo, 4.0, 5.1, 7.1, 9.1, 11.1, 13.1, 22.2 and other channels, as well as combinations of the above channel types, such as a 7.1-channel signal + a 4.0-channel signal + 6 stereo signals;
  • auxiliary data is generally used to define the spatial position or rendering method of audio data, which can improve the presentation effect of audio data.
  • three-dimensional positioning information can make the audio more spatial and immersive.
  • sound effects (such as equalizer, reverb, etc.) processing information can make the audio more diversified and enrich the auditory experience.
  • an item of audio data together with its auxiliary data is collectively called a sound object, and audio data without auxiliary data is called a sound bed.
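The sound object / sound bed distinction above can be sketched as a minimal data model. The class and field names below are illustrative assumptions; the patent defines only the concepts, not an API:

```python
from dataclasses import dataclass, field

@dataclass
class AuxiliaryData:
    """Defines the spatial position or rendering of audio data (e.g. reverb)."""
    kind: str    # e.g. "position", "reverb", "equalizer"
    params: dict

@dataclass
class Track:
    """One item of audio data plus zero or more items of auxiliary data."""
    name: str
    samples: list                             # PCM samples (one channel here)
    aux: list = field(default_factory=list)   # AuxiliaryData items

    def is_sound_object(self) -> bool:
        # A track that carries auxiliary data is a sound object...
        return len(self.aux) > 0

    def is_sound_bed(self) -> bool:
        # ...and a track without auxiliary data is a sound bed.
        return len(self.aux) == 0

vocal = Track("vocal", [0.1, 0.2],
              [AuxiliaryData("position", {"x": 0, "y": 1, "z": 0})])
drums = Track("drums", [0.3, 0.4])
print(vocal.is_sound_object(), drums.is_sound_bed())  # True True
```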
  • the typical panoramic sound technology that has been commercially available can refer to the national three-dimensional panoramic sound standard AVS2-P3 (GB/T 33475.3), the international standard MPEG-H (ISO/IEC 23008-3), Dolby Atmos and WANOS.
  • the audio data can be a mono signal, a stereo signal, a single-layer multi-channel signal, a multi-layer multi-channel signal (that is, a combination of multiple channel signals, distributed in different height planes), and so on.
  • some panoramic sound signals use two planes, a middle layer and a top layer (for example, 5.1.4 is a combination of 5.1 and 4.0 channel audio signals, with the 5.1 channels in the middle layer and the 4.0 channels in the top layer), and some panoramic sound signals use three planes; some panoramic sound signals have only multiple layers of audio data but no auxiliary data, such as SMPTE's 22.2 three-dimensional sound system and the AURO 9.1 system; some panoramic sound signals have both multi-layer multi-channel signals and auxiliary data, such as the MPEG-H, Dolby Atmos, WANOS and DTS:X systems.
  • the panoramic sound signal can also be all mono or stereo signals and auxiliary data.
  • the panoramic sound format, like AAC, AC3, MP3 and other formats, is also a compressed audio format.
  • two types of production tools are commonly used in the production of compressed audio signals:
  • the first category is the Digital Audio Workstation (DAW, such as Pro Tools, Nuendo, Cubase, Logic Pro, Adobe Audition, etc.). These programs are widely used in the production of movies and music, and can use professional audio plug-ins to produce high-quality audio signals.
  • DAW Digital Audio Workstation
  • the second category is audio and video application software, such as karaoke, short-video and dubbing software. These programs are deeply embedded in everyday life and change people's daily life and work in subtle ways.
  • this type of audio and video application software supports the editing and production of conventional audio formats (including the PCM format and currently common compressed audio formats such as mp3, aac, wma and ac3), and can also support the secondary creation of audio signals (such as multi-person chorus and ensemble, as well as relay/collaborative production of a work), making it highly entertaining and interactive.
  • 101: add audio data (hereafter "audio tracks"); the input source is a recorded audio signal or an imported audio file in a conventional format; if the input is an audio file, it is decoded into PCM data; after the addition is complete, the result is recorded as audio track set B;
  • 102: add auxiliary data; in a DAW, each audio track can be configured with one or more items of auxiliary data, while karaoke, short-video and similar software can add one item of auxiliary data to the vocal; after the addition is complete, the result is recorded as auxiliary data set E0;
  • 103: edit and produce; steps 101 to 103 can be performed selectively or repeatedly and in any order, and when production is complete they generate audio track set B' and auxiliary data set E0';
  • 104: encode the produced audio tracks and auxiliary data into a compressed audio signal S0'. If the output format is a conventional one such as AAC or AC3, auxiliary data set E0' is applied to set B' within the production project to generate a pure audio track set B'', and B'' is encoded into a compressed audio file; if the output format is a panoramic sound format, audio track set B' and auxiliary data set E0' are transmitted to a dedicated panoramic sound encoding device for panoramic sound encoding to generate a panoramic sound signal.
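The two output paths of step 104 can be illustrated with a toy sketch, assuming auxiliary data is reduced to a simple per-track gain and "encoding" is mere bundling. All names and structures here are hypothetical, not taken from the patent:

```python
def apply_aux(tracks_b, aux_e0):
    """Conventional path: bake the aux data into the audio, yielding pure tracks B''."""
    return {name: [s * aux_e0.get(name, 1.0) for s in pcm]
            for name, pcm in tracks_b.items()}

def encode_conventional(tracks_b, aux_e0):
    b2 = apply_aux(tracks_b, aux_e0)          # B'' = E0' applied to B'
    n = max(len(p) for p in b2.values())
    # Downmix everything into a single mono PCM stream; the aux data is gone
    # and the individual sound elements can no longer be separated.
    return [sum(p[i] for p in b2.values() if i < len(p)) for i in range(n)]

def encode_panoramic(tracks_b, aux_e0):
    # Panoramic path: tracks and aux are carried side by side in the stream,
    # so a decoder can later separate every track and every aux item again.
    return {"tracks": tracks_b, "aux": aux_e0}

b = {"vocal": [0.5, 0.5], "guitar": [0.25, 0.5]}
e0 = {"vocal": 2.0}                       # double the vocal level
print(encode_conventional(b, e0))         # [1.25, 1.5]
print(encode_panoramic(b, e0)["tracks"])  # tracks kept separable
```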
  • Steps 101 to 104 can produce high-quality audio signals, but there are still some shortcomings:
  • if the output signal is in a panoramic sound format, two physical devices or software systems are required to complete the encoding; so far there is no case of a single piece of software or equipment achieving editing and encoding at the same time. Moreover, the audio tracks and auxiliary data are transmitted separately:
  • the audio tracks use an audio protocol (such as MADI or AES) and the auxiliary data uses a network protocol (such as TCP/IP), so the latency and synchronization of audio data versus network data must also be considered, which makes the process more complicated.
  • if the output signal is in a panoramic sound format, it can currently only be produced on a PC, and the PC configuration requirements are high; there is no case of panoramic sound editing and production in interactive applications such as karaoke, short-video or dubbing software.
  • a DAW can only serve as a professional production system and output the production result, and the output sound signal is downmixed: multiple sound elements are mixed into one PCM stream and cannot be separated.
  • consumer software such as short-video and karaoke apps can only add to, or lightly process, audio signals that have already been downmixed, and cannot remove specific sound elements.
  • the present disclosure provides a processing method and device for disassembling and re-editing audio signals. Its technical purpose is to complete the entire production process from original signal input to signal output with a single physical device, while ensuring the audio can be decoded completely and correctly, without additional physical equipment or transmission processes; during decoding, each audio track and item of auxiliary data contained in the bitstream can be completely separated, and any audio track or auxiliary data can be added, deleted or replaced, in any combination of the three operations.
  • the present disclosure provides a processing method and device for disassembling and re-editing audio signals, which can realize the following functions:
  • a processing method for disassembling and re-editing audio signals including:
  • the input PCM signals may come partly or entirely from recording device input, local storage or network input, or any combination of the three.
  • the PCM signal stored locally or input from the network can be obtained by decoding the compressed audio signal.
  • auxiliary data can be obtained by decoding the compressed audio signal.
  • auxiliary data may be a downmix scheme of the audio track, spatial position information, spatial trajectory information, reverberation parameters, equalizer parameters, and the like.
  • auxiliary data can be applied to all or part of the audio tracks in the audio track set.
  • auxiliary data may be fixed or change over time.
  • a processing device capable of disassembling and re-editing audio signals including:
  • the audio editing module includes an audio track editing unit that performs add, delete or replace operations, or any combination of the three, on the audio track set C1 to generate a new audio track set C1';
  • the auxiliary data adding module adds at least one group of auxiliary data to the audio track set C1' to obtain the auxiliary data set E1';
  • the audio encoding module encodes the audio track set C1' and the auxiliary data set E1' to obtain a compressed sound signal S q '.
  • the audio encoding module encodes the audio track set C1' and the auxiliary data sets E1 and E1' to obtain a compressed sound signal S q ".
  • the audio editing module further includes an auxiliary data editing unit, which performs add, delete or replace operations, or any combination of the three, on the auxiliary data set to obtain a new auxiliary data set.
  • the PCM signals input by the PCM input unit may come partly or entirely from recording device input, local storage or network input, or any combination of the three.
  • the device further includes a decoding module, the decoding module includes an audio decoding unit, and the PCM signal stored locally or input from the network can be obtained by decoding the compressed audio signal by the audio decoding unit.
  • the decoding module further includes an auxiliary data decoding unit, and the auxiliary data is obtained by decoding the compressed audio signal by the auxiliary data decoding unit.
  • the audio input module inputs the audio signal, and the auxiliary data adding module can add auxiliary data to the audio tracks;
  • the audio editing module performs add, delete or replace operations, or any combination of the three, on any audio track or auxiliary data to generate a new audio track set and auxiliary data set;
  • the audio encoding module encodes the audio tracks and auxiliary data to obtain a compressed sound signal.
  • Figure 1 is a flow chart of an existing audio production method
  • FIG. 2 is a flowchart of Embodiment 1 of the disclosed method
  • FIG. 3 is a flowchart of Embodiment 2 and Embodiment 3 of the disclosed method
  • Figure 4 is a schematic diagram of the first embodiment of the disclosed device
  • Figure 5 is a schematic diagram of the second embodiment of the disclosed device.
  • Fig. 6 is a schematic diagram of Embodiment 3 of the disclosed device.
  • the PCM audio track data are independent sound components, not sound components mixed together that cannot be disassembled. That is to say, the PCM track data are independent parts, instruments or voices, not several parts, instruments or voices mixed together inseparably.
  • the PCM track data may be independent sound components obtained by recording, input, decoding, etc., such as independent components of instruments and parts like guitar, bass, drums, keyboard, vocals and violin, or the PCM data of combinations of individual components.
  • the PCM track data also allows inseparably mixed sound components as input; in that case, however, only uniform track editing and sound-effect editing can be applied to the mixed components, and the components within that PCM track data cannot be further disassembled and processed separately.
  • Embodiment 1: add shared auxiliary data to the edited audio tracks.
  • the processing method and device for disassembling and re-editing audio signals can perform editing operations such as add, delete and replace on the input audio tracks, and add one or more items of shared auxiliary data to all or some of the tracks; as shown in Figure 2, it includes the following steps:
  • (301) Input m items of PCM audio track data; after input, record the total number of existing tracks as x and all tracks as track set C[0,...,x-1], where m is greater than or equal to 1.
  • the input track data may come partly or entirely from recording device input, local storage, network input or any combination of the three.
  • (302) Editing and production: add, delete and replace existing tracks, always keeping x equal to the current number of tracks, and record the produced track set as C'[0,...,x-1]; track addition is as in step (301).
  • (303) At the same time, n items of auxiliary data can be added to y tracks of the produced track set C', recorded as auxiliary data set E'[0,...,n-1], meaning each item of auxiliary data in E' is applied to the y tracks simultaneously, i.e. E' is shared by the y tracks; n ≥ 0, 1 ≤ y ≤ x;
  • the add, delete and replace operations on tracks and the addition of auxiliary data can all be performed selectively and repeatedly, in any order.
  • (304) Audio encoding: track set C' and its corresponding auxiliary data set E' are jointly encoded into a compressed audio signal S'.
  • for the coding technology, refer to the national three-dimensional panoramic sound standard AVS2-P3 (GB/T 33475.3), the international standard MPEG-H (ISO/IEC 23008-3), Dolby Atmos, WANOS, etc.
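Steps 301 to 304 of Embodiment 1 can be sketched minimally, under the assumption that an auxiliary data item is just a (kind, params) pair and that joint encoding is plain bundling; function and key names are illustrative only:

```python
def add_shared_aux(track_set_c, shared_targets, aux_items):
    """Step 303: attach every item in E' to the same y tracks (E' is shared)."""
    e_prime = {"targets": list(shared_targets), "items": list(aux_items)}
    assert 1 <= len(e_prime["targets"]) <= len(track_set_c)  # 1 <= y <= x
    return e_prime

def jointly_encode(track_set_c, e_prime):
    # Step 304: joint encoding keeps tracks and aux separable in the stream;
    # here "encoding" is simply bundling them into one structure.
    return {"C'": track_set_c, "E'": e_prime}

c_prime = ["vocal", "guitar", "drums"]                  # x = 3 tracks
e_prime = add_shared_aux(c_prime, ["vocal", "guitar"],  # y = 2 tracks share E'
                         [("reverb", {"wet": 0.3}), ("gain", {"db": -3})])
s_prime = jointly_encode(c_prime, e_prime)
print(len(s_prime["E'"]["items"]))  # 2 shared aux items, each acting on both tracks
```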
  • Embodiment 2: input audio tracks and auxiliary data, and add, delete and replace several types of auxiliary data during editing and production.
  • on the basis of Embodiment 1, the processing method and device for disassembling and re-editing audio signals provided by the present invention can perform editing operations such as add, delete and replace on the auxiliary data, and can edit several types of auxiliary data; as shown in Figure 3, it includes the following steps:
  • (401) Input data, including:
  • (401.1) Add audio signals: the added audio signals may come partly or entirely from recording equipment input, local storage, network input or any combination of the three; for local storage and network input, the audio format can be a PCM signal, a compressed audio signal or any combination of the two formats. If the added audio signals contain m3 PCM recording tracks, m4 locally imported PCM signals, m5 locally imported compressed audio signals and m6 network compressed audio signals, then the m5 local compressed audio signals are decoded into m5' PCM signals and the m6 network compressed audio signals into m6' PCM signals, the total number of existing tracks is recorded as x, and all tracks are recorded as track set C[0,...,x-1].
  • the audio format of the compressed audio signal includes but is not limited to AAC, AC3, MP3, WANOS, Atmos, etc.
  • for the decoding technology, refer to AAC (ISO/IEC 13818-7), AC3 (ATSC A/52), MP3, the national three-dimensional panoramic sound standard AVS2-P3 (GB/T 33475.3), the international standard MPEG-H (ISO/IEC 23008-3), Dolby Atmos, WANOS, etc.
  • (401.2) Add auxiliary data: add auxiliary data to the existing audio tracks and record it as set E.
  • auxiliary data corresponds to the audio tracks: it can be applied to a single track (such as equalizer, reverb or spatial information) or to several tracks at the same time (such as downmixing or automatic gain); from the track's point of view, each track can have one or more items of auxiliary data, and several tracks can simultaneously share one or more items; sound effects on a single track and sound effects shared by several tracks can coexist in any combination.
  • for auxiliary data on a single track, the specific operation is: add m items of auxiliary data to any tracks in set C, partitioned by track and recorded as auxiliary data set E4[0,...,m-1], which indicates that the auxiliary data corresponding to each track C[i] is E4[i][0,...,e_i-1], where e_i is the current number of auxiliary data items on the i-th track.
  • for auxiliary data shared by several tracks, the specific operation is: add n items of auxiliary data to y tracks in set C, denoted E5[0,...,n-1], meaning each item of auxiliary data in E5 is applied to the y tracks simultaneously, i.e. shared by the y tracks.
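The per-track set E4 and shared set E5 can be modeled as follows. The constraint checks mirror the ranges stated above (e_i ≥ 0, m+n ≥ 1, 1 ≤ y ≤ x); all structure and function names are assumptions for illustration:

```python
def build_aux_sets(x, per_track, shared):
    """Combine per-track aux (E4) and shared aux (E5) into E = E4 + E5."""
    e4 = [list(per_track.get(i, [])) for i in range(x)]  # E4, partitioned by track
    e_i = [len(items) for items in e4]                   # e_i per track, e_i >= 0
    n = len(shared["items"])                             # |E5|
    y = len(shared["tracks"])
    assert sum(e_i) + n >= 1 and 1 <= y <= x             # m + n >= 1, 1 <= y <= x
    return {"E4": e4, "E5": shared, "e_i": e_i, "total": sum(e_i) + n}

aux = build_aux_sets(
    x=3,
    per_track={0: ["eq"], 2: ["reverb", "position"]},    # track 1 has e_1 = 0
    shared={"tracks": [0, 1, 2], "items": ["downmix"]},  # y = x: acts on all tracks
)
print(aux["e_i"], aux["total"])  # [1, 0, 2] 4
```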
  • add, delete and replace existing audio tracks while always keeping the value of x equal to the current number of tracks, and record the produced audio track set as C'[0...x-1];
  • the adding operation is the same as the step (401.1);
  • the audio track set C' and its corresponding auxiliary data set E' are jointly encoded into a compressed audio signal S'.
  • the coding technology can refer to the national standard of 3D panoramic sound AVS2-P3 (GB/T 33475.3), the international standard MPEG-H (ISO/IEC 23008-3) and Dolby Atmos, etc.
  • Embodiment 3: the input audio signal contains auxiliary data, and the output audio signal can undergo secondary production.
  • the processing method and device for disassembling and re-editing audio signals proposed by the present invention can add auxiliary data to each audio track, and can take an already produced audio signal (such as the final output signal S' of Embodiment 2) as the input source for secondary production; as shown in Figure 3, it includes the following steps:
  • (501) Input m7 compressed audio signals containing auxiliary data.
  • the m7 audio signals are decoded (for the decoding technology, refer to the national three-dimensional panoramic sound standard AVS2-P3 (GB/T 33475.3), the international standard MPEG-H (ISO/IEC 23008-3), Dolby Atmos, WANOS, etc.), and the audio track data and auxiliary data they contain are completely separated, generating m8 PCM audio tracks and m9 items of auxiliary data.
  • record the m8 audio tracks as set C[0,...,m8-1]; partition the m9 auxiliary data items by audio track, denoted set E[0,...,m8-1], meaning the m9 auxiliary data items correspond to the m8 audio tracks.
  • the auxiliary data can also change with time (such as spatial location information; refer to the national standard GB/T 33475.3, Dolby Atmos, etc.) or be fixed (such as equalizer parameters).
  • Adding, deleting, and replacing operations can be performed selectively and repeatedly, and there is no order.
  • each audio track can have one or more items of auxiliary data added, because a track may have no auxiliary data, one item or several items; this means that auxiliary data set E1' is in fact the collection of the auxiliary data contained in all audio tracks of track set C1'.
  • generally speaking, an audio track without auxiliary data is called a sound bed, and an audio track with auxiliary data is called a sound object.
  • after tracks and their auxiliary data are edited, the sound objects and the sound bed may change.
  • the tracks in the changed sound objects and the sound bed form a new audio track set, and all the auxiliary data in the changed sound objects form a new auxiliary data set; that is, the changed sound objects and sound bed are encoded to obtain a compressed sound signal.
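The decode, edit and re-encode cycle of Embodiment 3 (steps 501 to 503) can be sketched as a toy round trip in which the "compressed" signal is just a dict that keeps tracks and auxiliary data separable; a real codec such as AVS2-P3 or MPEG-H would of course compress the data, but the separability property illustrated here is the same. All names are hypothetical:

```python
def decode(signal):
    """Step 501: completely separate the track data and the auxiliary data."""
    return dict(signal["tracks"]), {k: list(v) for k, v in signal["aux"].items()}

def edit(tracks, aux):
    """Step 502: delete one component, replace another, add a new one."""
    tracks.pop("drums", None)     # delete a specific sound element
    aux.pop("drums", None)
    tracks["vocal"] = [0.9, 0.9]  # replace the vocal take
    tracks["violin"] = [0.1, 0.2] # add a brand-new track (a sound bed, no aux)
    return tracks, aux

def encode(tracks, aux):
    """Step 503: jointly re-encode; every track stays individually recoverable."""
    return {"tracks": tracks, "aux": aux}

s = {"tracks": {"vocal": [0.5, 0.5], "drums": [0.3, 0.3]},
     "aux": {"vocal": ["reverb"]}}
tracks, aux = decode(s)
s2 = encode(*edit(tracks, aux))
print(sorted(s2["tracks"]))  # ['violin', 'vocal'] (drums removed, violin added)
```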
  • FIG. 4 is a schematic diagram of the first embodiment of the device.
  • the device includes an audio input module, an audio editing module, an auxiliary data adding module, and an audio encoding module.
  • the audio editing module includes an audio track editing unit, which adds, deletes or replaces the audio track set C1 or any combination of the three methods to generate a new audio track set C1';
  • the auxiliary data adding module adds at least one group of auxiliary data to the audio track set C1' to obtain the auxiliary data set E1';
  • the audio coding module encodes the audio track set C1' and the auxiliary data set E1' to obtain the compressed sound signal S q '.
  • Figure 5 is a schematic diagram of the second embodiment of the device.
  • the audio input module further includes an auxiliary data input unit.
  • the auxiliary data input unit inputs the auxiliary data set E1.
  • E1 can be a group of auxiliary data shared by several audio tracks in C1, or a collection of auxiliary data added individually by different audio tracks in C1.
  • the audio coding module encodes C1', E1 and E1' to obtain a compressed sound signal S q ".
  • Fig. 6 is a schematic diagram of the third embodiment of the device.
  • the audio input module further includes a compressed signal input unit, and the compressed signal is decoded by the decoding module after input.
  • the decoding module also includes an audio decoding unit and an auxiliary data decoding unit. If the input signal is a compressed audio signal (such as from local storage or network input), the audio decoding unit can decode it to obtain the corresponding PCM data; if the input compressed signal also contains auxiliary data, the auxiliary data decoding unit can decode it to obtain the auxiliary data.
  • the audio editing module also includes an auxiliary data editing unit, which adds, deletes or replaces the auxiliary data set or any combination of the three methods to obtain a new auxiliary data set.
  • the output of the audio encoding module is input to the audio input module.
  • the input PCM signals may come partly or entirely from recording device input, local storage or network input, or any combination of the three.
  • the channel counts of the audio signals input by the audio input module include mono, stereo, 4.0, 5.1, 7.1, 9.1, 11.1, 13.1 and 22.2 channels, and any combination of the above channel types.
  • the auxiliary data may be a downmix scheme of the audio track, spatial position information, spatial trajectory information, reverberation parameters, equalizer parameters, and the like.
  • the auxiliary data may be applied to all or part of the audio track in the audio track set.
  • whether or not the auxiliary data adding module adds auxiliary data does not affect the implementation of the present disclosure.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Signal Processing For Digital Recording And Reproducing (AREA)
  • Management Or Editing Of Information On Record Carriers (AREA)

Abstract

A processing method and device for audio signals that can be disassembled and re-edited, relating to the technical fields of digital signal processing and audio production. It solves the technical problem that, while guaranteeing the compressed audio can be decoded completely and correctly, the entire production flow from original signal input to compressed sound signal output cannot be completed with a single physical device, so that additional physical devices and transmission processes are required. The key points of the technical solution are: an audio input module inputs audio signals, and an auxiliary data adding module adds auxiliary data to any audio track; an audio editing module adds, deletes or replaces any audio track to generate a new audio track set, and an audio encoding module encodes the audio tracks and auxiliary data to obtain a compressed sound signal. The entire production flow from original signal input to compressed sound signal output can thus be completed with one physical device, and operations such as adding, deleting and replacing can be performed on any audio track.

Description

Processing method and device for audio signals that can be disassembled and re-edited
Technical Field
The present disclosure relates to the technical fields of digital signal processing and audio production, and in particular to a processing method and device for audio signals that can be disassembled and re-edited.
Background Art
After years of development of audio technology, stereo, 5.1, 7.1 surround and similar systems have been widely adopted, but because these systems lack height information for sound, they can present at most two-dimensional sound. In the real world, panoramic sound (also called three-dimensional sound) is the most realistic way of presenting and expressing sound; whether in nature, the arts or audiovisual entertainment, panoramic sound is the future development trend.
Panoramic sound is sometimes also called three-dimensional sound or immersive sound. A panoramic sound signal is generally divided into audio data and auxiliary data. The audio data can be mono or multi-channel audio signals, such as mono, stereo, 4.0, 5.1, 7.1, 9.1, 11.1, 13.1 or 22.2 channels, as well as combinations of the above channel types, e.g. a 7.1-channel signal + a 4.0-channel signal + 6 stereo signals. The auxiliary data is generally used to define the spatial position or rendering method of the audio data and can improve its presentation: three-dimensional positioning information makes the audio more spatial and immersive, while sound-effect processing information (such as equalizer and reverb) diversifies the audio and enriches the listening experience. Sometimes an item of audio data together with its auxiliary data is collectively called a sound object, and audio data without auxiliary data is called a sound bed. For typical panoramic sound technologies already in commercial use, refer to the national three-dimensional panoramic sound standard AVS2-P3 (GB/T 33475.3), the international standard MPEG-H (ISO/IEC 23008-3), Dolby Atmos and WANOS.
In a panoramic sound signal, the audio data can be a mono signal, a stereo signal, a single-layer multi-channel signal, a multi-layer multi-channel signal (i.e. a combination of several channel signals distributed on planes of different heights), and so on. For example, some panoramic sound signals use two planes, a middle layer and a top layer (5.1.4 is a combination of 5.1 and 4.0 channel audio signals, with the 5.1 channels in the middle layer and the 4.0 channels in the top layer), while others use three planes. Some panoramic sound signals have only multiple layers of audio data but no auxiliary data, such as SMPTE's 22.2 three-dimensional sound system and the AURO 9.1 system; others have both multi-layer multi-channel signals and auxiliary data, such as the MPEG-H, Dolby Atmos, WANOS and DTS:X systems. Of course, a panoramic sound signal may also consist entirely of mono or stereo signals plus auxiliary data.
The panoramic sound format, like AAC, AC3, MP3 and other formats, is a compressed audio format. Two types of production tools are commonly used today when producing compressed audio signals:
The first type is the Digital Audio Workstation (DAW, such as Pro Tools, Nuendo, Cubase, Logic Pro and Adobe Audition). These programs are widely used in film and music production and can use professional audio plug-ins to produce high-quality audio signals.
The second type is audio/video application software, such as karaoke, short-video and dubbing software. These programs are deeply embedded in everyday life and change how people live and work in subtle ways. They support editing and production in conventional audio formats (the PCM format, and currently common compressed formats such as mp3, aac, wma and ac3), and also support secondary creation of audio signals (such as multi-person chorus and ensemble, and relay/collaborative production of a work), making them highly entertaining and interactive.
The audio signal production method is shown in Figure 1; its specific steps are as follows:
101: Add audio data (hereafter "audio tracks"). The input source is a recorded audio signal or an imported audio file in a conventional format; if the input is an audio file, it is decoded into PCM data. After the addition is complete, the result is recorded as audio track set B.
102: Add auxiliary data. In a DAW, each audio track can be configured with one or more items of auxiliary data; in karaoke, short-video and similar software, one item of auxiliary data can be added to the vocal. After the addition is complete, the result is recorded as auxiliary data set E0.
103: Edit and produce. Any track in set B and any auxiliary data in set E0 can be edited, including add, delete and replace operations. Steps 101 to 103 can be performed selectively or repeatedly and in any order; when production is complete, they generate audio track set B' and auxiliary data set E0'.
104: Encode the produced audio tracks and auxiliary data into a compressed audio signal S0'. If the output format is a conventional one such as AAC or AC3, auxiliary data set E0' is applied to set B' within the production project to generate a pure audio track set B'', and B'' is encoded into a compressed audio file. If the output format is a panoramic sound format, audio track set B' and auxiliary data set E0' are transmitted to a dedicated panoramic sound encoding device for panoramic sound encoding to generate a panoramic sound signal.
Steps 101 to 104 can produce high-quality audio signals, but some shortcomings remain:
(1) If the output signal is in a panoramic sound format, two physical devices or software systems are needed to complete the encoding; to date there is no case of a single piece of software or equipment performing editing and encoding at the same time. Moreover, the audio tracks and the auxiliary data are transmitted separately: the tracks use an audio protocol (such as MADI or AES) and the auxiliary data uses a network protocol (such as TCP/IP), so the latency and synchronization of audio data versus network data must also be considered, which complicates the workflow.
(2) If the output signal is in a panoramic sound format, production is currently only possible on a PC, and the PC configuration requirements are high; there is as yet no case of panoramic sound editing and production in interactive applications such as karaoke, short-video or dubbing software.
(3) Furthermore, a DAW can only serve as a professional production system and output the finished result, and the output sound signal is downmixed: multiple sound elements are mixed into one PCM stream and cannot be separated. Consumer software such as short-video and karaoke apps can only add to, or lightly process, audio that has already been downmixed, and cannot remove specific sound elements.
(4) In Internet applications, it is sometimes necessary to take the output compressed audio signal S0' as a new input and make temporary modifications or secondary creations on top of it. At that point, the components of S0' cannot be disassembled and individually added to, deleted or replaced; S0' can only be edited as a whole, specific sound components cannot be removed or replaced, and the effects of specific components cannot be modified. For example, for a rock song, existing DAWs typically downmix the guitar, bass, drum, keyboard and vocal parts into 2-channel or 5.1-format PCM channels and encode the result. Even after decoding, the encoded song cannot be separated again into guitar, bass, drums, keyboard and vocals; specific parts cannot be deleted or replaced, nor can effects originally applied to part or all of the work (such as reverb, EQ or compression/limiting) be removed or modified. All one can do is add further parts on top of the original work, or apply overall effects processing to the already produced song.
In summary, to date no single physical device (or software or method) exists that can:
(1) complete the decoding of sound, the editing and production of audio tracks, and the editing, production and encoding of auxiliary data (including sound effects) in one physical device (or software), without additional physical devices (or software) and data transmission;
(2) allow anyone, at any time and place, to encode, edit and decode each sound component independently, without it being mixed with other sound components;
(3) allow anyone, at any time and place, to freely decode, edit and encode the auxiliary data (spatial information, rendering information, gain, reverb, equalization, etc.) of a single sound component, some sound components or all sound components, without it being inseparably mixed with other sound information;
(4) achieve compatibility across devices such as DAWs, karaoke software, video software and dubbing software, i.e. allow anyone (professional or amateur) to freely decode, edit, encode and share the same sound work at any time and place.
Summary of the Invention
The present disclosure provides a processing method and device for audio signals that can be disassembled and re-edited. Its technical purpose is, while guaranteeing that the audio can be decoded completely and correctly, to complete the entire production flow from original signal input to signal output with a single physical device, without additional physical devices or transmission processes; during decoding, every audio track and item of auxiliary data contained in the bitstream can be completely separated, and any track or auxiliary data can be added, deleted or replaced, in any combination of the three operations. The processing method and device provided by the present disclosure can achieve the following functions:
1. Complete the decoding of sound, the editing and production of audio tracks, and the editing, production and encoding of auxiliary data (including sound effects) in one physical device (or software), without additional physical devices (or software) and data transmission;
2. Allow anyone, at any time and place, to encode, edit and decode each sound component independently, without mixing it with other sound components;
3. Allow anyone, at any time and place, to freely decode, edit and encode the auxiliary data (spatial information, rendering information, gain, reverb, equalization, etc.) of a single sound component, some sound components or all sound components, without it being inseparably mixed with other sound information;
4. Achieve compatibility across devices (such as DAWs, karaoke software, video software and dubbing software), i.e. anyone (professional or amateur), at any time and place, can use the disclosed method or device to freely decode, edit, encode and share the same sound work.
The above technical purpose of the present disclosure is achieved through the following technical solutions:
A processing method for audio signals that can be disassembled and re-edited, comprising:
inputting m1 PCM signals, m1 being greater than 0; the m1 PCM signals form the audio track set C1, where C1={C1i}, 0≤i≤m1-1;
adding to, deleting from or replacing within the audio track set C1, or any combination of the three operations, to generate a new audio track set C1';
adding at least one group of auxiliary data to the audio track set C1' to obtain auxiliary data set E1';
encoding the audio track set C1' and the auxiliary data set E1' to obtain a compressed sound signal Sq'.
Further, the method comprises:
inputting m2 items of auxiliary data, m2 being greater than 0, giving auxiliary data set E1={E1j}, 0≤j≤m2-1;
encoding the audio track set C1' and the auxiliary data sets E1 and E1' to obtain a compressed sound signal Sq''.
Further, the method comprises:
inputting n3 PCM signals and n4 items of auxiliary data, n3 and n4 both being greater than 0, giving audio track set C3={C3k}, 0≤k≤n3-1, and auxiliary data set E3={E3t}, 0≤t≤n4-1;
adding to, deleting from or replacing within the audio track set C3, or any combination of the three operations, to generate a new audio track set C3';
adding to, deleting from or replacing within the auxiliary data set E3, or any combination of the three operations, to obtain auxiliary data set E3';
encoding the audio track set C3' and the auxiliary data set E3' to obtain a compressed sound signal Sq'''.
Further, the input PCM signals may come partly or entirely from recording device input, local storage or network input, or any combination of the three.
Further, the locally stored or network-input PCM signals may be obtained by decoding compressed audio signals.
Further, the auxiliary data may be obtained by decoding compressed audio signals.
Further, the auxiliary data may be a downmix scheme for the audio tracks, spatial position information, spatial trajectory information, reverberation parameters, equalizer parameters, and the like.
Further, the auxiliary data may act on all or some of the audio tracks in the audio track set.
Further, the auxiliary data may be fixed or may change over time.
A processing device for audio signals that can be disassembled and re-edited, comprising:
an audio input module comprising a PCM input unit that inputs m1 PCM signals, m1 being greater than 0; the m1 PCM signals form the audio track set C1, where C1={C1i}, 0≤i≤m1-1;
an audio editing module comprising an audio track editing unit that adds to, deletes from or replaces within the audio track set C1, or any combination of the three operations, to generate a new audio track set C1';
an auxiliary data adding module that adds at least one group of auxiliary data to the audio track set C1' to obtain auxiliary data set E1';
an audio encoding module that encodes the audio track set C1' and the auxiliary data set E1' to obtain a compressed sound signal Sq'.
Further, the audio input module also comprises an auxiliary data input unit that inputs m2 items of auxiliary data, m2 being greater than 0, giving auxiliary data set E1={E1j}, 0≤j≤m2-1;
the audio encoding module encodes the audio track set C1' and the auxiliary data sets E1 and E1' to obtain a compressed sound signal Sq''.
Further, the audio editing module also comprises an auxiliary data editing unit that adds to, deletes from or replaces within the auxiliary data set, or any combination of the three operations, to obtain a new auxiliary data set.
Further, the PCM signals input by the PCM input unit may come partly or entirely from recording device input, local storage or network input, or any combination of the three.
Further, the device also comprises a decoding module comprising an audio decoding unit; the locally stored or network-input PCM signals may be obtained by the audio decoding unit decoding compressed audio signals.
Further, the decoding module also comprises an auxiliary data decoding unit, and the auxiliary data is obtained by the auxiliary data decoding unit decoding compressed audio signals.
The beneficial effects of the present disclosure are as follows: in the audio signal processing method and device described herein, the audio input module inputs audio signals and the auxiliary data adding module can add auxiliary data to audio tracks; the audio editing module adds to, deletes from or replaces any audio track or auxiliary data, or any combination of the three operations, generating a new audio track set and auxiliary data set; the audio encoding module then encodes the audio tracks and auxiliary data to obtain a compressed sound signal.
The entire production flow from original signal input to compressed sound signal output can be completed with a single physical device, without additional physical devices or transmission processes; and any audio track or auxiliary data can be added, deleted or replaced, in any combination of the three operations.
Brief Description of the Drawings
Figure 1 is a flow chart of an existing audio production method;
Figure 2 is a flow chart of Embodiment 1 of the disclosed method;
Figure 3 is a flow chart of Embodiments 2 and 3 of the disclosed method;
Figure 4 is a schematic diagram of Embodiment 1 of the disclosed device;
Figure 5 is a schematic diagram of Embodiment 2 of the disclosed device;
Figure 6 is a schematic diagram of Embodiment 3 of the disclosed device.
Detailed Description
The technical solution of the present disclosure is described in detail below with reference to the drawings.
在本公开的描述中,需要理解的是,所述的PCM(Pulse-code modulation,脉冲编码调制)音轨数据是独立的声音成分,而不是混合在一起无法拆解的声音成分。即所述的PCM音轨数据是独立声部或乐器或人声,不是几个声部或乐器或人声混合在一起无法拆解的。所述的PCM音轨数据可以为录音、输入、解码等获得的独立声音成分,如吉他、贝斯、鼓、 键盘、人声、小提琴等乐器、声部的独立成分或其组合的PCM数据。作为本发明的的特例,所述的PCM音轨数据也允许混合在一起无法拆解的声音成分作为输入,但此种情况将只能对混合在一起无法拆解的声音成分做统一的音轨编辑和音效编辑,而不能对该PCM音轨数据中的成分再拆解和分别处理。
实施例一:为编辑后的音轨添加共享辅助数据。
本发明提出的可拆解和再编辑音频信号的处理方法和装置,能够对输入音轨进行添加、删除、替换等编辑操作,并对全部音轨或部分音轨添加一个或多个共享辅助数据,如图2所示,包括如下步骤:
(301)输入m个PCM音轨数据,输入后将现有音轨总数记作x,所有音轨记作音轨集合C[0,...,x-1],m大于等于1。输入的音轨数据可以部分或全部来自录音设备输入、本地存储、网络输入或三种输入的任意组合。
(302)编辑制作:对现有音轨进行添加、删除、替换操作,并始终保持x的值等同于当前音轨数量,将制作后的音轨集合记作C[0,...,x-1],音轨的添加操作同步骤(301);
(303)同时可以对制作后音轨集合C'中的y个音轨添加n个辅助数据,记作辅助数据集合E'[0,...,n-1],表示E'中的每一个辅助数据都同时作用在y个音轨上,即E'由y个音轨共享;n≥0,1≤y≤x;
音轨的添加、删除、替换操作以及辅助数据的添加等操作均可选择性进行以及重复进行,并且无先后顺序。
(304)音频编码:将音轨集合C'及其对应的辅助数据集合E'共同编码成压缩音频信号S',编码技术可参考三维全景声国家标准AVS2-P3(GB/T 33475.3)、国际标准MPEG-H(ISO/IEC 23008-3)、Dolby Atmos和WANOS等。
实施例二:输入音轨和辅助数据,并在编辑制作时添加、删除、替换多种类型的辅助数据。
本发明提出的可拆解和再编辑音频信号的处理方法和装置,能够在实施例1的基础上,对辅助数据进行添加、删除、替换等编辑操作,并且能够编辑多种类型的辅助数据,如图3所示,包括如下步骤:
(401)输入数据,包括:
(401.1)添加音频信号:添加的音频信号可以部分或全部来自录音设备输入、本地存储、网络输入或三种输入的任意组合;对于本地存储和网络输入,音频格式可以是PCM信号、压缩音频信号或两种格式的任意组合。若添加的音频信号中包含m3个PCM录音音轨、m4个本地导入的PCM信号、m5个本地导入的压缩音频信号以及m6个网络压缩音频信号,则将m5个本地压缩音频信号解码成m5'个PCM信号、m6个网络压缩音频信号解码成m6'个PCM信号,并将现有音轨总数记作x,所有音轨记作音轨集合C[0,...,x-1]。m3、m4、m5、m6均大于等于0,m3+m4+m5+m6≥1,m5'≥m5,m6'≥m6,x=m3+m4+m5'+m6';本地压缩音频信号和网络压缩音频信号的音频格式包括但不限于AAC、AC3、MP3、WANOS、Atmos等,解码技术可参考AAC(ISO/IEC 13818-7)、AC3(ATSCA/52)、MP3、三维全景声国家标准AVS2-P3(GB/T 33475.3)、国际标准MPEG-H(ISO/IEC 23008-3)、Dolby Atmos和WANOS等。
(401.2)添加辅助数据。为现有音轨添加辅助数据,记作集合E。辅 助数据和音轨对应,可以作用在单个音轨上(如均衡器、混响、空间信息等),也可以同时作用在多个音轨上(如缩混、自动增益等);从音轨的角度,每个音轨可以拥有一个或多个辅助数据,多个音轨可以同时共享一个或多个辅助数据;单个音轨上的音效以及多个音轨共享的音效可以同时存在并任意组合。
对于单个音轨上的辅助数据,具体操作是:为现有音轨集合C中的任意音轨添加m个辅助数据,并按照音轨来划分,记作辅助数据集合E4[0,...,m-1],表示每一个音轨C[i]对应的辅助数据是E4[i][0,...,e i-1],e i表示第i个音轨的当前辅助数据数量。对于多个音轨共享的辅助数据,具体操作是:为集合C中的y个音轨添加n个辅助数据,记作E5[0,...,n-1],表示E5中的每一个辅助数据都同时作用在y个音轨上,即由y个音轨共享。m≥0,n≥0,m+n≥1,e i≥0(e i=0时表示第i个音轨上没有辅助数据),0≤i<x,1≤y≤x(y=x时表示E5中的辅助数据作用在C中的全部音轨上,1≤y<x时表示E5中的辅助数据作用在C中的部分音轨上),E=E4+E5。
(402)编辑制作
对现有音轨进行添加、删除、替换操作,并始终保持x的值等同于当前音轨数量,将制作后的音轨集合记作C'[0...x-1];音轨的添加操作同步骤(401.1);
对现有辅助数据进行添加、删除、替换操作,并始终保持e i的值等同于第i个音轨的辅助数据数量,将制作后的辅助数据集合记作E'[0...x-1],辅助数据的添加操作同步骤(401.2);
音轨和辅助数据的添加、删除、替换操作均可选择性进行以及重复进行,并且无先后顺序。
(403)音频编码。将音轨集合C'及其对应的辅助数据集合E'共同编码成压缩音频信号S'。编码技术可参考三维全景声国家标准AVS2-P3(GB/T 33475.3)、国际标准MPEG-H(ISO/IEC 23008-3)和Dolby Atmos等。
实施例三:输入的音频信号中包含辅助数据,并能够对输出的音频信号进行二次制作。
本发明提出的可拆解和再编辑音频信号的处理方法和装置,可以为每个音轨添加辅助数据,并且可以将已制作的音频信号(如实施例二的最终输出信号S')作为输入来源进行二次制作,同样如图3所示,包括如下步骤:
(501)输入m7个包含辅助数据的压缩音频信号。将m7个音频信号解码(解码技术可参考三维全景声国家标准AVS2-P3(GB/T 33475.3)、国际标准MPEG-H(ISO/IEC 23008-3)、Dolby Atmos和WANOS等),将其包含的音轨数据和辅助数据完全分离,生成m8个PCM音轨以及m9个辅助数据。将m8个音轨记作集合C[0,...,m8-1];将m9个辅助数据按照音轨划分,记作集合E[0,...,m8-1],表示m9个辅助数据和m8个音轨对应,第i个音轨对应的辅助数据是E[i][0,...,e i-1];1≤m7≤m8,0≤i<m8,e i≥0(e i=0时表示第i个音轨上没有辅助数据),m9>0,Σe i=m9;
将当前音轨数量记作x,则此时x=m8;
(502) Edit and produce on the basis of the track set C and the auxiliary data set E, including but not limited to:
Performing add, delete and replace operations on the existing tracks, always keeping x equal to the current number of tracks and C containing all current tracks.
Performing add, delete and replace operations on the existing auxiliary data, always keeping e_i equal to the number of auxiliary data items on the i-th track and E containing the auxiliary data currently corresponding to each track. Besides the characteristics described in (401.2), auxiliary data may vary over time (e.g., spatial position information; refer to the national standard GB/T 33475.3, Dolby Atmos, etc.) or remain fixed (e.g., equalizer parameters).
Denote the edited track set as C'[0,...,x-1] and the edited auxiliary data set as E'[0,...,x-1].
The add, delete and replace operations may each be performed selectively and repeatedly, in any order.
(503) Audio encoding. Encode the track set C' together with its corresponding auxiliary data set E' into the compressed audio signal S'. During encoding, fixed and time-varying auxiliary data may be handled differently; for details, refer to the Chinese national standard for 3D audio AVS2-P3 (GB/T 33475.3), the international standard MPEG-H (ISO/IEC 23008-3) and Dolby Atmos.
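One way to model the fixed vs. time-varying distinction mentioned above is to make a static item a plain value and a time-varying item a map from timestamps to values that is sampled at render time. The representation and the `sample` helper are assumptions for illustration, not part of any cited standard.

```python
def sample(item, t):
    keyframes = item.get("keyframes")
    if keyframes is None:
        return item["value"]              # fixed data, e.g. equalizer parameters
    # Time-varying data, e.g. spatial position: take the last keyframe at or before t.
    times = sorted(k for k in keyframes if k <= t)
    return keyframes[times[-1]] if times else None

eq = {"type": "equalizer", "value": {"low_shelf_db": 3.0}}          # fixed
pos = {"type": "position",
       "keyframes": {0.0: (0, 0, 0), 2.0: (1, 0, 0)}}               # time-varying
```

An encoder following this model could store `value` items once per track and `keyframes` items as timed metadata, which is the kind of differentiated handling the text alludes to.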
(504) Secondary production. To make temporary modifications to S' or produce it further (e.g., a multi-person chorus or ensemble, or several people completing a work in relay or collaboratively), take S' as the input signal source and repeat steps (501) through (503) until production is complete; (503) then outputs the final compressed audio signal.
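The (501)-(503) loop of secondary production can be sketched as a round trip in which each pass decodes the previous output, edits it, and re-encodes. `encode` and `decode` here are hypothetical lossless placeholders that exist only to show the loop structure, e.g. for the relay-chorus case:

```python
def decode(signal):                  # (501): separate tracks and auxiliary data
    return signal["C"], signal["E"]

def encode(C, E):                    # (503): pack them back into one signal
    return {"C": list(C), "E": [list(items) for items in E]}

def produce(signal, edit):           # one full (501)-(503) pass
    C, E = decode(signal)
    edit(C, E)                       # (502): add/delete/replace
    return encode(C, E)

def add_voice(name):
    return lambda C, E: (C.append(name), E.append([]))

s = encode(["lead"], [[]])                       # first singer's output S'
s = produce(s, add_voice("harmony1"))            # second singer re-edits S'
s = produce(s, add_voice("harmony2"))            # third singer re-edits again
```

Because every pass both consumes and emits the same compressed form, the chain can be repeated indefinitely, which is exactly what makes the signal "re-editable".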
Each track may carry one or more items of auxiliary data: a track may have no auxiliary data, one item, or several, which means the auxiliary data set E1' is in fact the collection of the auxiliary data carried by all tracks in the track set C1'. In general, a track without auxiliary data is called an audio bed, and a track with auxiliary data is called a sound object.
After tracks and their auxiliary data are added, deleted or replaced, both the sound objects and the beds may change; the tracks of the changed sound objects together with the beds then form the new track set, and all auxiliary data of the changed sound objects form the new auxiliary data set; that is, the changed sound objects and beds are encoded to obtain the compressed sound signal.
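The bed/object distinction above reduces to a simple predicate on the per-track auxiliary data, as in this sketch (track names are placeholders):

```python
def classify(C, E):
    # A track with no auxiliary data is a bed; one with auxiliary data is an object.
    beds = [C[i] for i in range(len(C)) if not E[i]]
    objects = [C[i] for i in range(len(C)) if E[i]]
    return beds, objects

C = ["ambience", "vocals", "dog_bark"]
E = [[], [{"type": "position"}], [{"type": "trajectory"}]]
beds, objects = classify(C, E)
```

Deleting a track's last auxiliary item would turn an object back into a bed on the next classification, which is why the text notes that editing can change both sets.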
Fig. 4 is a schematic diagram of device embodiment one. The device comprises an audio input module, an audio editing module, an auxiliary data adding module and an audio encoding module. The audio input module includes a PCM input unit that inputs PCM signals; for example, if m1 PCM signals are input, with m1 > 0, these m1 PCM signals form the track set C1, with C1 = {C1_i}, 0 ≤ i ≤ m1-1.
The audio editing module includes a track editing unit that performs add, delete or replace operations, or any combination of the three, on the track set C1 to generate a new track set C1'; the auxiliary data adding module adds at least one group of auxiliary data to the track set C1' to obtain the auxiliary data set E1'; the audio encoding module encodes the track set C1' and the auxiliary data set E1' to obtain the compressed sound signal Sq'.
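The module chain of device embodiment one can be mirrored as a toy pipeline. The module names come from the text; their bodies are placeholders, not the actual units, and the dict returned by the encoder merely stands in for the compressed signal Sq'.

```python
def pcm_input_unit(signals):                      # audio input module
    return list(signals)                          # track set C1

def track_editing_unit(C1, edits):                # audio editing module
    C1p = list(C1)
    for op, *args in edits:                       # add / delete / replace, any mix
        if op == "add":
            C1p.append(args[0])
        elif op == "delete":
            C1p.remove(args[0])
        elif op == "replace":
            C1p[C1p.index(args[0])] = args[1]
    return C1p                                    # new track set C1'

def aux_adding_module(C1p):                       # adds at least one group of aux data
    return {track: [{"type": "downmix"}] for track in C1p}   # set E1'

def encoding_module(C1p, E1p):                    # audio encoding module
    return {"tracks": C1p, "aux": E1p}            # stands in for Sq'

C1 = pcm_input_unit(["pcm0", "pcm1"])
C1p = track_editing_unit(C1, [("add", "pcm2"), ("delete", "pcm0")])
Sq = encoding_module(C1p, aux_adding_module(C1p))
```

Device embodiments two and three extend this chain with auxiliary-data input, decoding and auxiliary-data editing stages in front of the same encoder.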
Fig. 5 is a schematic diagram of device embodiment two. Building on device embodiment one, the audio input module further includes an auxiliary data input unit that inputs the auxiliary data set E1; E1 may be a group of auxiliary data shared by several tracks in C1, or a collection of auxiliary data added individually to different tracks in C1. Finally, the audio encoding module encodes C1', E1 and E1' to obtain the compressed sound signal Sq''.
Fig. 6 is a schematic diagram of device embodiment three. The audio input module further includes a compressed-signal input unit; input compressed signals are decoded by a decoding module. The decoding module in turn comprises an audio decoding unit and an auxiliary data decoding unit: if the input signal is a compressed audio signal (e.g., from local storage or network input), the audio decoding unit decodes it to obtain the corresponding PCM data; if the input compressed signal also contains auxiliary data, the auxiliary data decoding unit decodes it to obtain that auxiliary data.
The audio editing module further includes an auxiliary data editing unit that performs add, delete or replace operations, or any combination of the three, on the auxiliary data set to obtain a new auxiliary data set.
As a specific embodiment, the output of the audio encoding module is fed back into the audio input module.
As a specific embodiment, the input PCM signals may come, in part or in whole, from recording-device input, local storage, network input, or any combination of the three.
As a specific embodiment, the channel configurations of the audio signals input by the audio input module include mono, stereo, 4.0, 5.1, 7.1, 9.1, 11.1, 13.1 and 22.2 channels, as well as any combination of the above.
As a specific embodiment, the auxiliary data may be a track's downmix scheme, spatial position information, spatial trajectory information, reverb parameters, equalizer parameters, and the like.
As a specific embodiment, the auxiliary data may act on all tracks or only some tracks of the track set.
As a specific embodiment, whether or not the auxiliary data adding module adds auxiliary data does not affect the implementation of the present disclosure.
The above are exemplary embodiments of the present disclosure; the scope of protection of the present disclosure is defined by the claims and their equivalents.

Claims (15)

  1. A processing method for disassemblable and re-editable audio signals, comprising:
    inputting m1 PCM signals, m1 > 0, the m1 PCM signals forming the track set C1, with C1 = {C1_i}, 0 ≤ i ≤ m1-1;
    performing add, delete or replace operations, or any combination of the three, on the track set C1 to generate a new track set C1';
    adding at least one group of auxiliary data to the track set C1' to obtain the auxiliary data set E1';
    encoding the track set C1' and the auxiliary data set E1' to obtain the compressed sound signal Sq'.
  2. The processing method for disassemblable and re-editable audio signals of claim 1, further comprising:
    inputting m2 items of auxiliary data, m2 > 0, giving the auxiliary data set E1 = {E1_j}, 0 ≤ j ≤ m2-1;
    encoding the track set C1' together with the auxiliary data sets E1 and E1' to obtain the compressed sound signal Sq''.
  3. The processing method for disassemblable and re-editable audio signals of claim 2, further comprising:
    inputting n3 PCM signals and n4 items of auxiliary data, n3 and n4 both greater than 0, giving the track set C3 = {C3_k}, 0 ≤ k ≤ n3-1, and the auxiliary data set E3 = {E3_t}, 0 ≤ t ≤ n4-1;
    performing add, delete or replace operations, or any combination of the three, on the track set C3 to generate a new track set C3';
    performing add, delete or replace operations, or any combination of the three, on the auxiliary data set E3 to obtain the auxiliary data set E3';
    encoding the track set C3' and the auxiliary data set E3' to obtain the compressed sound signal Sq'''.
  4. The processing method for disassemblable and re-editable audio signals of any one of claims 1-3, wherein the input PCM signals may come, in part or in whole, from recording-device input, local storage, network input, or any combination of the three.
  5. The processing method for disassemblable and re-editable audio signals of claim 4, wherein the PCM signals from local storage or network input may be obtained by decoding compressed audio signals.
  6. The processing method for disassemblable and re-editable audio signals of claim 5, wherein the auxiliary data may be obtained by decoding compressed audio signals.
  7. The processing method for disassemblable and re-editable audio signals of any one of claims 1-3, wherein the auxiliary data may be a track's downmix scheme, spatial position information, spatial trajectory information, reverb parameters, equalizer parameters, and the like.
  8. The panoramic sound processing method for disassemblable and re-editable audio signals of any one of claims 1-3, wherein the auxiliary data may act on all tracks or only some tracks of the track set.
  9. The panoramic sound processing method for disassemblable and re-editable audio signals of any one of claims 1-3, wherein the auxiliary data may remain fixed or vary over time.
  10. A processing device for disassemblable and re-editable audio signals, comprising:
    an audio input module including a PCM input unit, the PCM input unit inputting m1 PCM signals, m1 > 0, the m1 PCM signals forming the track set C1, with C1 = {C1_i}, 0 ≤ i ≤ m1-1;
    an audio editing module including a track editing unit, the track editing unit performing add, delete or replace operations, or any combination of the three, on the track set C1 to generate a new track set C1';
    an auxiliary data adding module that adds at least one group of auxiliary data to the track set C1' to obtain the auxiliary data set E1';
    an audio encoding module that encodes the track set C1' and the auxiliary data set E1' to obtain the compressed sound signal Sq'.
  11. The processing device for disassemblable and re-editable audio signals of claim 10, wherein the audio input module further includes an auxiliary data input unit, the auxiliary data input unit inputting m2 items of auxiliary data, m2 > 0, giving the auxiliary data set E1 = {E1_j}, 0 ≤ j ≤ m2-1;
    the audio encoding module encoding the track set C1' together with the auxiliary data sets E1 and E1' to obtain the compressed sound signal Sq''.
  12. The processing device for disassemblable and re-editable audio signals of claim 11, wherein the audio editing module further includes an auxiliary data editing unit, the auxiliary data editing unit performing add, delete or replace operations, or any combination of the three, on the auxiliary data set to obtain a new auxiliary data set.
  13. The processing device for disassemblable and re-editable audio signals of any one of claims 10-12, wherein the PCM signals input by the PCM input unit may come, in part or in whole, from recording-device input, local storage, network input, or any combination of the three.
  14. The processing device for disassemblable and re-editable audio signals of claim 13, further comprising a decoding module, the decoding module including an audio decoding unit, wherein the PCM signals from local storage or network input may be obtained by the audio decoding unit decoding compressed audio signals.
  15. The processing device for disassemblable and re-editable audio signals of claim 14, wherein the decoding module further includes an auxiliary data decoding unit, the auxiliary data being obtained by the auxiliary data decoding unit decoding compressed audio signals.
PCT/CN2020/140722 2020-03-23 2020-12-29 Processing method and device for disassemblable and re-editable audio signals WO2021190039A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010209390.9A CN111445914B (zh) 2020-03-23 2020-03-23 Processing method and device for disassemblable and re-editable audio signals
CN202010209390.9 2020-03-23

Publications (1)

Publication Number Publication Date
WO2021190039A1 true WO2021190039A1 (zh) 2021-09-30





Also Published As

Publication number Publication date
CN111445914B (zh) 2023-10-17
CN111445914A (zh) 2020-07-24


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20927735

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20927735

Country of ref document: EP

Kind code of ref document: A1


32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 25.04.2023)
