CN111508509A - Sound quality processing system and method based on deep learning - Google Patents

Sound quality processing system and method based on deep learning

Info

Publication number
CN111508509A
Authority
CN
China
Prior art keywords
deep learning
sound
gate
data
layer
Prior art date
Legal status
Pending
Application number
CN202010254598.2A
Other languages
Chinese (zh)
Inventor
吴开钢
詹启军
林榕
郑广平
Current Assignee
Guangdong Unionman Technology Co Ltd
Original Assignee
Guangdong Unionman Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Unionman Technology Co Ltd
Priority to CN202010254598.2A
Publication of CN111508509A
Legal status: Pending

Classifications

    • G: PHYSICS
        • G06: COMPUTING; CALCULATING OR COUNTING
            • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
                • G06N 3/00: Computing arrangements based on biological models
                    • G06N 3/02: Neural networks
                        • G06N 3/04: Architecture, e.g. interconnection topology
                            • G06N 3/044: Recurrent networks, e.g. Hopfield networks
                            • G06N 3/045: Combinations of networks
                        • G06N 3/08: Learning methods
                            • G06N 3/084: Backpropagation, e.g. using gradient descent
        • G10: MUSICAL INSTRUMENTS; ACOUSTICS
            • G10L: SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
                • G10L 19/00: Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
                    • G10L 19/02: ... using spectral analysis, e.g. transform vocoders or subband vocoders
                • G10L 21/00: Processing of the speech or voice signal to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
                    • G10L 21/003: Changing voice quality, e.g. pitch or formants
                        • G10L 21/007: ... characterised by the process used
                • G10L 25/00: Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
                    • G10L 25/27: ... characterised by the analysis technique
                        • G10L 25/30: ... using neural networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Biophysics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Acoustics & Sound (AREA)
  • Quality & Reliability (AREA)
  • Spectroscopy & Molecular Physics (AREA)
  • Compression, Expansion, Code Conversion, And Decoders (AREA)

Abstract

The invention relates to the technical field of sound processing, and in particular to a sound quality processing system and method based on deep learning. The invention aims to solve the technical problem that existing sound reconstruction schemes, which are based on artificial filling or data interpolation, cannot capture the essential characteristics of sound.

Description

Sound quality processing system and method based on deep learning
Technical Field
The invention relates to the technical field of sound processing, and in particular to a sound quality processing system and method based on deep learning.
Background Art
As the pursuit of higher sound quality grows and audio sampling technology advances, the quality of current lossy audio falls far short of demand. Achieving the best sound restoration under limited storage and transmission constraints has become the core problem of sound quality processing. Existing lossy compression methods such as MP3 and Advanced Audio Coding (AAC) reduce the bit rate through hand-crafted digital signal processing algorithms while still restoring the basic sound signal, and are therefore widely used.
However, existing reconstruction schemes based on artificial filling or data interpolation perform poorly. The fundamental reason is that they rely on overly coarse human subjective perception and cannot achieve an essential understanding of sound.
Disclosure of Invention
The invention aims to provide a sound quality processing system and method based on deep learning, solving the technical problem that existing sound reconstruction schemes based on artificial filling or data interpolation cannot understand the essential characteristics of sound.
In order to solve the above technical problem, an aspect of the present invention provides a sound quality processing system based on deep learning, including a sound source sampling input module, a deep learning reconstruction network, and a sound source processing output module;
the sound source sampling input module is used for sampling a lossless audio sample and a lossy audio sample to obtain raw data;
the deep learning reconstruction network extracts features from the raw data, classifies the features, performs spectrum reconstruction on each class of features, and then performs time-domain restoration to obtain time-domain waveform data;
and the sound source processing output module outputs the time-domain waveform data obtained by the deep learning reconstruction network.
Preferably, in the sound source sampling input module, the lossy audio samples are obtained from the lossless audio samples by short-time Fourier transform.
Preferably, the deep learning reconstruction network includes an input layer and an output layer; the raw data is input to the input layer, and the features of the raw data are the targets of the output layer.
Preferably, the deep learning reconstruction network is formed by sequentially connecting at least three LSTM networks, several Dropout layers, at least two Dense layers, and a Softmax classifier, with one Dropout layer connected between each two adjacent LSTM networks and each two adjacent Dense layers.
Based on the sound quality processing system, another aspect of the present invention further provides a sound quality processing method, including the following steps:
S100, sampling a lossless audio sample and a lossy audio sample to obtain raw data;
S200, extracting features from the raw data and then classifying them;
S300, performing spectrum reconstruction on each class of features;
S400, performing time-domain restoration on the spectrum-reconstructed features to obtain time-domain waveform data, and outputting the data.
Preferably, in step S200, before the features are classified, they undergo memory processing, which includes:
after the LSTM network extracts the features of the raw data,
the extracted features are passed through the reset gate and update gate between the units of each hidden layer of the Dropout layer;
this transfer process controls the degree to which the previous and current sound features are memorized and forgotten.
Preferably, the reset gate and the update gate are variant controllable gates of the forget gate, the input gate, the candidate gate, and the output gate.
Preferably, in step S200, the features that have completed memory processing are classified in the Dense layer according to the combination of sound features.
Preferably, in step S300, a spectrum reconstruction calculation is performed on each class of features by the Softmax classifier.
Preferably, in the LSTM network, part of the network's own output is fed back into the audio input frame.
From the above, applying the invention yields the following beneficial effects: using a neural network reconstruction algorithm based on deep learning, a degraded audio source is processed dynamically for each sound source scene; the processing starts from the essential characteristics of every aspect of the sound, reconstructing an effect close to lossless restoration so that the output sound quality approaches the lossless level.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings required for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the invention; for a person skilled in the art, other drawings can be derived from them without inventive effort.
FIG. 1 is a block diagram of a deep learning based sound quality processing system according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of the LSTM network gate structure of a deep-learning-based sound quality processing system according to an embodiment of the present invention;
fig. 3 is a diagram of a neural network architecture of a deep learning based sound quality processing system according to an embodiment of the present invention.
Detailed Description
The technical solution in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention. It is to be understood that the described embodiments are merely exemplary of the invention, and not restrictive of the full scope of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Among existing lossy compression methods, reconstruction schemes based on artificial filling or data interpolation perform poorly; the fundamental reason is that they rely on overly coarse human subjective perception and cannot essentially understand sound.
Referring to FIGS. 1-3, to solve the above technical problem, the present embodiment provides a sound quality processing system based on deep learning, which includes a sound source sampling input module, a deep learning reconstruction network, and a sound source processing output module.
The sound source sampling input module is used for sampling the lossless audio sample and the lossy audio sample to obtain raw data.
The deep learning reconstruction network is used for extracting features from the raw data, classifying the extracted features, performing spectrum reconstruction on each class of features, and then performing time-domain restoration to obtain time-domain waveform data.
The sound source processing output module outputs the time-domain waveform data obtained by the deep learning reconstruction network.
In the invention, the deep learning reconstruction network adopts an LSTM network to realize deep learning. Specifically, the LSTM (Long Short-Term Memory) network is a variant of the RNN (Recurrent Neural Network), proposed to overcome the RNN's inability to model long-range dependencies reasonably; compared with an ordinary RNN, an LSTM performs better on longer sequences.
The repeating network module of the LSTM implements three gate computations, namely the forget gate, the input gate, and the output gate, each responsible for a different task: the forget gate decides how much of the cell state from the previous time step is retained at the current time step; the input gate decides how much of the input at the current time step is saved into the cell state; and the output gate decides how much of the current cell state is passed to the output.
To this end, in an LSTM network, each LSTM unit takes three inputs: the cell state at the previous time step, the output of the LSTM at the previous time step, and the input at the current time step.
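To make the three-gate computation concrete, the following is a minimal NumPy sketch of one step of a standard LSTM cell; the weight-dictionary layout and names are illustrative assumptions, not taken from the patent.

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def lstm_step(x_t, h_prev, c_prev, W, U, b):
        # Forget gate: how much of the previous cell state is retained.
        f = sigmoid(W["f"] @ x_t + U["f"] @ h_prev + b["f"])
        # Input gate: how much of the current input is saved to the state.
        i = sigmoid(W["i"] @ x_t + U["i"] @ h_prev + b["i"])
        # Candidate cell state built from the current input.
        g = np.tanh(W["g"] @ x_t + U["g"] @ h_prev + b["g"])
        # Output gate: how much of the cell state is emitted as output.
        o = sigmoid(W["o"] @ x_t + U["o"] @ h_prev + b["o"])
        c_t = f * c_prev + i * g    # new cell state
        h_t = o * np.tanh(c_t)      # new output
        return h_t, c_t             # three inputs in, two states out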
Based on the LSTM network, the deep learning reconstruction network of the deep-learning-based sound quality processing system provided by the embodiment of the invention includes an input layer and an output layer; the raw data is input to the input layer, and the features of the raw data are the targets of the output layer.
In terms of connection structure, the deep learning reconstruction network is formed by sequentially connecting at least three LSTM networks, several Dropout layers, at least two Dense layers, and a Softmax classifier, with one Dropout layer connected between each two adjacent LSTM networks and each two adjacent Dense layers.
Based on the sound quality processing system, another aspect of the present invention further provides a sound quality processing method, including the following steps:
S100, sampling the lossless audio sample and the lossy audio sample to obtain raw data.
In this step, the sound source is sampled by the sound source sampling input module, and the lossy audio samples are derived from the lossless audio samples by short-time Fourier transform.
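As a minimal sketch of this sampling step, the following uses librosa to compute the short-time Fourier transform of a lossless waveform; the file name, sample rate, frame parameters, and the crude high-frequency truncation standing in for lossy degradation are all illustrative assumptions, since the patent does not specify them.

    import numpy as np
    import librosa

    # Load the lossless reference waveform (file name is a placeholder).
    y, sr = librosa.load("lossless_sample.wav", sr=44100)

    # Short-time Fourier transform: complex spectral frames.
    stft = librosa.stft(y, n_fft=1024, hop_length=512)

    # Crude stand-in for the lossy sample: zero the upper half of the
    # spectrum, mimicking high-frequency loss under heavy compression.
    lossy_stft = stft.copy()
    lossy_stft[stft.shape[0] // 2:, :] = 0.0

    # Magnitude frames of both versions serve as the raw data fed to
    # the reconstruction network: shape (frames, bins).
    lossless_frames = np.abs(stft).T
    lossy_frames = np.abs(lossy_stft).T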
S200, extracting features from the raw data and then classifying them.
Before the features are classified, they undergo memory processing, as follows:
after the LSTM network extracts the features of the raw data, the extracted features are passed through the reset gate and update gate between the units of each hidden layer of the Dropout layer, and this transfer process controls the degree to which the previous and current sound features are memorized and forgotten.
This is realized by the deep learning reconstruction network; the module takes the form of a chain of repeating neural network modules and processes the input source directly.
Features are extracted separately from the lossy audio sample and the lossless audio sample provided by the sound source sampling, yielding the features of the lossy audio sample and of the lossless audio sample respectively. The sampled raw data is used as the input of the input layer of the audio reconstruction neural network, the obtained features of the raw data are used as the targets of its output layer, and the training parameters are adjusted recursively to train the audio reconstruction neural network model.
The deep neural network uses an LSTM (Long Short-Term Memory) network, which extends the ordinary RNN by adding a memory unit to each neural unit of the hidden layer, so that the feature information memorized over the time sequence of the sound signal becomes controllable. As the features are passed through the units of each hidden layer, the degree to which the previous and current sound features are memorized and forgotten can be controlled by the reset gate and update gate, variants of the controllable gates (forget gate, input gate, candidate gate, and output gate).
The LSTM variant network used by the embodiment of the invention is shown in FIG. 2, where r denotes the reset gate and z denotes the update gate. The r gate determines whether the previous state is forgotten, acting as a combined forget gate and transfer gate: when rt tends to 0, the previous state is forgotten, and the parameters of the hidden state h(t) are cleared and set to the currently input signal. Each gate output is a real number between 0 and 1, representing the weight (or proportion) with which the corresponding signal is allowed through.
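The reset/update-gate structure described here matches the standard GRU formulation; below is a minimal NumPy sketch under that assumption (FIG. 2 of the patent may differ in detail).

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def gru_step(x_t, h_prev, W, U, b):
        r = sigmoid(W["r"] @ x_t + U["r"] @ h_prev + b["r"])  # reset gate
        z = sigmoid(W["z"] @ x_t + U["z"] @ h_prev + b["z"])  # update gate
        # When r -> 0 the previous state is shut out of the candidate,
        # so the hidden state is rebuilt from the current input alone.
        h_tilde = np.tanh(W["h"] @ x_t + U["h"] @ (r * h_prev) + b["h"])
        # The update gate blends the previous state with the candidate.
        h_t = (1.0 - z) * h_prev + z * h_tilde
        return h_t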
The neural network architecture is shown in FIG. 3: the network runs 512 neurons over the input layer twice, and 30% of the resulting activations are discarded each time to avoid overfitting. A fully connected Dense layer then classifies according to the combination of sound features, reducing as far as possible the influence of repeated feature positions on feature classification.
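Read as a Keras-style stack, the architecture of FIG. 3 might look like the sketch below. The 512 recurrent units and the 30% dropout come from the description; the Dense-layer width, the number of feature classes, and the input frame size are illustrative assumptions.

    from tensorflow.keras import layers, models

    NUM_CLASSES = 10   # assumed number of sound-feature classes
    FRAME_BINS = 513   # assumed spectral bins per frame (n_fft = 1024)

    model = models.Sequential([
        # Two recurrent runs of 512 neurons, each followed by a 30%
        # dropout to avoid overfitting, as the description states.
        layers.LSTM(512, return_sequences=True,
                    input_shape=(None, FRAME_BINS)),
        layers.Dropout(0.3),
        layers.LSTM(512),
        layers.Dropout(0.3),
        # Fully connected Dense layers classify the feature combinations.
        layers.Dense(256, activation="relu"),
        layers.Dropout(0.3),
        layers.Dense(NUM_CLASSES, activation="softmax"),  # Softmax classifier
    ])

    model.compile(optimizer="adam", loss="categorical_crossentropy")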
S300, performing spectrum reconstruction on each class of features.
Spectrum reconstruction is completed by a Softmax multi-class feature classifier, which performs the spectrum reconstruction calculation on each class of features separately.
S400, performing time-domain restoration on the spectrum-reconstructed features to obtain time-domain waveform data, and outputting the data.
Time-domain restoration is applied to the result obtained in step S300 to obtain the reconstructed audio stream.
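Time-domain restoration of a reconstructed spectrum is typically an inverse short-time Fourier transform. Below is a minimal librosa sketch, assuming recon_stft is the reconstructed complex spectrogram from step S300 and that the hop length matches the forward transform; the output file name is a placeholder.

    import librosa
    import soundfile as sf

    # Inverse STFT turns the reconstructed spectrogram back into a waveform.
    waveform = librosa.istft(recon_stft, hop_length=512)

    # Write out the reconstructed audio stream.
    sf.write("reconstructed.wav", waveform, 44100)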
The processing system and method are subject to processing errors. Therefore, in the embodiment of the invention, network self-feedback is added: the useful part of the LSTM variant network's own output is poured into the audio input frame, so that the audio input frame forms a self-feedback loop and the problem of vanishing errors is overcome.
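The patent does not detail how the output is poured into the input frame; one plausible reading is concatenating part of the previous network output onto the next input frame, sketched below under that assumption (the function and parameter names are hypothetical).

    import numpy as np

    def run_with_self_feedback(frames, step_fn, feedback_dim=64):
        # Append a slice of the previous output to each input frame so the
        # network sees its own recent output (assumed feedback mechanism).
        fb = np.zeros(feedback_dim)
        outputs = []
        for frame in frames:
            out = step_fn(np.concatenate([frame, fb]))
            fb = out[:feedback_dim]   # the "useful part" that is fed back
            outputs.append(out)
        return outputs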
In summary, the embodiment of the invention uses a neural network reconstruction algorithm based on deep learning to process a degraded audio source dynamically for each sound source scene; starting from the essential characteristics of every aspect of the sound, it reconstructs an effect close to lossless restoration so that the output sound quality approaches the lossless level.
The above-described embodiments do not limit the scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principles of the above-described embodiments shall be included in the protection scope of the technical solution.

Claims (10)

1. A sound quality processing system based on deep learning, characterized in that: the system comprises a sound source sampling input module, a deep learning reconstruction network and a sound source processing output module;
the sound source sampling input module is used for sampling a lossless audio sample and a lossy audio sample to obtain raw data;
the deep learning reconstruction network extracts features from the raw data, classifies the features, performs spectrum reconstruction on each class of features, and then performs time-domain restoration to obtain time-domain waveform data;
and the sound source processing output module outputs the time-domain waveform data obtained by the deep learning reconstruction network.
2. The sound quality processing system according to claim 1, wherein: in the sound source sampling input module, the lossy audio samples are obtained from the lossless audio samples by short-time Fourier transform.
3. The sound quality processing system according to claim 2, wherein: the deep learning reconstruction network comprises an input layer and an output layer; the raw data is input to the input layer, and the features of the raw data are the targets of the output layer.
4. The sound quality processing system according to claim 3, wherein the deep learning reconstruction network is composed of at least three LSTM networks, several Dropout layers, at least two Dense layers, and a Softmax classifier, with one Dropout layer connected between each two adjacent LSTM networks and each two adjacent Dense layers.
5. A processing method based on the sound quality processing system of claim 4, characterized in that: the method comprises the following steps:
S100, sampling a lossless audio sample and a lossy audio sample to obtain raw data;
S200, extracting features from the raw data and then classifying them;
S300, performing spectrum reconstruction on each class of features;
S400, performing time-domain restoration on the spectrum-reconstructed features to obtain time-domain waveform data, and outputting the data.
6. The processing method according to claim 5, characterized in that: in step S200, before the features are classified, they undergo memory processing, which includes:
after the LSTM network extracts the features of the raw data,
the extracted features are passed through the reset gate and update gate between the units of each hidden layer of the Dropout layer;
this transfer process controls the degree to which the previous and current sound features are memorized and forgotten.
7. The processing method according to claim 6, characterized in that: the reset gate and the update gate are variant controllable gates of the forget gate, the input gate, the candidate gate, and the output gate.
8. The processing method according to claim 7, characterized in that: in step S200, the features that have completed memory processing are classified in the Dense layer according to the combination of sound features.
9. The processing method according to claim 8, characterized in that: in step S300, spectrum reconstruction calculation is performed by the Softmax classifier for each class of features, respectively.
10. The processing method according to claim 9, wherein, in the LSTM network, part of the network's own output is fed back into the audio input frame.
CN202010254598.2A (priority 2020-04-02, filed 2020-04-02): Sound quality processing system and method based on deep learning. Status: Pending. Published as CN111508509A.

Priority Applications (1)

Application Number · Priority Date · Filing Date · Title
CN202010254598.2A · 2020-04-02 · 2020-04-02 · Sound quality processing system and method based on deep learning

Applications Claiming Priority (1)

Application Number · Priority Date · Filing Date · Title
CN202010254598.2A · 2020-04-02 · 2020-04-02 · Sound quality processing system and method based on deep learning

Publications (1)

Publication Number · Publication Date
CN111508509A · 2020-08-07

Family

Family ID: 71877456

Family Applications (1)

Application Number · Title · Status
CN202010254598.2A · Sound quality processing system and method based on deep learning · Pending

Country Status (1)

Country · Publication
CN · CN111508509A

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number · Priority date · Publication date · Assignee · Title
CN114400014A * · 2021-12-09 · 2022-04-26 · 慧之安信息技术股份有限公司 · Audio code stream compression method and device based on deep learning

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number · Priority date · Publication date · Assignee · Title
CN107293288A * · 2017-06-09 · 2017-10-24 · 清华大学 · Acoustic model modeling method based on a residual long short-term memory recurrent neural network
CN108538283A * · 2018-03-15 · 2018-09-14 · 上海电力学院 · A method for converting lip image features into speech coding parameters
CN108882111A * · 2018-06-01 · 2018-11-23 · 四川斐讯信息技术有限公司 · An interaction method and system based on a smart speaker
CN109147805A * · 2018-06-05 · 2019-01-04 · 安克创新科技股份有限公司 · Audio sound quality enhancement based on deep learning
CN109036375A * · 2018-07-25 · 2018-12-18 · 腾讯科技(深圳)有限公司 · Speech synthesis method, model training method, device, and computer equipment
CN109376848A * · 2018-09-01 · 2019-02-22 · 哈尔滨工程大学 · A simplified gated-unit neural network
CN109859767A * · 2019-03-06 · 2019-06-07 · 哈尔滨工业大学(深圳) · An environment-adaptive neural network noise reduction method, system and storage medium for digital hearing aids

Similar Documents

Publication Publication Date Title
CN110136731B (en) Cavity causal convolution generation confrontation network end-to-end bone conduction voice blind enhancement method
CN113113030B (en) High-dimensional damaged data wireless transmission method based on noise reduction self-encoder
CN113163203B (en) Deep learning feature compression and decompression method, system and terminal
Guzhov et al. ESResNe(X)t-fbsp: Learning robust time-frequency transformation of audio
CN115602152B (en) Voice enhancement method based on multi-stage attention network
CN111966998A (en) Password generation method, system, medium, and apparatus based on variational automatic encoder
CN111723874A (en) Sound scene classification method based on width and depth neural network
CN115470827A (en) Antagonistic electrocardiosignal noise reduction method based on self-supervision learning and twin network
CN101770560A (en) Information processing method and device for simulating biological neuron information processing mechanism
CN108959388A (en) information generating method and device
CN111508509A (en) Sound quality processing system and method based on deep learning
CN115630742A (en) Weather prediction method and system based on self-supervision pre-training
CN112005300B (en) Voice signal processing method and mobile device
CN113409803B (en) Voice signal processing method, device, storage medium and equipment
CN114219027A (en) Lightweight time series prediction method based on discrete wavelet transform
CN114630207B (en) Multi-sensing-node sensing data collection method based on noise reduction self-encoder
CN112819143B (en) Working memory computing system and method based on graph neural network
CN111935762B (en) Distribution network fault diagnosis method and system based on EWT and CNN under 5G load-bearing network
Wei Application of hybrid back propagation neural network in image compression
CN112669857B (en) Voice processing method, device and equipment
Faundez-Zanuy Nonlinear speech processing: Overview and possibilities in speech coding
Lv et al. A universal PCA for image compression
Dibazar et al. Speech recognition based on fundamental functional principles of the brain
Kaouri et al. Enhancement of coded speech signals using artificial neural network techniques
Aillet et al. [Re] Variational Neural Cellular Automata

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination