CN107578784B - Method and device for extracting target source from audio - Google Patents

Method and device for extracting target source from audio

Info

Publication number
CN107578784B
Authority
CN
China
Prior art keywords
signal
path
target source
virtual
frequency
Prior art date
Legal status
Active
Application number
CN201710816430.4A
Other languages
Chinese (zh)
Other versions
CN107578784A (en)
Inventor
郑羲光
尚梦宸
刘飞
Current Assignee
Suzhou Yinman Technology Co.,Ltd.
Original Assignee
Yinkman Beijing Technology Co ltd
Application filed by Yinkman Beijing Technology Co ltd filed Critical Yinkman Beijing Technology Co ltd
Priority to CN201710816430.4A
Publication of CN107578784A
Application granted
Publication of CN107578784B

Abstract

The invention discloses a method and a device for extracting a target source from audio. The method comprises the following steps: performing a time-frequency transform on the collected audio signal frame by frame, transforming the time-domain signal into a frequency-domain signal, and segmenting the frequency-domain signal with a window function to form two paths of signals; traversing the frequency points of the two paths of signals of each frame of the frequency-domain signal and calculating, at each given frequency, the virtual included angle of the corresponding virtual source; comparing the virtual included angle with a predetermined angle threshold, taking one path of signal as the target source signal according to the comparison result, and extracting and storing the frequency-domain signal of the target source signal; and converting the stored frequency-domain signal of the target source signal back into a time-domain signal by an inverse time-frequency transform and outputting the target source time-domain signal. The invention thereby separates the target source signal from the audio signal.

Description

Method and device for extracting target source from audio
Technical Field
The invention relates to the technical field of audio signal processing, in particular to a method and a device for extracting a target source from audio.
Background
Most singing-scoring systems on the current KTV market score according to pitch fluctuation or volume and cannot truly score according to the singer's voice, and such low-precision scoring increasingly fails to meet consumers' expectations. A person who sings very well and a person who sings poorly may receive the same score, or a poor singer may even score higher simply because of volume, which greatly discourages some users. Improving the KTV scoring system is therefore important. To make scoring more accurate, the consumer's singing can be compared with the original singer's voice in the song, with a higher degree of agreement yielding a higher score. The first step is to extract the original vocal in the song separately from the mixture of accompaniment and voice; however, cleanly extracting the original vocal from song audio containing both vocal and accompaniment is a difficult problem.
Disclosure of Invention
In view of the technical defects in the prior art, the invention aims to provide a method and a device for extracting a target source from audio.
The technical scheme adopted for realizing the purpose of the invention is as follows:
a method of extracting a target source from audio, comprising the steps of:
carrying out time-frequency transformation on the collected audio signals frame by frame, transforming time domain signals into frequency domain signals, and segmenting the frequency domain signals by using a window function to form a first path of signals and a second path of signals;
traversing and calculating virtual included angles of virtual sources corresponding to frequency points of a first path of signals and a second path of signals of each frame of frequency domain signals under a given frequency;
comparing the virtual included angle with a preset angle threshold value, taking the first path of signal or the second path of signal as a target source signal according to a comparison result, and extracting a frequency domain signal of the target source signal for storage;
and converting the stored frequency domain signal of the target source signal into a time domain signal by using an inverse time-frequency transform, and outputting the target source time domain signal.
Another aspect of the present invention is to provide an apparatus for extracting a target source from audio, including:
the time domain and frequency domain conversion and division module is used for carrying out time frequency conversion on the collected audio signals frame by frame, converting the time domain signals into frequency domain signals, and dividing the frequency domain signals by using a window function to form a first path of signals and a second path of signals;
the virtual included angle calculation module is used for calculating the virtual included angle of a virtual source corresponding to each frequency point of a first path of signal and a second path of signal of each frame of frequency domain signal under a given frequency in a traversing manner;
the target source signal storage module is used for comparing the virtual included angle with a preset angle threshold value, taking the first path of signal or the second path of signal as a target source signal according to a comparison result and extracting a frequency domain signal of the target source signal for storage;
and the frequency domain time domain conversion output module is used for converting the stored frequency domain signal of the target source signal into a time domain signal by using an inverse time-frequency transform and outputting the target source time domain signal.
In the method, the audio signal to be separated is converted frame by frame from the time domain to the frequency domain and divided into a first path of signal and a second path of signal. The virtual included angle of the virtual source corresponding to each frequency point of the two paths of each frame of frequency domain signal is then calculated by traversal, the target source signal that meets the requirement is separated out and stored according to the angle, and the target source signal is output after conversion from the frequency domain back to the time domain. The target source signal is thus separated and extracted from the audio signal, which facilitates subsequent processing and use.
Drawings
FIG. 1 is a flow diagram of a method of extracting a target source from audio;
FIG. 2 is a schematic diagram of the calculation of virtual included angles of virtual sources;
FIG. 3 is a schematic diagram of the structure of an apparatus for extracting a target source from audio.
Detailed Description
The invention is described in further detail below with reference to the figures and specific examples. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Referring to FIG. 1, a method for extracting a target source from audio includes the steps of:
carrying out time-frequency transformation on the collected audio signals frame by frame, transforming time domain signals into frequency domain signals, and segmenting the frequency domain signals by using a window function to form a first path of signals a and a second path of signals b;
traversing and calculating the virtual included angle θ_ab(k) of the virtual source corresponding to each frequency point of the first path of signal and the second path of signal of each frame of frequency domain signal at a given frequency k;
comparing the virtual included angle θ_ab(k) with a predetermined angle threshold, taking the first path of signal or the second path of signal as the target source signal according to the comparison result, and extracting the frequency domain signal of the target source signal for storage;
and converting the stored frequency domain signal of the target source signal into a time domain signal by using an inverse time-frequency transform, and outputting the target source time domain signal.
The method can separate the voice (the target source) from the mixed audio of a song's vocal and accompaniment in a KTV system, store it separately, and output it, thereby providing a basis for a subsequent KTV scoring system to evaluate a singer's true singing level accurately. When the voice is extracted from the mixed audio of vocal and accompaniment, suppose the first path of signal is taken as the accompaniment signal and the second path of signal as the voice signal: the virtual included angle of the virtual source is calculated, and its size is compared with a predetermined angle threshold, which is set according to the difference between the virtual included angles of the virtual sources of the voice and accompaniment signals. According to the comparison result, the signal corresponding to the qualifying virtual source is stored separately as the voice signal. Applying the same procedure to each frame of the audio signal in turn, the vocal of the song can be stored separately from the mixed audio signal of vocal and accompaniment.
The predetermined angle threshold is determined empirically and may be 5 degrees, 3 degrees, or another angle. The window function may be of the same size or of different sizes as required, so as to reduce spectral energy leakage as much as possible when the window function segments the frequency domain signal. When the stored frequency domain signal of the target source signal is converted into a time domain signal by an inverse time-frequency transform (e.g., an inverse short-time Fourier transform, ISTFT) and the target source time domain signal is output, the window size used during segmentation is correspondingly applied for reconstruction.
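By way of illustration, a minimal sketch of this analysis/synthesis step follows; the Hann window, the frame length of 2048 samples, and the use of scipy's stft/istft routines are assumptions made for the example, not choices prescribed by the invention.

```python
import numpy as np
from scipy.signal import stft, istft

def analyze(x, fs, nperseg=2048):
    """Frame-by-frame time-frequency transform (STFT) of one path of signal."""
    _, _, Z = stft(x, fs=fs, window='hann', nperseg=nperseg)
    return Z  # complex matrix: rows = frequency points k, columns = frames

def synthesize(Z, fs, nperseg=2048):
    """Inverse time-frequency transform (ISTFT); the same window size used
    during segmentation is applied for reconstruction."""
    _, x = istft(Z, fs=fs, window='hann', nperseg=nperseg)
    return x
```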
The virtual included angle of the virtual source is calculated as follows:

θ_ab(k) = arctan[ (A_a(k) − A_b(k)) / (A_a(k) + A_b(k)) · tan(φ_ab/2) ]

where θ_ab(k) is the virtual included angle of the virtual source corresponding to each frequency point of the first path of signal a and the second path of signal b at frequency k, A_a(k) and A_b(k) are respectively the amplitudes at frequency k of the first path of signal a and the second path of signal b, and φ_ab is the included angle between the first path of signal a and the second path of signal b.
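The formula above is rendered from the stereophonic law of tangents, which fits the quantities defined here and the behaviour described below; the original published formula is only an image placeholder in this text, so both the rendering and this sketch should be read as an assumption. φ_ab is taken as 60 degrees, i.e. loudspeakers at ±30 degrees as in FIG. 2.

```python
import numpy as np

def virtual_angles(Za, Zb, phi_deg=60.0):
    """Virtual included angle theta_ab(k) per frequency point, from the
    amplitudes A_a(k), A_b(k) of the two paths (a sketch assuming the
    stereophonic law of tangents)."""
    Aa, Ab = np.abs(Za), np.abs(Zb)
    ratio = (Aa - Ab) / (Aa + Ab + 1e-12)  # epsilon avoids 0/0 on silent points
    # arctan maps ratio in [-1, 1] onto angles in [-phi/2, +phi/2]
    return np.degrees(np.arctan(ratio * np.tan(np.radians(phi_deg / 2.0))))
```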
Owing to the sparsity principle of audio signals, at the same time and at the same frequency, one of the outputs of a time-frequency point of the first path of signal a and of the second path of signal b is always far larger than the other. A time-frequency point is the amplitude of the signal, expressed in dB, in a coordinate system whose Y axis is frequency (Hz) and whose X axis is time. As shown in the following table:

Frequency (Hz)           20    40    60    …
First path of signal a   a1    a2    a3    …
Second path of signal b  b1    b2    b3    …

According to the sparsity principle, when the first path of signal a is compared with the second path of signal b, a1 denotes the output of the time-frequency point of the short-time Fourier transform (STFT) of the first path of signal a at 20 Hz, and b1 denotes the corresponding output of the second path of signal b at 20 Hz; a1 and b1 are complex numbers. It always holds that either |a1| ≫ |b1| with b1 ≈ 0, or |a1| ≪ |b1| with a1 ≈ 0, and likewise at the other frequencies.

Therefore, by using the sparsity principle of the signals, the two paths of signals can be judged and distinguished by the virtual included angle of the virtual source, and the required target signal can be separated out and stored.
Referring specifically to FIG. 2, FIG. 2 illustrates how the virtual included angle θ_ab of the virtual source 40 is calculated. A_a and A_b are the amplitudes of the first path of signal a and the second path of signal b, and the included angle φ_ab between the two paths of signals ranges from −30° to 30°. The two loudspeakers in FIG. 2, the left loudspeaker 10 and the right loudspeaker 20, transmit audio signals to the listener 30 located midway between them, so the sound of frequency k transmitted from the two loudspeakers and arriving at the ears of the listener 30 can be represented by the virtual source 40 with the amplitudes A_a(k) and A_b(k) of the first path of signal a and the second path of signal b.

The amplitudes A_a(k) and A_b(k) of the first path of signal a and the second path of signal b are thus represented by the virtual source at frequency k presented by the two paths of signals obtained after the time-frequency transform. The invention is, of course, not limited to a virtual source presented by two paths of signals: a plurality of signals may also be handled. With a plurality of signals (more than two), the known signal (the signal of the target source) is kept as one path and the other signals A_i are added together, sum(|A_i|) = Σ_i |A_i| = |A_1| + … + |A_I| (1 ≤ i ≤ I); the combined signal so formed is processed with the presented virtual source, which in effect reduces the processing to the two-path case.
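A short sketch of this reduction to two paths; applying the summation point by point to the amplitude spectra is an assumption of how sum(|A_i|) is used per frequency point.

```python
import numpy as np

def sum_other_paths(amplitude_list):
    """Combine all non-target paths into one path by adding their
    amplitudes point by point: sum(|A_i|) = |A_1| + ... + |A_I|."""
    return np.sum([np.abs(A) for A in amplitude_list], axis=0)
```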
Thus, the virtual included angle of the virtual source can be calculated according to the above formula, yielding a positive or negative value of θ_ab(k) at a given frequency k. The virtual source is represented by the amplitudes and the virtual included angle of the selected first path of signal a and second path of signal b as {A_a(k), A_b(k), θ_ab}. θ_ab is the side information, i.e. the virtual included angle between the first path of signal a and the second path of signal b (the virtual-source included angle); with the aid of this side information, the original signal can be analyzed to determine whether it is the target source, i.e. the voice.
After the virtual included angle of the virtual source is calculated, and given that the included angle φ_ab between the two paths of signals is fixed and lies in the range (−30°, 30°), at a given frequency in the same frame, if the output of the time-frequency point of the first path of signal a is greater than that of the second path of signal b (assuming the first path of signal a is the accompaniment and the second path of signal b is the voice), the virtual included angle θ_ab leans toward the first path of signal a; otherwise, it leans toward the second path of signal b.
For convenience of judgment, the predetermined angle threshold between the two signal angles may be determined empirically, for example zero degrees: when the virtual included angle θ_ab is above the zero-degree angle threshold, the time-frequency point is classified as voice, as shown in FIG. 2. By traversing and classifying every frame at every frequency in this way, the target source can be separated from the mixed audio; finally, the stored frequency domain signal is converted back to the time domain by the inverse time-frequency transform, and the target source signal, i.e. the voice, is output.
Specifically, when judging whether a time-frequency point is voice according to the virtual included angle, the procedure is as follows: when the virtual included angle θ_ab(k) of the virtual source is greater than the predetermined angle threshold, the first path of signal or the second path of signal corresponding to the virtual source is taken as the target source signal, and the frequency domain signal of the target source signal is then extracted and stored.
The calculation for extracting the target source signal is as follows (assuming the first path of signal a is the target source signal, i.e. A_a(k) contains the target source signal to be extracted):
S(k) = A_a(k) · M(k),

wherein

M(k) = 1 if θ_ab(k) > T, and M(k) = 0 otherwise;

M(k) is the extraction vector of the target source signal, T is a given threshold, and S(k) is the target source signal.
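A sketch of this extraction rule follows; applying M(k) to the complex frequency-domain points of the first path (rather than to the magnitudes alone) is a practical assumption that preserves the phase needed by the inverse transform, not something the text specifies.

```python
def extract_target(Za, theta, T=0.0):
    """S(k) = A_a(k) * M(k) with M(k) = 1 where theta_ab(k) > T, else 0."""
    M = (theta > T).astype(float)  # the extraction vector M(k)
    return Za * M                  # masked spectrum of the target source
```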
The transform between the time domain and the frequency domain includes, but is not limited to, the Fourier transform, the wavelet transform, the MDCT, and the like.
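Tying the sketches above together, a minimal end-to-end example under the same assumptions; the stereo file layout, the zero-degree threshold, and the file handling are illustrative, not part of the invention.

```python
import numpy as np
from scipy.io import wavfile

def extract_vocal(path_in, path_out, threshold_deg=0.0):
    fs, x = wavfile.read(path_in)                 # stereo mix: columns = channels
    Za = analyze(x[:, 0].astype(float), fs)       # first path of signal a
    Zb = analyze(x[:, 1].astype(float), fs)       # second path of signal b
    theta = virtual_angles(Za, Zb)                # virtual included angles per point
    S = extract_target(Za, theta, threshold_deg)  # keep points classified as target
    y = synthesize(S, fs)                         # back to the time domain
    wavfile.write(path_out, fs, y.astype(np.int16))  # sketch: no clipping guard
```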
The present invention also aims to provide an apparatus for extracting a target source from audio, as shown in FIG. 3, including:
the time domain and frequency domain conversion and division module is used for carrying out time frequency conversion on the collected audio signals frame by frame, converting the time domain signals into frequency domain signals, and dividing the frequency domain signals by using a window function to form a first path of signals and a second path of signals;
the virtual included angle calculation module is used for traversing and calculating the virtual included angle θ_ab(k) of the virtual source corresponding to each frequency point of the first path of signal and the second path of signal of each frame of frequency domain signal at a given frequency;
the target source signal storage module is used for comparing the virtual included angle θ_ab(k) with the predetermined angle threshold, taking the first path of signal or the second path of signal as the target source signal according to the comparison result, and extracting the frequency domain signal of the target source signal for storage;
and the frequency domain time domain conversion output module is used for converting the stored frequency domain signal of the target source signal into a time domain signal by using an inverse time-frequency transform and outputting the target source time domain signal.
The apparatus can separate the voice (the target source) from the mixed audio of a song's vocal and accompaniment in a KTV system, store it separately, and output it, thereby providing a basis for the KTV scoring system to evaluate a singer's true singing level accurately. When the apparatus is used to extract the voice from the mixed audio of vocal and accompaniment, for example with the first path of signal taken as the accompaniment signal and the second path of signal as the voice signal, the virtual included angle of the virtual source is calculated and its size is compared with a predetermined angle threshold, set according to the difference between the virtual included angles of the virtual sources of the voice and accompaniment signals. According to the comparison result, the signal corresponding to the qualifying virtual source is stored separately as the voice signal; applying the same procedure to each frame of the audio signal in turn, the original vocal of the song can be separated and stored from the mixed audio signal of vocal and accompaniment.
The predetermined angle threshold is determined empirically and may be 5 degrees, 3 degrees, or another angle. The window function may be of the same size or of different sizes as required, so as to reduce spectral energy leakage as much as possible when the window function segments the frequency domain signal. When the stored frequency domain signal of the target source signal is converted into a time domain signal by an inverse time-frequency transform (such as the inverse short-time Fourier transform, ISTFT) and the target source time domain signal is output, the window size used during segmentation is correspondingly applied for reconstruction.
The virtual included angle of the virtual source is calculated as follows:

θ_ab(k) = arctan[ (A_a(k) − A_b(k)) / (A_a(k) + A_b(k)) · tan(φ_ab/2) ]

where θ_ab(k) is the virtual included angle of the virtual source, A_a(k) and A_b(k) are respectively the amplitudes at frequency k of the first path of signal and the second path of signal, and φ_ab is the included angle between the first path of signal and the second path of signal.
Specifically, when judging whether a time-frequency point is voice according to the virtual included angle, the procedure is as follows: when the virtual included angle θ_ab(k) of the virtual source is greater than the predetermined angle threshold, the first path of signal or the second path of signal corresponding to the virtual source is taken as the target source signal, and the frequency domain signal of the target source signal is extracted and stored.
For the description of the virtual source and the virtual included angle, refer to the description above of how the virtual included angle of the virtual source representing the amplitudes of the first path of signal a and the second path of signal b is calculated, and to FIG. 2.
The foregoing is only a preferred embodiment of the present invention. It should be noted that those skilled in the art can make various modifications and improvements without departing from the principle of the present invention, and such modifications and improvements should also be regarded as falling within the protection scope of the present invention.

Claims (6)

1. A method of extracting a target source from audio, comprising the steps of:
carrying out time-frequency transformation on the collected audio signals frame by frame, transforming time domain signals into frequency domain signals, and segmenting the frequency domain signals by using a window function to form a first path of signals and a second path of signals;
traversing and calculating virtual included angles of virtual sources corresponding to frequency points of a first path of signals and a second path of signals of each frame of frequency domain signals under a given frequency;
comparing the virtual included angle with a preset angle threshold value, taking the first path of signal or the second path of signal as a target source signal according to a comparison result, and extracting a frequency domain signal of the target source signal for storage;
converting the stored frequency domain signal of the target source signal into a time domain signal by using an inverse time-frequency transform, and outputting the target source time domain signal;
the calculation mode of the virtual included angle of the virtual source is as follows:

θ_ab(k) = arctan[ (A_a(k) − A_b(k)) / (A_a(k) + A_b(k)) · tan(φ_ab/2) ]

θ_ab(k) represents the virtual included angle of the virtual source corresponding to each frequency point of the first path of signal a and the second path of signal b at frequency k, A_a(k) and A_b(k) respectively represent the amplitudes at frequency k of the first path of signal a and the second path of signal b, and φ_ab represents the included angle between the first path of signal a and the second path of signal b.
2. The method as claimed in claim 1, wherein when the virtual angle of the virtual source is greater than a predetermined angle threshold, the first or second channel of signal corresponding to the virtual source is regarded as the target source signal, and then the frequency domain signal of the target source signal is extracted and stored.
3. The method as claimed in claim 1, wherein if the first path of signal a is the target source signal, the calculation method for extracting the target source signal is as follows:

S(k) = A_a(k) · M(k),

wherein

M(k) = 1 if θ_ab(k) > T, and M(k) = 0 otherwise;

M(k) is the extraction vector of the target source signal, T is a given threshold, and S(k) is the target source signal.
4. An apparatus for extracting a target source from audio, comprising:
the time domain and frequency domain conversion and segmentation module is used for carrying out time-frequency transformation on the acquired audio signals frame by frame, transforming the time domain signals into frequency domain signals and forming a first path of signals and a second path of signals;
the virtual included angle calculation module is used for calculating the virtual included angle of a virtual source corresponding to each frequency point of a first path of signal and a second path of signal of each frame of frequency domain signal under a given frequency in a traversing manner;
the target source signal storage module is used for comparing the virtual included angle with a preset angle threshold value, taking the first path of signal or the second path of signal as a target source signal according to a comparison result and extracting a frequency domain signal of the target source signal for storage;
the frequency domain time domain conversion output module is used for converting the stored frequency domain signal of the target source signal into a time domain signal by using an inverse time-frequency transform and outputting the target source time domain signal;
the calculation mode of the virtual included angle of the virtual source is as follows:

θ_ab(k) = arctan[ (A_a(k) − A_b(k)) / (A_a(k) + A_b(k)) · tan(φ_ab/2) ]

θ_ab(k) represents the virtual included angle of the virtual source corresponding to each frequency point of the first path of signal a and the second path of signal b at frequency k, A_a(k) and A_b(k) respectively represent the amplitudes at frequency k of the first path of signal a and the second path of signal b, and φ_ab represents the included angle between the first path of signal a and the second path of signal b.
5. The apparatus for extracting a target source from audio according to claim 4, wherein when the virtual angle of the virtual source is greater than a predetermined angle threshold, the first path of signal or the second path of signal corresponding to the virtual source is regarded as the target source signal, and then the frequency domain signal of the target source signal is extracted and stored.
6. The apparatus for extracting a target source from audio as claimed in claim 4, wherein if the first path of signal a is the target source signal, the calculation method for extracting the target source signal is as follows:

S(k) = A_a(k) · M(k),

wherein

M(k) = 1 if θ_ab(k) > T, and M(k) = 0 otherwise;

M(k) is the extraction vector of the target source signal, T is a given threshold, and S(k) is the target source signal.
CN201710816430.4A 2017-09-12 2017-09-12 Method and device for extracting target source from audio Active CN107578784B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710816430.4A CN107578784B (en) 2017-09-12 2017-09-12 Method and device for extracting target source from audio

Publications (2)

Publication Number Publication Date
CN107578784A CN107578784A (en) 2018-01-12
CN107578784B (en) 2020-12-11

Family

ID=61036413

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710816430.4A Active CN107578784B (en) 2017-09-12 2017-09-12 Method and device for extracting target source from audio

Country Status (1)

Country Link
CN (1) CN107578784B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108735227B (en) * 2018-06-22 2020-05-19 北京三听科技有限公司 Method and system for separating sound source of voice signal picked up by microphone array

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102522093A (en) * 2012-01-09 2012-06-27 武汉大学 Sound source separation method based on three-dimensional space audio frequency perception
CN104103277B (en) * 2013-04-15 2017-04-05 北京大学深圳研究生院 A kind of single acoustics vector sensor target voice Enhancement Method based on time-frequency mask
CN105723459B (en) * 2013-11-15 2019-11-26 华为技术有限公司 For improving the device and method of the perception of sound signal
JP6263383B2 (en) * 2013-12-26 2018-01-17 Pioneer DJ株式会社 Audio signal processing apparatus, audio signal processing apparatus control method, and program
CN106537502B (en) * 2014-03-31 2019-10-15 索尼公司 Method and apparatus for generating audio content
EP2960899A1 (en) * 2014-06-25 2015-12-30 Thomson Licensing Method of singing voice separation from an audio mixture and corresponding apparatus

Also Published As

Publication number Publication date
CN107578784A (en) 2018-01-12

Similar Documents

Publication Publication Date Title
US11289072B2 (en) Object recognition method, computer device, and computer-readable storage medium
Cummins et al. An image-based deep spectrum feature representation for the recognition of emotional speech
Wang et al. Deep extractor network for target speaker recovery from single channel speech mixtures
Das et al. Fundamentals, present and future perspectives of speech enhancement
CN103811020B (en) A kind of intelligent sound processing method
Zhou et al. Hidden voice commands: Attacks and defenses on the VCS of autonomous driving cars
CN102129456B (en) Method for monitoring and automatically classifying music factions based on decorrelation sparse mapping
CN106373589B (en) A kind of ears mixing voice separation method based on iteration structure
US10008218B2 (en) Blind bandwidth extension using K-means and a support vector machine
CN104183245A (en) Method and device for recommending music stars with tones similar to those of singers
CN108520756B (en) Method and device for separating speaker voice
WO2015111014A1 (en) A method and a system for decomposition of acoustic signal into sound objects, a sound object and its use
Burgos Gammatone and MFCC features in speaker recognition
CN107578784B (en) Method and device for extracting target source from audio
CN107895582A (en) Towards the speaker adaptation speech-emotion recognition method in multi-source information field
Sofianos et al. H-Semantics: A hybrid approach to singing voice separation
Kundegorski et al. Two-Microphone dereverberation for automatic speech recognition of Polish
Dat et al. Robust speaker verification using low-rank recovery under total variability space
Chougule et al. Channel robust MFCCs for continuous speech speaker recognition
Lopatka et al. Improving listeners' experience for movie playback through enhancing dialogue clarity in soundtracks
Tomchuk Spectral Masking in MFCC Calculation for Noisy Speech
Prasanna Kumar et al. Supervised and unsupervised separation of convolutive speech mixtures using f 0 and formant frequencies
Sadewa et al. Speaker recognition implementation for authentication using filtered MFCC—VQ and a thresholding method
Sofianos et al. Singing voice separation based on non-vocal independent component subtraction and amplitude discrimination
Gergen et al. Reduction of reverberation effects in the MFCC modulation spectrum for improved classification of acoustic signals.

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20230322

Address after: 201, No. 33, Southeast Avenue, Changshu Hi tech Industrial Development Zone, Suzhou City, Jiangsu Province, 215500

Patentee after: Suzhou Yinman Technology Co.,Ltd.

Address before: 100029 9th Floor (08), No. 19 Ritan North Road, Chaoyang District, Beijing (1056 Chaowai Incubator)

Patentee before: YINMAN (BEIJING) TECHNOLOGY CO.,LTD.
