CN113780180B - Audio long-term fingerprint extraction and matching method - Google Patents


Info

Publication number
CN113780180B
CN113780180B (application CN202111068271.7A)
Authority
CN
China
Prior art keywords
frame
frequency
audio
time
long
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111068271.7A
Other languages
Chinese (zh)
Other versions
CN113780180A (en)
Inventor
陈书军 (Chen Shujun)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yu Jiali
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual
Priority to CN202111068271.7A
Publication of CN113780180A
Application granted
Publication of CN113780180B
Legal status: Active

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F2218/00 — Aspects of pattern recognition specially adapted for signal processing
    • G06F2218/02 — Preprocessing
    • G06F2218/04 — Denoising
    • G06F2218/08 — Feature extraction
    • G06F2218/12 — Classification; Matching

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Compression, Expansion, Code Conversion, And Decoders (AREA)

Abstract

The invention belongs to the technical field of audio signal processing, and in particular relates to an audio long-term fingerprint extraction and matching method comprising the following steps: S1: input an audio signal (PCM) and resample it; S2: frame, window, and DFT-transform (discrete Fourier transform) the resampled audio signal to obtain a frame spectrum; S3: perform inter-frame smoothing on the frame spectrum to obtain an updated frame spectrum; S4: extract frame-level short-time features from the updated frame spectrum; S5: process the frame-level short-time features to extract frame-group long-term features. The method overcomes the short duration and instability of traditional audio fingerprints: it extracts spectral sub-band features from each audio frame, computes their variation along the time-axis direction to form long-term features, and obtains the best similarity and offset simultaneously during matching.

Description

Audio long-term fingerprint extraction and matching method
Technical Field
The invention belongs to the technical field of audio signal processing, and particularly relates to an audio long-term fingerprint extraction and matching method.
Background
Similar to human biological fingerprints, an audio fingerprint is a set of effective, robust acoustic features extracted from an audio signal that uniquely represents its content. Audio fingerprint technology is widely used in audio retrieval, signal comparison, copyright protection, and related fields.
The acoustic features extracted by existing audio fingerprinting techniques are typically simple physical features (zero-crossing rate, spectral peaks, spectral density, etc.) or auditory perception features (pitch, melody, rhythm, etc.); common algorithms include Shazam [1], Chromaprint [2], and Echoprint [3].
Features extracted by existing audio fingerprint algorithms are short-term: each represents only one or a few audio frames, i.e., tens to hundreds of milliseconds of audio. When fingerprints must be matched over audio segments or whole audio files, such features risk unstable representation and large computation and storage costs.
Disclosure of Invention
In order to overcome the defects of the prior art, the invention provides an audio long-term fingerprint extraction and matching method, which solves the unstable feature representation and the large computation and storage costs that arise when fingerprints are matched over audio segments or whole audio files.
The technical scheme adopted to solve this technical problem is as follows: an audio long-term fingerprint extraction method comprising the following steps:
S1: input an audio signal (PCM) and resample it;
S2: frame, window, and DFT-transform (discrete Fourier transform) the resampled audio signal to obtain a frame spectrum;
S3: perform inter-frame smoothing on the frame spectrum to obtain an updated frame spectrum;
S4: extract frame-level short-time features from the updated frame spectrum;
S5: process the frame-level short-time features to extract frame-group long-term features.
Preferably, in S1, the specific resampling operation is to take the 110 Hz–7 kHz range as the analysis band and, per the Nyquist sampling theorem, set the resampling rate of the input signal to 16 kHz to avoid sampling distortion.
Preferably, in S2, the specific framing, windowing, and DFT operations are to frame the resampled signal into 4096-sample (256 ms) frames with 50% overlap; after framing, a Hamming window is applied frame by frame and a DFT is performed to obtain the frame spectrum, i.e., the relation between frequency and energy.
Preferably, in S3, the specific inter-frame smoothing operation is to take a weighted average of the spectra of 5 adjacent frames with a sliding window, increasing spectral stability, to obtain the updated frame spectrum Ŝ_i(f) = Σ_{k=−2}^{2} w_k · S_{i+k}(f), where the sliding window steps one frame at a time.
Preferably, in S4, the specific steps of frame-level short-time feature extraction are as follows:
A1: divide the frame spectrum into logarithmic frequency-domain sub-bands;
A2: calculate the average spectral energy per sub-band;
A3: L2-normalize the sub-band spectral energies to obtain the frame-level short-time feature.
Preferably, in A1, since human hearing perceives frequency logarithmically, dividing the frame spectrum into logarithmic frequency-domain sub-bands means converting each frequency f in the frame spectrum to the logarithmic frequency f′ = log(f); in the logarithmic frequency domain, the target frequency range [110 Hz, 7 kHz] is divided into 16 sub-bands of equal width.
Preferably, in A2, calculating the sub-band average spectral energy means computing, for each audio frame, the average spectral energy over each of the 16 frequency sub-bands, forming a 16-dimensional vector.
Preferably, in A3, L2-normalizing the sub-band spectral energies yields the frame-level short-time feature: the 16-dimensional vector is L2-normalized and recorded as the short-time feature V of the audio frame.
Preferably, in S5, the specific operation of frame-group long-term feature extraction is to group a fixed number of consecutive audio frames into a frame group, apply a DFT to the frame-level short-time features along the time-axis direction, and retain the stable low-frequency components to form the frame-group long-term feature.
An audio long-term fingerprint matching method comprising the following steps:
B1: extract the frame-group long-term features of the 2 audio files or segments to be matched;
B2: perform frame-group-level matching on the 2 sets of frame-group long-term features and determine the matching relationship.
The technical effects and advantages of the invention are:
1. In the audio long-term fingerprint extraction and matching method provided by the invention, fingerprint extraction proceeds as follows: after resampling, framing, windowing, and DFT, the resulting frame spectrum is smoothed across frames; frame-level short-time features are then extracted as spectral sub-band features of each audio frame, and their variation along the time-axis direction is computed to form long-term features. Audio fingerprints are thus extracted quickly, subsequent matching between two or more sets of fingerprints is facilitated, and the short duration and instability of traditional audio fingerprints are overcome.
2. In the method, after fingerprints are extracted from different audio signals and the frame-group long-term features are quickly obtained, the similarity between two or more audio signals is computed to decide whether they match, and the best similarity and offset are obtained simultaneously during matching.
Drawings
The invention is further described below with reference to the accompanying drawings.
FIG. 1 is a flow chart of fingerprint extraction of an audio signal according to the present invention;
FIG. 2 is a schematic diagram of framing in the fingerprint extraction process of the present invention;
FIG. 3 is a flow chart of frame-level short-term feature extraction in the present invention;
FIG. 4 is a schematic diagram of feature extraction at frame group length in the present invention;
FIG. 5 is a diagram of the best offset and similarity between frame groups according to the present invention.
Detailed Description
The invention is further described below in connection with specific embodiments, so that its technical means, creative features, objectives, and effects are easy to understand.
As shown in FIGS. 1 to 5, the audio long-term fingerprint extraction method according to the present invention comprises the following steps:
S1: input an audio signal (PCM) and resample it;
S2: frame, window, and DFT-transform (discrete Fourier transform) the resampled audio signal to obtain a frame spectrum;
S3: perform inter-frame smoothing on the frame spectrum to obtain an updated frame spectrum;
S4: extract frame-level short-time features from the updated frame spectrum;
S5: process the frame-level short-time features to extract frame-group long-term features.
As an embodiment of the invention, in S1, the specific resampling operation is to take the 110 Hz–7 kHz range as the analysis band and, per the Nyquist sampling theorem, set the resampling rate of the input signal to 16 kHz to avoid sampling distortion.
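As an illustration only, a minimal sketch of this resampling step in Python follows, assuming scipy is available and the input is mono PCM; the patent itself fixes only the 16 kHz target rate and the 110 Hz–7 kHz analysis band.

```python
import numpy as np
from scipy.signal import resample_poly

def resample_to_16k(pcm: np.ndarray, src_rate: int) -> np.ndarray:
    """Resample mono PCM to 16 kHz; 16 kHz satisfies Nyquist for the
    7 kHz upper edge of the 110 Hz-7 kHz analysis band."""
    target_rate = 16000
    g = np.gcd(src_rate, target_rate)
    return resample_poly(pcm, target_rate // g, src_rate // g)
```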
As an embodiment of the invention, in S2, the specific framing, windowing, and DFT operations are to frame the resampled signal into 4096-sample (256 ms) frames with 50% overlap; after framing, a Hamming window is applied frame by frame and a DFT is performed to obtain the frame spectrum, i.e., the relation between frequency and energy.
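A sketch of this step under the stated parameters (4096-sample frames, 50% overlap, Hamming window) follows; taking the magnitude of the real DFT as the "energy" axis is an assumption, since the patent does not say whether magnitude or squared magnitude is used.

```python
import numpy as np

def frame_spectra(signal_16k: np.ndarray) -> np.ndarray:
    """Split into 4096-sample (256 ms at 16 kHz) frames with 50% overlap,
    apply a Hamming window frame by frame, and DFT each frame."""
    frame_len, hop = 4096, 2048
    window = np.hamming(frame_len)
    n_frames = 1 + (len(signal_16k) - frame_len) // hop
    spectra = np.empty((n_frames, frame_len // 2 + 1))
    for i in range(n_frames):
        frame = signal_16k[i * hop : i * hop + frame_len] * window
        spectra[i] = np.abs(np.fft.rfft(frame))  # frequency-energy relation (assumed magnitude)
    return spectra
```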
In one embodiment of the present invention, in S3, the specific inter-frame smoothing operation is to take a weighted average of the spectra of 5 adjacent frames with a sliding window, which aims to increase the stability of the spectrum, obtaining the updated frame spectrum Ŝ_i(f) = Σ_{k=−2}^{2} w_k · S_{i+k}(f), where the sliding window steps one frame at a time.
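The published weights of the 5-frame average survive only as an image in the source, so the sketch below assumes a symmetric triangular kernel; only the 5-frame window and the one-frame step are taken from the text.

```python
import numpy as np

def smooth_spectra(spectra: np.ndarray) -> np.ndarray:
    """Weighted average over 5 adjacent frames, stepping one frame at a time.
    The [1,2,3,2,1]/9 kernel is an assumption; the patent's weights are not published."""
    w = np.array([1.0, 2.0, 3.0, 2.0, 1.0])
    w /= w.sum()
    padded = np.pad(spectra, ((2, 2), (0, 0)), mode="edge")  # replicate edge frames
    return sum(w[k] * padded[k : k + len(spectra)] for k in range(5))
```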
In S4, the specific steps of frame-level short-time feature extraction are as follows:
A1: divide the frame spectrum into logarithmic frequency-domain sub-bands;
A2: calculate the average spectral energy per sub-band;
A3: L2-normalize the sub-band spectral energies to obtain the frame-level short-time feature.
In an embodiment of the present invention, in A1, since human hearing perceives frequency logarithmically, dividing the frame spectrum into logarithmic frequency-domain sub-bands means converting each frequency f in the frame spectrum to the logarithmic frequency f′ = log(f); in the logarithmic frequency domain, the target frequency range [110 Hz, 7 kHz] is divided into 16 sub-bands of equal width.
As an embodiment of the present invention, in A2, the sub-band average spectral energy is calculated, i.e., for each audio frame the average spectral energy is computed over each of the 16 frequency sub-bands, forming a 16-dimensional vector.
In an embodiment of the present invention, in A3, L2-normalizing the sub-band spectral energies yields the frame-level short-time feature: the 16-dimensional vector is L2-normalized and recorded as the short-time feature V of the audio frame.
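Steps A1–A3 can be sketched as follows. The 16 equal-log-width bands over 110 Hz–7 kHz and the L2 normalization come from the text; the base of the logarithm and the bin-to-band assignment are assumptions.

```python
import numpy as np

def short_time_feature(frame_spectrum: np.ndarray, fs: int = 16000) -> np.ndarray:
    """A1-A3: average spectral energy over 16 equal-log-width sub-bands
    in 110 Hz-7 kHz, then L2-normalise into the 16-dimensional feature V."""
    n_bins = len(frame_spectrum)
    freqs = np.linspace(0.0, fs / 2, n_bins)                  # rfft bin frequencies
    edges = np.logspace(np.log10(110.0), np.log10(7000.0), 17)  # 16 log-width bands
    v = np.zeros(16)
    for b in range(16):
        mask = (freqs >= edges[b]) & (freqs < edges[b + 1])
        if mask.any():
            v[b] = frame_spectrum[mask].mean()
    norm = np.linalg.norm(v)
    return v / norm if norm > 0 else v
```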
In S5, the specific operation of frame-group long-term feature extraction is to group a fixed number of consecutive audio frames into a frame group, apply a DFT to the frame-level short-time features along the time-axis direction, and retain the stable low-frequency components to form the frame-group long-term feature.
Specifically, the flow of frame-group long-term feature extraction is as follows (a code sketch follows this list):
C1: T consecutive audio frames (e.g., T=32, i.e., a frame-group duration of 4096 ms) form a frame group, whose features are V_0, V_1, …, V_{T−1}, where each V is a frame-level short-time feature (16-dimensional vector).
C2: perform a DFT on each dimension [V_{0d}, V_{1d}, …, V_{(T−1)d}], d ∈ [0, 15], of the frame-group features.
C3: take the first m levels of coefficients after the DFT (e.g., m=12): A_0, A_1c, A_1s, …, A_mc, A_ms, where A_0 is the constant (first) coefficient, A_1c, …, A_mc are the cosine-term coefficients, and A_1s, …, A_ms are the sine-term coefficients.
C4: assemble the 16-dimensional m-level DFT coefficients into a new feature [A_0, A_1c, A_1s, …, A_mc, A_ms]; the feature size is then [16 × (2m+1)].
C5: L2-normalize each of the 2m+1 DFT coefficient vectors.
C6: multiply the 2m+1 DFT coefficient vectors by weights; the published weight formula survives only as an image in the source, but per the accompanying text it involves a Bessel function B and the hyperbolic sine function sinh. The result of this flow is the frame-group long-term feature [A′_0, A′_1c, A′_1s, …, A′_mc, A′_ms].
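A sketch of C1–C6 follows. The T=32 group size, m=12 coefficient levels, and per-vector L2 normalization are from the text; extracting cosine/sine terms as the real and negated imaginary DFT parts is standard, while unit weights stand in for the Bessel/sinh weights, whose published formula survives only as an image.

```python
import numpy as np

def frame_group_feature(short_feats: np.ndarray, m: int = 12) -> np.ndarray:
    """C1-C6: DFT each of the 16 dimensions along the time axis over T frames,
    keep A0 plus m cosine/sine coefficient pairs, and L2-normalise each of the
    2m+1 coefficient vectors. Unit weights replace the Bessel/sinh weights."""
    T, D = short_feats.shape                    # e.g. (32, 16)
    spec = np.fft.fft(short_feats, axis=0)      # DFT in the time-axis direction
    vectors = [np.real(spec[0])]                # A0: constant (first) coefficient
    for k in range(1, m + 1):
        vectors.append(np.real(spec[k]))        # Akc: cosine-term coefficients
        vectors.append(-np.imag(spec[k]))       # Aks: sine-term coefficients
    out = []
    for c in vectors:                           # 2m+1 vectors, one per DFT level
        n = np.linalg.norm(c)
        out.append(c / n if n > 0 else c)
    return np.stack(out)                        # shape [2m+1, 16]
```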
An audio long-term fingerprint matching method comprising the following steps:
B1: extract the frame-group long-term features of the two audio files or segments to be matched;
B2: perform frame-group-level matching on the two sets of frame-group long-term features and determine the matching relationship.
Specifically, the flow of frame-group-level matching is as follows (a code sketch follows this list):
D1: denote the two frame-group long-term features as [A_0, A_1c, A_1s, …, A_mc, A_ms] and [B_0, B_1c, B_1s, …, B_mc, B_ms], and denote the possible time offset (i.e., number of audio frames) between the two frame groups as t.
D2: calculate the similarity s of the two frame groups at time offset t; the published formula survives only as an image in the source, but per the accompanying text it is built from inner products, where T is the number of audio frames in the frame group and ⟨u, v⟩ denotes the inner product of vectors u and v.
D3: following step D2, calculate the corresponding similarity s for every possible offset t ∈ [−(T−1), (T−1)] and take the maximum value s_best over all s; s_best is the best similarity between the two frame groups, and the corresponding t_best is the best-similarity offset.
D4: set a similarity threshold s_thrd according to the application requirements; if the best similarity from step D3 is greater than or equal to s_thrd, the two frame groups are considered matched, otherwise they are considered not matched.
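Steps D1–D4 can be sketched as below. Because the published similarity formula survives only as an image, the score here is a hedged stand-in: the DFT shift theorem rotates the cosine/sine coefficient pairs of one feature by the candidate offset t before taking inner products, and the 0.6 threshold is an arbitrary illustration of s_thrd.

```python
import numpy as np

def similarity_at_offset(f1: np.ndarray, f2: np.ndarray,
                         t: int, T: int = 32, m: int = 12) -> float:
    """Stand-in similarity: rotate f2's k-th cosine/sine pair by 2*pi*k*t/T
    (DFT shift theorem), then average inner products with f1's vectors."""
    s = float(np.dot(f1[0], f2[0]))             # A0 is unchanged by a time shift
    for k in range(1, m + 1):
        ang = 2.0 * np.pi * k * t / T
        c2, s2 = f2[2 * k - 1], f2[2 * k]
        c_rot = c2 * np.cos(ang) - s2 * np.sin(ang)   # shifted cosine coefficients
        s_rot = s2 * np.cos(ang) + c2 * np.sin(ang)   # shifted sine coefficients
        s += float(np.dot(f1[2 * k - 1], c_rot) + np.dot(f1[2 * k], s_rot))
    return s / (2 * m + 1)

def match_frame_groups(f1: np.ndarray, f2: np.ndarray,
                       T: int = 32, m: int = 12, s_thrd: float = 0.6):
    """D3-D4: scan all offsets t in [-(T-1), T-1], keep s_best and t_best,
    and compare s_best against the application-chosen threshold."""
    s_best, t_best = max(
        (similarity_at_offset(f1, f2, t, T, m), t)
        for t in range(-(T - 1), T)
    )
    return s_best >= s_thrd, s_best, t_best
```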
The invention overcomes the short duration and instability of traditional audio fingerprints: it extracts spectral sub-band features from each audio frame, computes their variation along the time-axis direction to form long-term features, and obtains the best similarity and offset simultaneously during matching.
The foregoing has shown and described the basic principles, principal features, and advantages of the invention. It will be understood by those skilled in the art that the invention is not limited to the embodiments described above; the embodiments and descriptions above merely illustrate its principles, and various changes and modifications may be made without departing from the spirit and scope of the invention, which is defined by the appended claims and their equivalents.

Claims (2)

1. An audio long-term fingerprint extraction method, characterized in that the extraction method comprises the following steps:
S1: input an audio signal (PCM) and resample it;
S2: frame, window, and DFT-transform (discrete Fourier transform) the resampled audio signal to obtain a frame spectrum;
S3: perform inter-frame smoothing on the frame spectrum to obtain an updated frame spectrum;
S4: extract frame-level short-time features from the updated frame spectrum;
S5: process the frame-level short-time features to extract frame-group long-term features;
in S1, the specific resampling operation is to take the 110 Hz–7 kHz range as the analysis band and, per the Nyquist sampling theorem, set the resampling rate of the input signal to 16 kHz to avoid sampling distortion;
in S2, the specific framing, windowing, and DFT operations are to frame the resampled signal into 4096-sample (256 ms) frames with 50% overlap; after framing, a Hamming window is applied frame by frame and a DFT is performed to obtain the frame spectrum;
in S3, the specific inter-frame smoothing operation is to take a weighted average of the spectra of 5 adjacent frames with a sliding window, obtaining the updated frame spectrum Ŝ_i(f) = Σ_{k=−2}^{2} w_k · S_{i+k}(f), where the sliding window steps one frame at a time;
in S4, the specific steps of frame-level short-time feature extraction are as follows:
A1: divide the frame spectrum into logarithmic frequency-domain sub-bands;
A2: calculate the average spectral energy per sub-band;
A3: L2-normalize the sub-band spectral energies to obtain the frame-level short-time feature;
in A1, dividing the frame spectrum into logarithmic frequency-domain sub-bands means converting each frequency f in the frame spectrum to the logarithmic frequency f′ = log(f); in the logarithmic frequency domain, the target frequency range [110 Hz, 7 kHz] is divided into 16 sub-bands of equal width;
in A2, the sub-band average spectral energy is calculated, i.e., for each audio frame the average spectral energy is computed over each of the 16 frequency sub-bands, forming a 16-dimensional vector;
in A3, L2-normalizing the sub-band spectral energies yields the frame-level short-time feature: the 16-dimensional vector is L2-normalized and recorded as the short-time feature V of the audio frame;
in S5, the specific operation of frame-group long-term feature extraction is to group a fixed number of consecutive audio frames into a frame group, apply a DFT to the frame-level short-time features along the time-axis direction, and retain the stable low-frequency components to form the frame-group long-term feature;
the specific flow of frame-group long-term feature extraction is as follows:
C1: T consecutive audio frames form a frame group, whose features are V_0, V_1, …, V_{T−1}, where each V is a frame-level short-time feature (16-dimensional vector);
C2: perform a DFT on each dimension [V_{0d}, V_{1d}, …, V_{(T−1)d}], d ∈ [0, 15], of the frame-group features;
C3: take the first m levels of coefficients after the DFT (e.g., m=12): A_0, A_1c, A_1s, …, A_mc, A_ms, where A_0 is the constant (first) coefficient, A_1c, …, A_mc are the cosine-term coefficients, and A_1s, …, A_ms are the sine-term coefficients;
C4: assemble the 16-dimensional m-level DFT coefficients into a new frame-group feature [A_0, A_1c, A_1s, …, A_mc, A_ms] of size [16 × (2m+1)];
C5: L2-normalize each of the 2m+1 DFT coefficient vectors;
C6: multiply the 2m+1 DFT coefficient vectors by weights; the published weight formula survives only as an image in the source, but per the accompanying text it involves a Bessel function B and the hyperbolic sine function sinh; the result of this flow is the frame-group long-term feature [A′_0, A′_1c, A′_1s, …, A′_mc, A′_ms].
2. An audio long-term fingerprint matching method, in which long-term fingerprints are extracted by the audio long-term fingerprint extraction method of claim 1, characterized in that the matching method comprises the following steps:
B1: extract the frame-group long-term features of the 2 audio files or segments to be matched;
B2: perform frame-group-level matching on the 2 sets of frame-group long-term features and determine the matching relationship;
the specific flow of frame-group-level matching is as follows:
D1: denote the two frame-group long-term features as [A_0, A_1c, A_1s, …, A_mc, A_ms] and [B_0, B_1c, B_1s, …, B_mc, B_ms], and denote the possible time offset (i.e., number of audio frames) between the two frame groups as t;
D2: calculate the similarity s of the two frame groups at time offset t; the published formula survives only as an image in the source, but per the accompanying text it is built from inner products, where T is the number of audio frames in the frame group and ⟨u, v⟩ denotes the inner product of vectors u and v;
D3: following step D2, calculate the corresponding similarity s for every possible offset t ∈ [−(T−1), (T−1)] and take the maximum value s_best over all s; s_best is the best similarity between the two frame groups, and the corresponding t_best is the best-similarity offset;
D4: set a similarity threshold s_thrd according to the application requirements; if the best similarity from step D3 is greater than or equal to s_thrd, the two frame groups are considered matched, otherwise they are considered not matched.
CN202111068271.7A 2021-09-13 2021-09-13 Audio long-term fingerprint extraction and matching method Active CN113780180B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111068271.7A CN113780180B (en) 2021-09-13 2021-09-13 Audio long-term fingerprint extraction and matching method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111068271.7A CN113780180B (en) 2021-09-13 2021-09-13 Audio long-term fingerprint extraction and matching method

Publications (2)

Publication Number Publication Date
CN113780180A (en) 2021-12-10
CN113780180B (en) 2024-06-25

Family

ID=78842960

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111068271.7A Active CN113780180B (en) 2021-09-13 2021-09-13 Audio long-term fingerprint extraction and matching method

Country Status (1)

Country Link
CN (1) CN113780180B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10089994B1 (en) * 2018-01-15 2018-10-02 Alex Radzishevsky Acoustic fingerprint extraction and matching
CN112035696A (en) * 2020-09-09 2020-12-04 兰州理工大学 Voice retrieval method and system based on audio fingerprints

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102623007B (en) * 2011-01-30 2014-01-01 清华大学 Audio characteristic classification method based on variable duration
CN103116629B (en) * 2013-02-01 2016-04-20 腾讯科技(深圳)有限公司 A kind of matching process of audio content and system
CN103854646B (en) * 2014-03-27 2018-01-30 成都康赛信息技术有限公司 A kind of method realized DAB and classified automatically
US9837101B2 (en) * 2014-11-25 2017-12-05 Facebook, Inc. Indexing based on time-variant transforms of an audio signal's spectrogram
CN105868397B (en) * 2016-04-19 2020-12-01 腾讯科技(深圳)有限公司 Song determination method and device
CN107610715B (en) * 2017-10-10 2021-03-02 昆明理工大学 Similarity calculation method based on multiple sound characteristics
CN111382302B (en) * 2018-12-28 2023-08-11 中国科学院声学研究所 Audio sample retrieval method based on variable speed template

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10089994B1 (en) * 2018-01-15 2018-10-02 Alex Radzishevsky Acoustic fingerprint extraction and matching
CN112035696A (en) * 2020-09-09 2020-12-04 兰州理工大学 Voice retrieval method and system based on audio fingerprints

Also Published As

Publication number Publication date
CN113780180A (en) 2021-12-10

Similar Documents

Publication Publication Date Title
CN107610715B (en) Similarity calculation method based on multiple sound characteristics
CN102982801B (en) Phonetic feature extracting method for robust voice recognition
JP5507596B2 (en) Speech enhancement
CN109147796B (en) Speech recognition method, device, computer equipment and computer readable storage medium
KR101269296B1 (en) Neural network classifier for separating audio sources from a monophonic audio signal
CN109256127B (en) Robust voice feature extraction method based on nonlinear power transformation Gamma chirp filter
CN105872855A (en) Labeling method and device for video files
WO2014153800A1 (en) Voice recognition system
JP2018521366A (en) Method and system for decomposing acoustic signal into sound object, sound object and use thereof
CN103514884A (en) Communication voice denoising method and terminal
CN108564965B (en) Anti-noise voice recognition system
CN107274911A (en) A kind of similarity analysis method based on sound characteristic
CN108682432B (en) Speech emotion recognition device
CN113327626A (en) Voice noise reduction method, device, equipment and storage medium
WO2017045429A1 (en) Audio data detection method and system and storage medium
CN101577116B (en) Extracting method of MFCC coefficients of voice signal, device and Mel filtering method
CN105679321B (en) Voice recognition method, device and terminal
CN109065043A (en) A kind of order word recognition method and computer storage medium
CN110767248B (en) Anti-modulation interference audio fingerprint extraction method
Sanam et al. Enhancement of noisy speech based on a custom thresholding function with a statistically determined threshold
CN109102818B (en) Denoising audio sampling algorithm based on signal frequency probability density function distribution
CN113780180B (en) Audio long-term fingerprint extraction and matching method
CN110379438B (en) Method and system for detecting and extracting fundamental frequency of voice signal
CN110197657B (en) Dynamic sound feature extraction method based on cosine similarity
CN116884431A (en) CFCC (computational fluid dynamics) feature-based robust audio copy-paste tamper detection method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20240531

Address after: No. 41-1, Chuanshan Group, Qingxi Community, Qingxi Street Office, Guichi District, Chizhou City, Anhui Province, 247100

Applicant after: Yu Jiali

Country or region after: China

Address before: 224000 room 203-2, building 6, No. 49, Wengang South Road, Yannan high tech Zone, Yancheng City, Jiangsu Province

Applicant before: Jiangsu huanyalishu Intelligent Technology Co.,Ltd.

Country or region before: China

GR01 Patent grant