CN113780180A - Audio long-time fingerprint extraction and matching method - Google Patents

Audio long-time fingerprint extraction and matching method

Info

Publication number: CN113780180A
Authority: CN (China)
Prior art keywords: frame, audio, time, frequency, long
Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Application number: CN202111068271.7A
Other languages: Chinese (zh)
Other versions: CN113780180B (en)
Inventor: 陈书军 (Chen Shujun)
Current Assignee (the listed assignees may be inaccurate): Yu Jiali
Original Assignee: Jiangsu Huanyalishu Intelligent Technology Co., Ltd.
Priority date (the priority date is an assumption and is not a legal conclusion): 2021-09-13
Filing date: 2021-09-13
Publication date: 2021-12-10
Application filed by Jiangsu Huanyalishu Intelligent Technology Co., Ltd.
Priority to CN202111068271.7A
Publication of CN113780180A: 2021-12-10
Application granted; publication of CN113780180B: 2024-06-25
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2218/00: Aspects of pattern recognition specially adapted for signal processing
    • G06F 2218/02: Preprocessing
    • G06F 2218/04: Denoising
    • G06F 2218/08: Feature extraction
    • G06F 2218/12: Classification; Matching

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Compression, Expansion, Code Conversion, And Decoders (AREA)

Abstract

The invention belongs to the technical field of audio signal processing and specifically relates to an audio long-time fingerprint extraction and matching method comprising the following steps: S1: inputting an audio signal (PCM) and resampling it; S2: performing framing, windowing, and DFT (discrete Fourier transform) on the resampled audio signal to obtain frame spectra; S3: performing inter-frame smoothing on the frame spectra to obtain updated frame spectra; S4: performing frame-level short-time feature extraction on the updated frame spectra; S5: processing the frame-level short-time features and extracting frame-group long-time features. The method overcomes the short-term, unstable nature of traditional audio fingerprints: it extracts the spectral sub-band features of each audio frame, then computes their variation along the time axis to form long-time features, and obtains the optimal similarity and offset during matching.

Description

Audio long-time fingerprint extraction and matching method
Technical Field
The invention belongs to the technical field of audio signal processing, and particularly relates to an audio long-time fingerprint extraction and matching method.
Background
Similar to human biological fingerprints, an audio fingerprint is an effective, robust acoustic feature extracted from a processed audio signal and used to uniquely identify the audio content.
The acoustic features extracted by existing audio fingerprinting techniques are usually simple physical features (zero-crossing rate, spectral peaks, spectral density, etc.) or auditory-perception features (tone, melody, rhythm, etc.); common algorithms include Shazam [1], Chromaprint [2], and Echoprint [3].
The features extracted by current audio fingerprinting algorithms are short-term: each represents only one or a few audio frames, that is, tens to hundreds of milliseconds of audio. When the fingerprints of an audio segment or a whole audio file must be matched, such features risk unstable expression and large computation and storage costs.
Disclosure of Invention
To remedy the defects of the prior art, namely unstable expression and the risk of large computation and storage costs when the fingerprints of an audio segment or a whole audio file must be matched, the invention provides an audio long-time fingerprint extraction and matching method.
The technical scheme adopted by the invention to solve the technical problems is as follows: an audio long-time fingerprint extraction method comprising the following steps:
S1: inputting an audio signal (PCM) and resampling it;
S2: performing framing, windowing, and DFT (discrete Fourier transform) on the resampled audio signal to obtain frame spectra;
S3: performing inter-frame smoothing on the frame spectra to obtain updated frame spectra;
S4: performing frame-level short-time feature extraction on the updated frame spectra;
S5: processing the frame-level short-time features and extracting the frame-group long-time features.
Preferably, in S1, the resampling specifically extracts the frequency range of 110 Hz to 7 kHz as the analysis band and sets the resampling frequency of the input signal to 16 kHz according to the Nyquist sampling theorem, so as to avoid sampling distortion.
Preferably, in S2, the specific operations of framing, windowing, and DFT are to frame the resampled signal with 4096 samples (256 ms) per frame and 50% overlap; after framing, a Hamming window is applied frame by frame and a DFT frequency-domain transform is performed to obtain the frame spectrum, i.e. the relation between frequency and energy.
Preferably, in S3, the specific operation of inter-frame smoothing is to weight-average the spectral data of 5 adjacent frames using a sliding window, which increases spectral stationarity and yields the updated frame spectrum: M = 0.25M_1 + 0.75M_2 + M_3 + 0.75M_4 + 0.25M_5, where the sliding window steps one frame at a time.
Preferably, in S4, the specific operation steps of frame-level short-time feature extraction are as follows:
A1: dividing the frame spectrum into logarithmic-frequency sub-bands;
A2: calculating the average spectral energy of each sub-band;
A3: applying L2 regularization to the sub-band spectral energies to obtain the frame-level short-time feature.
Preferably, in A1, since the human ear's perception of sound is logarithmic, the frame spectrum is divided into logarithmic-frequency sub-bands: the frequency f in the frame spectrum is converted to the logarithmic frequency F = log2(f), and in the logarithmic frequency domain the target range log2(110) ~ log2(7000) is divided into 16 sub-bands of equal width.
Preferably, in A2, the sub-band average spectral energy is calculated: for each audio frame, the average spectral energy is computed over the 16 frequency sub-bands, forming a 16-dimensional vector.
Preferably, in A3, L2 regularization of the sub-band spectral energies yields the frame-level short-time feature: the 16-dimensional vector is L2-regularized, and the resulting short-time feature of the audio frame is denoted V.
Preferably, in S5, the specific operation of frame-group long-time feature extraction is to combine a fixed number of consecutive audio frames into a frame group, apply a DFT to the frame-level short-time features again along the time axis, and retain the stable low-frequency components to form the frame-group long-time feature.
An audio long-time fingerprint matching method comprises the following steps:
B1: extracting frame-group long-time features from the two audio files or segments to be matched;
B2: performing frame-group-level matching on the long-time features of the two frame groups and determining the matching relation.
The invention has the following technical effects and advantages:
1. The invention provides an audio long-time fingerprint extraction and matching method. After the audio signal is resampled, framed, windowed, and transformed by DFT, inter-frame smoothing is applied to the resulting frame spectra, followed by frame-level short-time feature extraction: the spectral sub-band features of the audio frames are extracted, and their variation along the time axis is then computed to form long-time features. This enables fast extraction of audio fingerprints, facilitates subsequent matching of two or more sets of fingerprints, and overcomes the short-term, unstable nature of traditional audio fingerprints.
2. The invention provides an audio long-time fingerprint extraction and matching method in which fingerprints are extracted from different audio signals. After the frame-group long-time features of the audio are rapidly extracted, the similarity between two or more groups of audio signals is computed, from which it is determined whether they match; the optimal similarity and offset are obtained at the same time during matching.
Drawings
The invention will be further explained with reference to the drawings.
FIG. 1 is a flow chart of the fingerprint extraction of an audio signal in the present invention;
FIG. 2 is a schematic diagram of framing in the fingerprint extraction process of the present invention;
FIG. 3 is a flow chart of the frame-level short-term feature extraction in the present invention;
FIG. 4 is a diagram illustrating long-term feature extraction for frame groups according to the present invention;
FIG. 5 is a diagram illustrating the optimal offset and similarity between frame groups in the present invention.
Detailed Description
To make the technical means, features, objectives, and effects of the invention easy to understand, the invention is further described below with reference to specific embodiments.
As shown in figs. 1 to 5, the audio long-time fingerprint extraction method according to the present invention includes the following steps:
S1: inputting an audio signal (PCM) and resampling it;
S2: performing framing, windowing, and DFT (discrete Fourier transform) on the resampled audio signal to obtain frame spectra;
S3: performing inter-frame smoothing on the frame spectra to obtain updated frame spectra;
S4: performing frame-level short-time feature extraction on the updated frame spectra;
S5: processing the frame-level short-time features and extracting the frame-group long-time features.
In an embodiment of the present invention, in S1, the resampling specifically extracts the frequency range of 110 Hz to 7 kHz as the analysis band and sets the resampling frequency of the input signal to 16 kHz according to the Nyquist sampling theorem, so as to avoid sampling distortion.
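As an illustration of S1, a minimal resampling sketch in Python follows; the function name and the use of scipy.signal.resample_poly are assumptions of this sketch, not the patent's implementation.

```python
# Sketch of step S1: resample arbitrary-rate PCM to 16 kHz, so that the
# 110 Hz - 7 kHz analysis band lies safely below the 8 kHz Nyquist frequency.
import numpy as np
from scipy.signal import resample_poly

def resample_to_16k(pcm: np.ndarray, orig_sr: int) -> np.ndarray:
    g = np.gcd(orig_sr, 16000)
    # Polyphase resampling with the reduced up/down ratio.
    return resample_poly(pcm.astype(np.float64), 16000 // g, orig_sr // g)
```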
As an embodiment of the present invention, in S2, the specific operations of framing, windowing, and DFT are to frame the resampled signal with 4096 samples (256 ms) per frame and 50% overlap; after framing, a Hamming window is applied frame by frame and a DFT frequency-domain transform is performed to obtain the frame spectrum, i.e. the relation between frequency and energy.
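A sketch of S2 under the stated parameters (4096-sample frames, 50% overlap, Hamming window). Using the magnitude spectrum as the frequency-energy relation is an assumption; the patent does not say whether magnitude or squared magnitude is meant.

```python
import numpy as np

FRAME_LEN = 4096           # 256 ms at 16 kHz
HOP = FRAME_LEN // 2       # 50% overlap

def frame_spectra(signal: np.ndarray) -> np.ndarray:
    """Return one magnitude spectrum per frame, shape (n_frames, 2049).
    Assumes len(signal) >= FRAME_LEN."""
    window = np.hamming(FRAME_LEN)
    n_frames = 1 + (len(signal) - FRAME_LEN) // HOP
    frames = np.stack([signal[i * HOP:i * HOP + FRAME_LEN] * window
                       for i in range(n_frames)])
    return np.abs(np.fft.rfft(frames, axis=1))
```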
In an embodiment of the present invention, in S3, the specific operation of inter-frame smoothing is to weight-average the spectral data of 5 adjacent frames using a sliding window, which increases spectral stationarity and yields the updated frame spectrum: M = 0.25M_1 + 0.75M_2 + M_3 + 0.75M_4 + 0.25M_5, where the sliding window steps one frame at a time.
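A sketch of S3 with the stated weights 0.25, 0.75, 1, 0.75, 0.25. Dividing by the weight total (3.0) to make a true weighted average is an assumption; the source formula shows only the weighted sum.

```python
import numpy as np

def smooth_spectra(spectra: np.ndarray) -> np.ndarray:
    """5-frame sliding-window smoothing, stepping one frame at a time."""
    w = np.array([0.25, 0.75, 1.0, 0.75, 0.25])
    out = spectra.copy()               # the first/last two frames are left as-is
    for i in range(2, len(spectra) - 2):
        out[i] = w @ spectra[i - 2:i + 3] / w.sum()
    return out
```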
As an embodiment of the present invention, in S4, the specific operation steps of frame-level short-time feature extraction are as follows:
A1: dividing the frame spectrum into logarithmic-frequency sub-bands;
A2: calculating the average spectral energy of each sub-band;
A3: applying L2 regularization to the sub-band spectral energies to obtain the frame-level short-time feature.
In the embodiment of the invention, in A1, since the human ear's perception of sound is logarithmic, the frame spectrum is divided into logarithmic-frequency sub-bands: the frequency f in the frame spectrum is converted to the logarithmic frequency F = log2(f), and in the logarithmic frequency domain the target range log2(110) ~ log2(7000) is divided into 16 sub-bands of equal width.
In A2, the sub-band average spectral energy is calculated: for each audio frame, the average spectral energy is computed over the 16 frequency sub-bands, forming a 16-dimensional vector.
In an embodiment of the present invention, in A3, L2 regularization of the sub-band spectral energies yields the frame-level short-time feature: the 16-dimensional vector is L2-regularized, and the resulting short-time feature of the audio frame is denoted V.
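A sketch covering A1 to A3: 16 equal-width bands on a log2 axis between 110 Hz and 7 kHz, per-band mean energy, then L2 normalization. The bin-to-band assignment and the use of the magnitude spectrum as "energy" are assumed details.

```python
import numpy as np

def short_time_feature(frame_spectrum: np.ndarray, sr: int = 16000,
                       n_fft: int = 4096) -> np.ndarray:
    """Map one frame spectrum to the 16-dimensional short-time feature V."""
    freqs = np.fft.rfftfreq(n_fft, d=1.0 / sr)
    # A1: 16 equal-width sub-bands in the log2(110)..log2(7000) range.
    edges = 2.0 ** np.linspace(np.log2(110.0), np.log2(7000.0), 17)
    v = np.empty(16)
    for b in range(16):                 # A2: per-band average spectral energy
        mask = (freqs >= edges[b]) & (freqs < edges[b + 1])
        v[b] = frame_spectrum[mask].mean() if mask.any() else 0.0
    norm = np.linalg.norm(v)            # A3: L2 regularization
    return v / norm if norm > 0 else v
```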
As an embodiment of the present invention, in S5, the specific operation of frame-group long-time feature extraction is to combine a fixed number of consecutive audio frames into a frame group, apply a DFT to the frame-level short-time features again along the time axis, and retain the stable low-frequency components to form the frame-group long-time feature.
Specifically, the process of extracting the frame-group long-time features is as follows:
C1: A frame group is formed from T consecutive audio frames (for example, T = 32, i.e. a frame-group duration of 4096 ms); the frame group is characterized as [V_0, V_1, ..., V_(T-1)], where each V is a frame-level short-time feature (a 16-dimensional vector).
C2: For each dimension d ∈ [0, 15] of the frame-group feature, apply a DFT to the sequence [V_0d, V_1d, ..., V_(T-1)d].
C3: Take the first m levels of DFT coefficients (for example, m = 12): θ_0, θ_1c, θ_1s, ..., θ_mc, θ_ms, where θ_0 is the constant-term coefficient, θ_1c, ..., θ_mc are the cosine-term coefficients, and θ_1s, ..., θ_ms are the sine-term coefficients.
C4: The 16 dimensions of m-level DFT coefficients form the new frame-group feature [A_0, A_1c, A_1s, ..., A_mc, A_ms], of size 16 × (2m + 1).
C5: Apply L2 regularization to the 2m + 1 DFT coefficient vectors.
C6: Multiply the 2m + 1 DFT coefficient vectors by weights, calculated as follows:
[The weight formulas for the constant, cosine, and sine terms, for i ∈ [1, m], are given in the original only as images and are not recoverable from this text. They involve B, a Bessel function, and sinh, the hyperbolic sine function.] The result of the above process is the frame-group long-time feature [A_0', A_1c', A_1s', ..., A_mc', A_ms'].
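A sketch of C1 to C6 for one frame group. Since the C6 weight formulas survive only as images, the weights are set to 1 here, an explicit simplification. The cosine/sine coefficients are read off the complex rFFT, and constant scale factors cancel under the per-vector L2 normalization of C5.

```python
import numpy as np

def long_term_feature(V: np.ndarray, m: int = 12) -> np.ndarray:
    """V: (T, 16) frame-level features for one frame group (e.g. T = 32).
    Returns the (2m+1, 16) frame-group feature [A_0, A_1c, A_1s, ...].
    Requires m <= T // 2."""
    spec = np.fft.rfft(V, axis=0)       # C2: DFT along the time axis, per dimension
    coeffs = [spec[0].real]             # theta_0: constant term
    for i in range(1, m + 1):           # C3: first m cosine/sine coefficient pairs
        coeffs.append(spec[i].real)     # cosine-term coefficient
        coeffs.append(-spec[i].imag)    # sine-term coefficient
    A = np.stack(coeffs)                # C4: feature of size (2m+1) x 16
    norms = np.linalg.norm(A, axis=1, keepdims=True)
    return A / np.where(norms > 0, norms, 1.0)  # C5: L2-normalize each vector
```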
An audio long-time fingerprint matching method comprises the following steps:
B1: extracting frame-group long-time features from the two audio files or segments to be matched;
B2: performing frame-group-level matching on the long-time features of the two frame groups and determining the matching relation.
Specifically, the flow of frame-group-level matching is as follows:
D1: Denote the long-time features of the two frame groups as [A_0', A_1c', A_1s', ..., A_mc', A_ms'] and [B_0', B_1c', B_1s', ..., B_mc', B_ms'], and denote the possible time offset between the two frame groups (in number of audio frames) as t.
D2: The similarity s of the two frame groups at time offset t is calculated by a formula given in the original only as images; its documented ingredients are T, the number of audio frames in a frame group, and <A'|B'>, the inner product of vectors A' and B'.
D3: Following step D2, compute the corresponding similarity s for every possible offset t ∈ [-(T-1), (T-1)]. The maximum of all s values, denoted s_best, is the best similarity of the two frame groups, and the corresponding offset t_best is the best-match offset.
D4: Set a similarity threshold s_thrd according to the application requirements. If the best similarity obtained in step D3 is greater than or equal to s_thrd, the two frame groups are considered to match; otherwise they are considered not to match.
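A sketch of D1 to D4. Because the exact similarity formula exists in this text only as images, this version reconstructs a plausible s(t) from the documented ingredients (the inner product <A'|B'> and the frame count T) via the time-shift property of Fourier coefficients: shifting a signal by t frames rotates the phase of its i-th coefficient pair by 2πit/T. The circular-shift treatment and the example threshold of 0.8 are assumptions.

```python
import numpy as np

def best_match(A: np.ndarray, B: np.ndarray, T: int = 32, s_thrd: float = 0.8):
    """A, B: (2m+1, 16) long-time features [theta_0, theta_1c, theta_1s, ...].
    Returns (matched, s_best, t_best)."""
    m = (A.shape[0] - 1) // 2
    offsets = np.arange(-(T - 1), T)        # D3: t in [-(T-1), (T-1)]
    s = np.empty(len(offsets))
    for k, t in enumerate(offsets):
        total = float(np.dot(A[0], B[0]))   # constant-term contribution
        for i in range(1, m + 1):
            phi = 2.0 * np.pi * i * t / T   # phase rotation for a t-frame shift
            Ac, As, Bc, Bs = A[2*i - 1], A[2*i], B[2*i - 1], B[2*i]
            # D2: inner product <A'|B'> with B (circularly) shifted by t frames
            total += float(np.dot(Ac, Bc * np.cos(phi) - Bs * np.sin(phi)))
            total += float(np.dot(As, Bs * np.cos(phi) + Bc * np.sin(phi)))
        s[k] = total
    k_best = int(np.argmax(s))
    s_best, t_best = float(s[k_best]), int(offsets[k_best])
    return s_best >= s_thrd, s_best, t_best  # D4: threshold decision
```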
The invention overcomes the short-term, unstable nature of traditional audio fingerprints: it extracts the spectral sub-band features of each audio frame, then computes their variation along the time axis to form long-time features, and obtains the optimal similarity and offset during matching. It can be applied to segment-by-segment matching of audio streams and to whole-file matching of audio files.
Compared with prior art of the same type, the method is distinguished as follows: [the comparison table is given in the original only as an image and is not recoverable from this text].
the foregoing illustrates and describes the principles, general features, and advantages of the present invention. It will be understood by those skilled in the art that the present invention is not limited to the embodiments described above, which are described in the specification and illustrated only to illustrate the principle of the present invention, but that various changes and modifications may be made therein without departing from the spirit and scope of the present invention, which fall within the scope of the invention as claimed. The scope of the invention is defined by the appended claims and equivalents thereof.

Claims (10)

1. An audio long-time fingerprint extraction method, characterized in that the extraction method comprises the following steps:
S1: inputting an audio signal (PCM) and resampling it;
S2: performing framing, windowing, and DFT (discrete Fourier transform) on the resampled audio signal to obtain frame spectra;
S3: performing inter-frame smoothing on the frame spectra to obtain updated frame spectra;
S4: performing frame-level short-time feature extraction on the updated frame spectra;
S5: processing the frame-level short-time features and extracting the frame-group long-time features.
2. The audio long-time fingerprint extraction method according to claim 1, characterized in that: in S1, the resampling specifically extracts the frequency range of 110 Hz to 7 kHz as the analysis band, and sets the resampling frequency of the input signal to 16 kHz according to the Nyquist sampling theorem, so as to avoid sampling distortion.
3. The method according to claim 2, characterized in that: in S2, the specific operations of framing, windowing, and DFT are to frame the resampled signal with 4096 samples (256 ms) per frame and 50% overlap; after framing, a Hamming window is applied frame by frame and a DFT frequency-domain transform is performed to obtain the frame spectrum.
4. The method according to claim 3, characterized in that: in S3, the specific operation of inter-frame smoothing is to weight-average the spectral data of 5 adjacent frames using a sliding window to obtain the updated frame spectrum: M = 0.25M_1 + 0.75M_2 + M_3 + 0.75M_4 + 0.25M_5, where the sliding window steps one frame at a time.
5. The method according to claim 4, characterized in that: in S4, the specific operation steps of frame-level short-time feature extraction are as follows:
A1: dividing the frame spectrum into logarithmic-frequency sub-bands;
A2: calculating the average spectral energy of each sub-band;
A3: applying L2 regularization to the sub-band spectral energies to obtain the frame-level short-time feature.
6. The method according to claim 5, characterized in that: in A1, the frame spectrum is divided into logarithmic-frequency sub-bands: the frequency f in the frame spectrum is converted to the logarithmic frequency F = log2(f), and in the logarithmic frequency domain the target range log2(110) ~ log2(7000) is divided into 16 sub-bands of equal width.
7. The method according to claim 6, characterized in that: in A2, the sub-band average spectral energy is calculated: for each audio frame, the average spectral energy is computed over the 16 frequency sub-bands, forming a 16-dimensional vector.
8. The method according to claim 7, characterized in that: in A3, L2 regularization of the sub-band spectral energies yields the frame-level short-time feature: the 16-dimensional vector is L2-regularized, and the resulting short-time feature of the audio frame is denoted V.
9. The method according to claim 8, characterized in that: in S5, the specific operation of frame-group long-time feature extraction is to form a frame group from a fixed number of consecutive audio frames, apply a DFT to the frame-level short-time features again along the time axis, and retain the stable low-frequency components to form the frame-group long-time feature.
10. An audio long-time fingerprint matching method, characterized in that the matching method comprises the following steps:
B1: extracting frame-group long-time features from the two audio files or segments to be matched;
B2: performing frame-group-level matching on the long-time features of the two frame groups and determining the matching relation.
CN202111068271.7A 2021-09-13 2021-09-13 Audio long-term fingerprint extraction and matching method Active CN113780180B (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202111068271.7A | 2021-09-13 | 2021-09-13 | Audio long-term fingerprint extraction and matching method

Publications (2)

Publication Number | Publication Date
CN113780180A | 2021-12-10
CN113780180B | 2024-06-25

Family

ID=78842960

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111068271.7A Active CN113780180B (en) 2021-09-13 2021-09-13 Audio long-term fingerprint extraction and matching method

Country Status (1)

Country Link
CN (1) CN113780180B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102623007A (en) * 2011-01-30 2012-08-01 清华大学 Audio characteristic classification method based on variable duration
CN103116629A (en) * 2013-02-01 2013-05-22 腾讯科技(深圳)有限公司 Matching method and matching system of audio frequency content
CN103854646A (en) * 2014-03-27 2014-06-11 成都康赛信息技术有限公司 Method for classifying digital audio automatically
US20160148620A1 (en) * 2014-11-25 2016-05-26 Facebook, Inc. Indexing based on time-variant transforms of an audio signal's spectrogram
CN107610715A (en) * 2017-10-10 2018-01-19 昆明理工大学 A similarity calculation method based on multiple sound features
US10089994B1 (en) * 2018-01-15 2018-10-02 Alex Radzishevsky Acoustic fingerprint extraction and matching
US20180349494A1 (en) * 2016-04-19 2018-12-06 Tencent Technology (Shenzhen) Company Limited Song determining method and device and storage medium
CN111382302A (en) * 2018-12-28 2020-07-07 中国科学院声学研究所 Audio sample retrieval method based on variable speed template
CN112035696A (en) * 2020-09-09 2020-12-04 兰州理工大学 Voice retrieval method and system based on audio fingerprints


Also Published As

Publication number Publication date
CN113780180B (en) 2024-06-25

Similar Documents

Publication Publication Date Title
CN107610715B (en) Similarity calculation method based on multiple sound characteristics
US6691090B1 (en) Speech recognition system including dimensionality reduction of baseband frequency signals
CN109147796B (en) Speech recognition method, device, computer equipment and computer readable storage medium
CN109256127B (en) Robust voice feature extraction method based on nonlinear power transformation Gamma chirp filter
CN108564965B (en) Anti-noise voice recognition system
CN112017682B (en) Single-channel voice simultaneous noise reduction and reverberation removal system
CN110931023B (en) Gender identification method, system, mobile terminal and storage medium
CN110189766B (en) Voice style transfer method based on neural network
CN105679321B (en) Voice recognition method, device and terminal
US6701291B2 (en) Automatic speech recognition with psychoacoustically-based feature extraction, using easily-tunable single-shape filters along logarithmic-frequency axis
CN114613389A (en) Non-speech audio feature extraction method based on improved MFCC
CN110970044B (en) Speech enhancement method oriented to speech recognition
Sanam et al. Enhancement of noisy speech based on a custom thresholding function with a statistically determined threshold
CN113744715A (en) Vocoder speech synthesis method, device, computer equipment and storage medium
CN109102818A (en) A kind of denoising audio sample algorithm based on signal frequency probability density function profiles
CN110197657B (en) Dynamic sound feature extraction method based on cosine similarity
CN110379438B (en) Method and system for detecting and extracting fundamental frequency of voice signal
CN113780180A (en) Audio long-time fingerprint extraction and matching method
Hossain et al. Dual-transform source separation using sparse nonnegative matrix factorization
CN116825113A (en) Spectrogram generation method, device, equipment and computer readable storage medium
CN111341327A (en) Speaker voice recognition method, device and equipment based on particle swarm optimization
CN112863517B (en) Speech recognition method based on perceptual spectrum convergence rate
CN115410602A (en) Voice emotion recognition method and device and electronic equipment
CN112309404B (en) Machine voice authentication method, device, equipment and storage medium
CN112233693B (en) Sound quality evaluation method, device and equipment

Legal Events

Date | Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination
TA01 | Transfer of patent application right
  Effective date of registration: 2024-05-31
  Address after: No. 41-1, Chuanshan Group, Qingxi Community, Qingxi Street Office, Guichi District, Chizhou City, Anhui Province, 247100
  Applicant after: Yu Jiali (China)
  Address before: Room 203-2, Building 6, No. 49, Wengang South Road, Yannan High-tech Zone, Yancheng City, Jiangsu Province, 224000
  Applicant before: Jiangsu Huanyalishu Intelligent Technology Co., Ltd. (China)
GR01 | Patent grant