CN110955745B - Text hash retrieval method based on deep learning - Google Patents


Info

Publication number
CN110955745B
CN110955745B (application CN201910983514.6A)
Authority
CN
China
Prior art keywords: text, hash, data, trained, retrieval
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910983514.6A
Other languages
Chinese (zh)
Other versions
CN110955745A (en)
Inventor
Shou Zhenyu (寿震宇)
Qian Jiangbo (钱江波)
Xin Yu (辛宇)
Xie Xijiong (谢锡炯)
Chen Haiming (陈海明)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangxi Beluga Information Technology Co., Ltd.
Original Assignee
Ningbo University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ningbo University filed Critical Ningbo University
Priority to CN201910983514.6A
Publication of CN110955745A
Application granted
Publication of CN110955745B
Legal status: Active


Classifications

    • G: PHYSICS
      • G06: COMPUTING; CALCULATING OR COUNTING
        • G06F: ELECTRIC DIGITAL DATA PROCESSING
          • G06F 16/00: Information retrieval; Database structures therefor; File system structures therefor
            • G06F 16/30: Information retrieval of unstructured textual data
              • G06F 16/31: Indexing; Data structures therefor; Storage structures
                • G06F 16/316: Indexing structures
                  • G06F 16/325: Hash tables
              • G06F 16/33: Querying
                • G06F 16/3331: Query processing
              • G06F 16/35: Clustering; Classification
        • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
          • G06N 3/00: Computing arrangements based on biological models
            • G06N 3/02: Neural networks
              • G06N 3/04: Architecture, e.g. interconnection topology
                • G06N 3/044: Recurrent networks, e.g. Hopfield networks
                • G06N 3/045: Combinations of networks
              • G06N 3/08: Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Artificial Intelligence (AREA)
  • Mathematical Physics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses a text hash retrieval method based on deep learning. A bidirectional LSTM model extracts the semantic code corresponding to each item of original vocabulary data in the word-embedding matrix; a text convolutional neural network is then connected in parallel after the bidirectional LSTM model and an attention mechanism is added; a sign function converts the output values of the second fully connected layer into the corresponding hash codes, and the category labels are reconstructed from the hash codes; finally, the vector data closest in Hamming distance to the retrieval-text hash code is found among the text-library hash codes, completing the hash retrieval of the retrieval text data. The advantages of the method are that the hash model's ability to learn from short texts is stronger, the added attention mechanism further improves the expressive power of the features, and the classification layer's reconstruction of the category labels from the hash codes lets the hash model exploit label information more finely while learning the binary codes, so the retrieval accuracy is higher.

Description

Text hash retrieval method based on deep learning
Technical Field
The invention relates to a text hash retrieval method, in particular to a text hash retrieval method based on deep learning.
Background
With the growth of data scale and dimensionality, the cost of semantic retrieval rises sharply, and text hashing has received wide attention as an important means of efficient semantic retrieval. However, most text hashing algorithms use a machine-learning mechanism to map explicit features or keyword features of the text directly to binary codes; such features cannot effectively preserve the semantic similarity between texts, so retrieval with the resulting codes performs poorly.
Disclosure of Invention
The technical problem to be solved by the invention is to provide a text hash retrieval method based on deep learning with higher retrieval precision and efficiency.
The technical solution adopted by the invention to solve the above technical problem is as follows. A text hash retrieval method based on deep learning comprises the following steps:
① acquiring the text-library data to be retrieved, which consists of S items of original vocabulary data, and applying cleaning and word-segmentation preprocessing to the original vocabulary data to obtain the preprocessed text-library data;
② defining the hash model to be trained as follows:
②-1 performing word embedding on the preprocessed text-library data to obtain a word-embedding matrix;
②-2 constructing a bidirectional LSTM model and feeding the word-embedding matrix into it to obtain the semantic code corresponding to each item of original vocabulary data;
②-3 extracting the n-gram features of each semantic code with a text convolutional neural network;
②-4 extracting the attention features of each semantic code with an attention mechanism;
②-5 combining the n-gram features and the attention features of each semantic code by front-back splicing (concatenation) to obtain the comprehensive features of each semantic code;
②-6 setting two first fully connected layers that use the ReLU function as activation, and converting the comprehensive features of each semantic code into higher-order features through these first fully connected layers;
②-7 setting a second fully connected layer that uses the tanh function as activation, inputting the higher-order features of each semantic code into it, and converting the output values of the second fully connected layer into the corresponding hash codes with a sign function;
②-8 setting a classification layer and classifying with the hash codes corresponding to the output values of the second fully connected layer;
③ shuffling the preprocessed text-library data to obtain the shuffled text-library data, dividing it evenly into P batches of text-library data to be trained, with P > 1000, and training the hash model to be trained on the P batches according to a loss function defined by the similarity-preserving principle to obtain the trained hash model;
④ encoding the preprocessed text-library data with the trained hash model to obtain the corresponding text-library hash codes;
⑤ given retrieval text data, applying cleaning and word-segmentation preprocessing to it to obtain the preprocessed retrieval text data, and encoding the preprocessed retrieval text data with the trained hash model to obtain the corresponding retrieval-text hash code;
⑥ searching the text-library hash codes for the vector data closest in Hamming distance to the retrieval-text hash code, and taking the text formed by the corresponding original vocabulary data in the text-library data to be retrieved as the final retrieval result, completing the hash retrieval of the retrieval text data.
In step ③, the hash model to be trained is trained to obtain the trained hash model by the following specific process:
③-1 setting the maximum number of iterations, and defining the loss function according to the similarity-preserving principle as:
$$L=\sum_{i=1}^{N}\Bigl(-\sum_{j=1}^{M} y_{ij}\log \hat{y}_{ij}\Bigr)+\lambda_{1}\sum_{i=1}^{N}\bigl\|\,|a_{i}|-\mathbf{1}\,\bigr\|_{2}^{2}+\lambda_{2}\sum_{i=1}^{N}\operatorname{mean}(a_{i})^{2}+\lambda_{3}\,\|W\|_{2}^{2}$$

where $1\le i\le N$ and $1\le j\le M$; $N=S/P$; $M$ is the number of bits of the hash code corresponding to the output values of the second fully connected layer; $y_i$ is the real label corresponding to the $i$-th item of vocabulary data in each batch of text-library data to be trained, and $\hat{y}_i$ is the corresponding output value of the classification layer; $y_{ij}$ and $\hat{y}_{ij}$ are the $j$-th bits of $y_i$ and $\hat{y}_i$; $a_i$ denotes the output value of the second fully connected layer corresponding to the $i$-th item of vocabulary data in each batch; $W$ denotes the trainable parameters of the classification layer; $\operatorname{mean}(a_i)$ denotes the average over the elements of $a_i$; $\lambda_1$, $\lambda_2$ and $\lambda_3$ are the hyperparameters of the second, third and fourth terms of the loss function; and $\|\cdot\|_2$ is the 2-norm;
③-2 iteratively optimizing the model to be trained with the Adam optimization algorithm according to the loss function, and stopping the iteration once the set maximum number of iterations is reached, to obtain the trained hash model.
In step ③-1, $\lambda_1=0.1$, $\lambda_2=0.1$, $\lambda_3=0.1$.
The maximum number of iterations set in step ③-1 is 50000.
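To make the four terms concrete, the following PyTorch sketch renders one reading of the loss above. The exact functional form of the first two terms, a cross-entropy label-reconstruction term and a quantization term pushing the tanh outputs toward ±1, is reconstructed here from the symbol definitions in step ③-1 and should be treated as an assumption rather than the patent's verbatim formula.

```python
import torch

def hash_loss(logits, a, y, W, lam1=0.1, lam2=0.1, lam3=0.1):
    """Four-term loss of step ③-1 (reconstructed form; an assumption).

    logits: (N, M) classification-layer outputs; a: (N, M) tanh outputs of
    the second fully connected layer; y: (N, M) one-hot real labels;
    W: trainable weight of the classification layer.
    """
    y_hat = torch.softmax(logits, dim=1)
    ce = -(y * torch.log(y_hat + 1e-12)).sum()   # label reconstruction term
    quant = ((a.abs() - 1.0) ** 2).sum()         # λ1 term: push a_i toward ±1
    balance = (a.mean(dim=1) ** 2).sum()         # λ2 term: mean(a_i) ≈ 0
    reg = (W ** 2).sum()                         # λ3 term: squared 2-norm of W
    return ce + lam1 * quant + lam2 * balance + lam3 * reg
```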
Compared with the prior art, the method has the following advantages. First, a bidirectional LSTM model extracts the semantic code corresponding to each item of original vocabulary data in the word-embedding matrix. Then, to strengthen the hash model's ability to learn from short texts, a text convolutional neural network is connected in parallel after the bidirectional LSTM model, and an attention mechanism is added, further improving the expressive power of the features. A hidden layer is added between the fully connected layers and the classification layer to serve as the hash layer; it converts the output values of the second fully connected layer into the corresponding hash codes with a sign function, and the classification layer reconstructs the category labels from the hash codes, so that the hash model can exploit label information more finely while learning the binary codes. Finally, the vector data closest in Hamming distance to the retrieval-text hash code is found among the text-library hash codes, and the text formed by the corresponding original vocabulary data in the text-library data to be retrieved is taken as the final retrieval result, completing the hash retrieval of the retrieval text data. Comparison experiments on a short-text dataset and a common text dataset show that the query accuracy of this text hash retrieval method is improved.
Detailed Description
The present invention is described in further detail below.
A text hash retrieval method based on deep learning comprises the following steps:
① acquiring the text-library data to be retrieved, which consists of S items of original vocabulary data, and applying cleaning and word-segmentation preprocessing to the original vocabulary data to obtain the preprocessed text-library data.
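The patent does not name specific cleaning or segmentation tools for step ①. As a minimal sketch, the example below uses a regular-expression clean-up and the jieba segmenter; both choices, as well as the regex and the stop-word handling, are illustrative assumptions.

```python
import re
import jieba  # a common Chinese word segmenter; an assumed choice

def preprocess(text, stopwords=frozenset()):
    """Step ① sketch: clean a raw text and segment it into words."""
    # Cleaning: keep CJK characters, letters and digits, drop everything else.
    text = re.sub(r"[^\u4e00-\u9fa5A-Za-z0-9]+", " ", text)
    # Word segmentation followed by stop-word filtering.
    return [w for w in jieba.lcut(text) if w.strip() and w not in stopwords]

print(preprocess("基于深度学习的文本哈希检索方法!"))
# e.g. ['基于', '深度', '学习', '的', '文本', '哈希', '检索', '方法']
```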
② defining the hash model to be trained as follows (a code sketch of the full model is given after sub-step ②-8):
②-1 performing word embedding on the preprocessed text-library data to obtain a word-embedding matrix;
②-2 constructing a bidirectional LSTM model and feeding the word-embedding matrix into it to obtain the semantic code corresponding to each item of original vocabulary data;
②-3 extracting the n-gram features of each semantic code with a text convolutional neural network;
②-4 extracting the attention features of each semantic code with an attention mechanism;
②-5 combining the n-gram features and the attention features of each semantic code by front-back splicing (concatenation) to obtain the comprehensive features of each semantic code;
②-6 setting two first fully connected layers that use the ReLU function as activation, and converting the comprehensive features of each semantic code into higher-order features through these first fully connected layers;
②-7 setting a second fully connected layer that uses the tanh function as activation, inputting the higher-order features of each semantic code into it, and converting the output values of the second fully connected layer into the corresponding hash codes with a sign function;
②-8 setting a classification layer and classifying with the hash codes corresponding to the output values of the second fully connected layer.
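One way to realize steps ②-1 through ②-8 as a single network is sketched below in PyTorch. All sizes (vocabulary size, embedding and hidden widths, n-gram window sizes, hash length and number of classes) and the simple additive form of the attention are illustrative assumptions; the patent fixes none of these values.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TextHashModel(nn.Module):
    """Sketch of the hash model of steps ②-1 to ②-8 (sizes are assumptions)."""

    def __init__(self, vocab_size=30000, embed_dim=128, hidden_dim=128,
                 num_bits=64, num_classes=10, ngram_sizes=(2, 3, 4),
                 num_filters=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)              # ②-1
        self.bilstm = nn.LSTM(embed_dim, hidden_dim,
                              batch_first=True, bidirectional=True)   # ②-2
        # ②-3: text CNN over the BiLSTM outputs, one conv per n-gram width
        self.convs = nn.ModuleList(
            [nn.Conv1d(2 * hidden_dim, num_filters, k) for k in ngram_sizes])
        # ②-4: a simple additive attention over the time steps
        self.att = nn.Linear(2 * hidden_dim, 1)
        fused_dim = num_filters * len(ngram_sizes) + 2 * hidden_dim   # ②-5
        # ②-6: two first fully connected layers with ReLU activation
        self.fc1 = nn.Linear(fused_dim, 256)
        self.fc2 = nn.Linear(256, 256)
        # ②-7: second fully connected (hash) layer with tanh activation
        self.hash_fc = nn.Linear(256, num_bits)
        # ②-8: classification layer on top of the hash layer
        self.cls = nn.Linear(num_bits, num_classes)

    def forward(self, tokens):                    # tokens: (B, T) int64 ids
        h, _ = self.bilstm(self.embed(tokens))    # (B, T, 2*hidden_dim)
        c = h.transpose(1, 2)                     # (B, 2H, T) for Conv1d
        ngram = torch.cat([F.relu(conv(c)).max(dim=2).values
                           for conv in self.convs], dim=1)        # ②-3
        w = torch.softmax(self.att(h), dim=1)     # (B, T, 1) attention weights
        att = (w * h).sum(dim=1)                  # ②-4: (B, 2H)
        fused = torch.cat([ngram, att], dim=1)    # ②-5: front-back splicing
        x = F.relu(self.fc2(F.relu(self.fc1(fused))))             # ②-6
        a = torch.tanh(self.hash_fc(x))           # ②-7: relaxed output a_i
        b = torch.sign(a)                         # hash code via sign()
        logits = self.cls(a)                      # ②-8: classification
        return a, b, logits
```

Since sign() has zero gradient almost everywhere, the sketch returns both the relaxed tanh output a and the binarized code b, and feeds a to the classification layer; this relaxation during training is common practice in deep hashing and is an assumption here, not a detail stated by the patent.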
③ shuffling the preprocessed text-library data to obtain the shuffled text-library data, dividing it evenly into P batches of text-library data to be trained, with P > 1000, and training the hash model to be trained on the P batches according to a loss function defined by the similarity-preserving principle to obtain the trained hash model; the specific process is as follows:
③-1 setting the maximum number of iterations to 50000, and defining the loss function according to the similarity-preserving principle as:
$$L=\sum_{i=1}^{N}\Bigl(-\sum_{j=1}^{M} y_{ij}\log \hat{y}_{ij}\Bigr)+\lambda_{1}\sum_{i=1}^{N}\bigl\|\,|a_{i}|-\mathbf{1}\,\bigr\|_{2}^{2}+\lambda_{2}\sum_{i=1}^{N}\operatorname{mean}(a_{i})^{2}+\lambda_{3}\,\|W\|_{2}^{2}$$

where $1\le i\le N$ and $1\le j\le M$; $N=S/P$; $M$ is the number of bits of the hash code corresponding to the output values of the second fully connected layer; $y_i$ is the real label corresponding to the $i$-th item of vocabulary data in each batch of text-library data to be trained, and $\hat{y}_i$ is the corresponding output value of the classification layer; $y_{ij}$ and $\hat{y}_{ij}$ are the $j$-th bits of $y_i$ and $\hat{y}_i$; $a_i$ denotes the output value of the second fully connected layer corresponding to the $i$-th item of vocabulary data in each batch; $W$ denotes the trainable parameters of the classification layer; $\operatorname{mean}(a_i)$ denotes the average over the elements of $a_i$; $\lambda_1$, $\lambda_2$ and $\lambda_3$ are the hyperparameters of the second, third and fourth terms of the loss function, with $\lambda_1=0.1$, $\lambda_2=0.1$, $\lambda_3=0.1$; and $\|\cdot\|_2$ is the 2-norm;
③-2 iteratively optimizing the model to be trained with the Adam optimization algorithm according to the loss function, and stopping the iteration once the set maximum number of iterations is reached, to obtain the trained hash model.
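Assembling step ③ end to end, a minimal training loop could look as follows. It reuses the TextHashModel and hash_loss sketches above; the learning rate and the padded token-tensor data layout are assumptions.

```python
import torch

def train(model, tokens, labels, P, max_iters=50000, lr=1e-3):
    """Step ③ sketch: shuffle, split into P batches, optimise with Adam.

    tokens: (S, T) int64 tensor of padded token ids; labels: (S, M) one-hot.
    """
    perm = torch.randperm(tokens.size(0))   # shuffle the text-library data
    batches = torch.chunk(perm, P)          # P roughly equal batches
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    it = 0
    while it < max_iters:                   # ③-2: stop at the max iteration
        for idx in batches:
            a, _, logits = model(tokens[idx])
            loss = hash_loss(logits, a, labels[idx], model.cls.weight)
            opt.zero_grad()
            loss.backward()
            opt.step()
            it += 1
            if it >= max_iters:
                return model
    return model
```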
④ encoding the preprocessed text-library data with the trained hash model to obtain the corresponding text-library hash codes.
⑤ given retrieval text data, applying cleaning and word-segmentation preprocessing to it to obtain the preprocessed retrieval text data, and encoding the preprocessed retrieval text data with the trained hash model to obtain the corresponding retrieval-text hash code.
⑥ searching the text-library hash codes for the vector data closest in Hamming distance to the retrieval-text hash code, and taking the text formed by the corresponding original vocabulary data in the text-library data to be retrieved as the final retrieval result, completing the hash retrieval of the retrieval text data.
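Steps ④ to ⑥ can be sketched as follows: encode the library and the query with the trained model, then rank the library codes by Hamming distance. Because sign() yields ±1 codes, the Hamming distance reduces to a dot product; the helper names and the toy usage are illustrative assumptions.

```python
import numpy as np
import torch

def encode(model, tokens):
    """Steps ④/⑤ sketch: ±1 hash codes from the trained model."""
    with torch.no_grad():
        _, b, _ = model(tokens)
    return b.cpu().numpy().astype(np.int8)

def hamming_search(db_codes, query_code, k=5):
    """Step ⑥ sketch: indices of the k library codes nearest the query.

    For ±1 codes of length M, Hamming distance = (M - code.query) / 2.
    """
    M = db_codes.shape[1]
    dists = (M - db_codes.astype(np.int32) @ query_code.astype(np.int32)) // 2
    return np.argsort(dists)[:k]

# Illustrative usage with random ±1 codes:
rng = np.random.default_rng(0)
db = np.where(rng.standard_normal((1000, 64)) >= 0, 1, -1).astype(np.int8)
print(hamming_search(db, db[42], k=3))  # index 42 first, at distance 0
```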
Word-embedding processing, construction of a bidirectional LSTM model, extraction of the n-gram features of each semantic code with a text convolutional neural network, extraction of the attention features of each semantic code with an attention mechanism, and combination of the n-gram and attention features of each semantic code by front-back splicing are all well-known technical means in the field and are widely applied in the technical field of text retrieval.

Claims (4)

1. A text hash retrieval method based on deep learning, characterized by comprising the following steps:
① acquiring the text-library data to be retrieved, which consists of S items of original vocabulary data, and applying cleaning and word-segmentation preprocessing to the original vocabulary data to obtain the preprocessed text-library data;
② defining the hash model to be trained as follows:
②-1 performing word embedding on the preprocessed text-library data to obtain a word-embedding matrix;
②-2 constructing a bidirectional LSTM model and feeding the word-embedding matrix into it to obtain the semantic code corresponding to each item of original vocabulary data;
②-3 extracting the n-gram features of each semantic code with a text convolutional neural network;
②-4 extracting the attention features of each semantic code with an attention mechanism;
②-5 combining the n-gram features and the attention features of each semantic code by front-back splicing (concatenation) to obtain the comprehensive features of each semantic code;
②-6 setting two first fully connected layers that use the ReLU function as activation, and converting the comprehensive features of each semantic code into higher-order features through these first fully connected layers;
②-7 setting a second fully connected layer that uses the tanh function as activation, inputting the higher-order features of each semantic code into it, and converting the output values of the second fully connected layer into the corresponding hash codes with a sign function;
②-8 setting a classification layer and classifying with the hash codes corresponding to the output values of the second fully connected layer;
③ shuffling the preprocessed text-library data to obtain the shuffled text-library data, dividing it evenly into P batches of text-library data to be trained, with P > 1000, and training the hash model to be trained on the P batches according to a loss function defined by the similarity-preserving principle to obtain the trained hash model;
④ encoding the preprocessed text-library data with the trained hash model to obtain the corresponding text-library hash codes;
⑤ given retrieval text data, applying cleaning and word-segmentation preprocessing to it to obtain the preprocessed retrieval text data, and encoding the preprocessed retrieval text data with the trained hash model to obtain the corresponding retrieval-text hash code;
⑥ searching the text-library hash codes for the vector data closest in Hamming distance to the retrieval-text hash code, and taking the text formed by the corresponding original vocabulary data in the text-library data to be retrieved as the final retrieval result, completing the hash retrieval of the retrieval text data.
2. The text hash retrieval method based on deep learning of claim 1, wherein in step ③ the hash model to be trained is trained to obtain the trained hash model by the following specific process:
③-1 setting the maximum number of iterations, and defining the loss function according to the similarity-preserving principle as:

$$L=\sum_{i=1}^{N}\Bigl(-\sum_{j=1}^{M} y_{ij}\log \hat{y}_{ij}\Bigr)+\lambda_{1}\sum_{i=1}^{N}\bigl\|\,|a_{i}|-\mathbf{1}\,\bigr\|_{2}^{2}+\lambda_{2}\sum_{i=1}^{N}\operatorname{mean}(a_{i})^{2}+\lambda_{3}\,\|W\|_{2}^{2}$$

wherein $1\le i\le N$ and $1\le j\le M$; $N=S/P$; $M$ is the number of bits of the hash code corresponding to the output values of the second fully connected layer; $y_i$ is the real label corresponding to the $i$-th item of vocabulary data in each batch of text-library data to be trained, and $\hat{y}_i$ is the corresponding output value of the classification layer; $y_{ij}$ and $\hat{y}_{ij}$ are the $j$-th bits of $y_i$ and $\hat{y}_i$; $a_i$ denotes the output value of the second fully connected layer corresponding to the $i$-th item of vocabulary data in each batch; $W$ denotes the trainable parameters of the classification layer; $\operatorname{mean}(a_i)$ denotes the average over the elements of $a_i$; $\lambda_1$, $\lambda_2$ and $\lambda_3$ are the hyperparameters of the second, third and fourth terms of the loss function; and $\|\cdot\|_2$ is the 2-norm;
③-2 iteratively optimizing the model to be trained with the Adam optimization algorithm according to the loss function, and stopping the iteration once the set maximum number of iterations is reached, to obtain the trained hash model.
3. The text hash retrieval method based on deep learning of claim 2, wherein in step ③-1, $\lambda_1=0.1$, $\lambda_2=0.1$, $\lambda_3=0.1$.
4. The text hash retrieval method based on deep learning of claim 2, wherein the maximum number of iterations set in step ③-1 is 50000.
CN201910983514.6A 2019-10-16 2019-10-16 Text hash retrieval method based on deep learning Active CN110955745B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910983514.6A CN110955745B (en) 2019-10-16 2019-10-16 Text hash retrieval method based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910983514.6A CN110955745B (en) 2019-10-16 2019-10-16 Text hash retrieval method based on deep learning

Publications (2)

Publication Number Publication Date
CN110955745A CN110955745A (en) 2020-04-03
CN110955745B true CN110955745B (en) 2022-04-01

Family

ID=69976421

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910983514.6A Active CN110955745B (en) 2019-10-16 2019-10-16 Text hash retrieval method based on deep learning

Country Status (1)

Country Link
CN (1) CN110955745B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111488137B (en) * 2020-04-07 2023-04-18 重庆大学 Code searching method based on common attention characterization learning
CN111737406B (en) * 2020-07-28 2022-11-29 腾讯科技(深圳)有限公司 Text retrieval method, device and equipment and training method of text retrieval model
US20230141853A1 (en) * 2021-11-08 2023-05-11 Oracle International Corporation Wide and deep network for language detection using hash embeddings

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108932314A (en) * 2018-06-21 2018-12-04 南京农业大学 A kind of chrysanthemum image content retrieval method based on the study of depth Hash

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11222253B2 (en) * 2016-11-03 2022-01-11 Salesforce.Com, Inc. Deep neural network model for processing data through multiple linguistic task hierarchies
US11205103B2 (en) * 2016-12-09 2021-12-21 The Research Foundation for the State University Semisupervised autoencoder for sentiment analysis

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108932314A (en) * 2018-06-21 2018-12-04 南京农业大学 A kind of chrysanthemum image content retrieval method based on the study of depth Hash

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Research Progress on Hashing Methods Based on Machine Learning Models; Shou Zhenyu et al.; Wireless Communication Technology; 2018-09-15 (No. 03); full text *
Research on Sentiment Recognition of Online Public Opinion on Sudden Disaster Events Based on Long Short-Term Memory Networks; Jin Zhanyong et al.; Information Science; 2019-05-01 (No. 05); full text *
A Hybrid Deep Learning Method for Open-Source Software Vulnerability Detection; Li Yuancheng et al.; Computer Engineering and Applications; 2018-12-17 (No. 11); full text *

Also Published As

Publication number Publication date
CN110955745A (en) 2020-04-03

Similar Documents

Publication Publication Date Title
CN111611377B (en) Knowledge distillation-based multi-layer neural network language model training method and device
CN110298037B (en) Convolutional neural network matching text recognition method based on enhanced attention mechanism
CN110413785B (en) Text automatic classification method based on BERT and feature fusion
CN109299273B (en) Multi-source multi-label text classification method and system based on improved seq2seq model
CN110059188B (en) Chinese emotion analysis method based on bidirectional time convolution network
CN110263325B (en) Chinese word segmentation system
CN110275936B (en) Similar legal case retrieval method based on self-coding neural network
CN113064959B (en) Cross-modal retrieval method based on deep self-supervision sorting Hash
CN108009148B (en) Text emotion classification representation method based on deep learning
CN110955745B (en) Text hash retrieval method based on deep learning
CN110705296A (en) Chinese natural language processing tool system based on machine learning and deep learning
CN111027595B (en) Double-stage semantic word vector generation method
CN111753081A (en) Text classification system and method based on deep SKIP-GRAM network
CN111143563A (en) Text classification method based on integration of BERT, LSTM and CNN
CN112749274B (en) Chinese text classification method based on attention mechanism and interference word deletion
CN113407660B (en) Unstructured text event extraction method
CN112732864B (en) Document retrieval method based on dense pseudo query vector representation
CN114896388A (en) Hierarchical multi-label text classification method based on mixed attention
CN112306494A (en) Code classification and clustering method based on convolution and cyclic neural network
CN111026887B (en) Cross-media retrieval method and system
CN113128232A (en) Named entity recognition method based on ALBERT and multi-word information embedding
CN110704664B (en) Hash retrieval method
CN110688501B (en) Hash retrieval method of full convolution network based on deep learning
CN114780677A (en) Chinese event extraction method based on feature fusion
CN114138971A (en) Genetic algorithm-based maximum multi-label classification method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20220728

Address after: Room 2202, 22 / F, Wantong building, No. 3002, Sungang East Road, Sungang street, Luohu District, Shenzhen City, Guangdong Province

Patentee after: Shenzhen Dragon Totem Technology Achievement Transformation Co., Ltd.

Address before: No. 818 Fenghua Road, Jiangbei District, Ningbo, Zhejiang 315211

Patentee before: Ningbo University

TR01 Transfer of patent right

Effective date of registration: 20220819

Address after: 530201 room 2239, 22nd floor, No. 1 office building, South plot of Nanning Shimao International Center, No. 17, Pingle Avenue, China (Guangxi) pilot Free Trade Zone, Nanning, Guangxi Zhuang Autonomous Region

Patentee after: Guangxi Zhonglian Zhihao Technology Co., Ltd.

Address before: Room 2202, 22 / F, Wantong building, No. 3002, Sungang East Road, Sungang street, Luohu District, Shenzhen City, Guangdong Province

Patentee before: Shenzhen Dragon Totem Technology Achievement Transformation Co., Ltd.

TR01 Transfer of patent right

Effective date of registration: 20221012

Address after: 530201 No. 1010, 10th Floor, Building A, Changlin Center, No. 17, Geyun Road, Nanning District, China (Guangxi) Pilot Free Trade Zone, Nanning City, Guangxi Zhuang Autonomous Region

Patentee after: Guangxi Beluga Information Technology Co., Ltd.

Address before: 530201 room 2239, 22nd floor, No. 1 office building, South plot of Nanning Shimao International Center, No. 17, Pingle Avenue, China (Guangxi) pilot Free Trade Zone, Nanning, Guangxi Zhuang Autonomous Region

Patentee before: Guangxi Zhonglian Zhihao Technology Co., Ltd.

TR01 Transfer of patent right