CN113642341A - Deep adversarial generation method for addressing the scarcity of medical text data - Google Patents

Deep adversarial generation method for addressing the scarcity of medical text data

Info

Publication number
CN113642341A
Authority
CN
China
Prior art keywords
data
generator
language
target language
discriminator
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110741504.9A
Other languages
Chinese (zh)
Inventor
林余楚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenyi Information Technology Hengqin Co ltd
Original Assignee
Shenyi Information Technology Hengqin Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenyi Information Technology Hengqin Co ltd filed Critical Shenyi Information Technology Hengqin Co ltd
Priority to CN202110741504.9A
Publication of CN113642341A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 40/00 Handling natural language data
    • G06F 40/40 Processing or translation of natural language
    • G06F 40/58 Use of machine translation, e.g. for multi-lingual retrieval, for server-side translation for client devices or for real-time translation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 40/00 Handling natural language data
    • G06F 40/40 Processing or translation of natural language
    • G06F 40/42 Data-driven translation
    • G06F 40/44 Statistical methods, e.g. probability models
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/047 Probabilistic or stochastic networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Evolutionary Computation (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computing Systems (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Probability & Statistics with Applications (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Machine Translation (AREA)

Abstract

A deep adversarial generation method for addressing the scarcity of medical text data relates to the field of machine translation. The invention aims to solve the problem that existing medical text labeling methods cannot produce a large amount of accurately labeled medical text, leaving medical text training data scarce. The invention comprises the following step: a medical text source language is input into a trained generative adversarial network GAN to obtain a medical text target language. The generative adversarial network GAN comprises a generator and a discriminator. The discriminator is learned as a neural-network-based machine translation NMT model; the encoder encodes the source language and passes the finished encoding to the decoder; the decoder converts the encoding received from the encoder into the target language. The generator adopts a GRU network model and encodes and decodes source language data to generate a generator target language y'. The method is used to alleviate the scarcity of medical text training data.

Description

Deep adversarial generation method for addressing the scarcity of medical text data
Technical Field
The invention relates to the field of machine translation, and in particular to a deep adversarial generation method for addressing the scarcity of medical text data.
Background
Medical text databases have rich data formats and contain many kinds of information related to clinical medical records, such as medical record cover sheets, disease course records, physical examination results, pathological parameters, test and experiment results, physician diagnosis records, and patient symptoms and complaints. Medical text is used throughout clinical diagnosis and treatment, and it has high research and commercial value in fields such as medical natural language understanding, automatic text summarization, information extraction, information filtering, and information retrieval. Sufficient medical text data can therefore support clinical and scientific research and benefit human health. However, because clinical cases are limited, a medical information database cannot fully reflect the characteristics of every disease; the shortage of medical text data makes medical information incomplete, so alleviating this shortage has become a research focus in the field.
Machine translation of medical text currently uses an encoder-decoder. Taking the translation of English medical text into Chinese medical text as an example, the encoder first encodes the English text into high-dimensional vectors, and the decoder then translates those vectors into Chinese text. However, a conventional general-purpose encoder-decoder trained on an unbalanced data set of massive English medical text and scarce Chinese medical text cannot reach the accuracy achieved for general-language translation. The low translation quality could be addressed by having medical data labeling experts annotate the data, but labeling medical text demands deep domain expertise, the volume of medical documents is very large, and different subfields require different experts, so manual labeling would require a large number of medical annotation experts and waste manpower and material resources. It is therefore currently difficult to obtain large quantities of accurate medical text through manual labeling by medical professionals. For the scarcity of corpus data in the target language, data augmentation through similar language expressions is possible in principle, but the medical domain demands highly professional and strict semantic expression, so similar-expression augmentation cannot achieve high-quality translation. In short, current medical text labeling methods cannot produce large quantities of accurately labeled medical text; the resulting shortage of training data has become an important bottleneck for improving the quality of medical machine translation and an urgent problem to be solved.
Disclosure of Invention
The invention aims to solve the problem that existing medical text labeling methods cannot produce large quantities of accurately labeled medical text, which leaves medical text training data scarce, and provides a deep adversarial generation method for addressing that scarcity.

A deep adversarial generation method for addressing the scarcity of medical text data comprises the following specific process: the medical text source language is input into a trained generative adversarial network GAN to obtain the target language.

The generative adversarial network GAN comprises a generator and a discriminator.

The discriminator is learned as a neural-network-based machine translation NMT model, which adopts an encoder-decoder model; the encoder encodes the source language and passes the finished encoding to the decoder; the decoder converts the encoding received from the encoder into the target language.

The machine translation NMT model is trained from the real labeled data

{(x_m, y_m)}_{m=1}^{M}

from source language to target language; it also evaluates automatically generated data and judges whether that data is real training data.

Here m is the corpus training data index, taking values 1 to M; M is the total number of corpus training data entries; x_m is the m-th sample of the training data, i.e., the source language; y is the discriminator target language; and y_m is the m-th target language.

The generator adopts a GRU network model and encodes and decodes source language data to generate the generator target language y'.

The generator takes the scarce text data as the target data and the corpus-rich text data as the source language for training.

The generator target language y' is produced with temporal information learned inside the GRU through a reset gate and an update gate.
The invention has the following beneficial effects:
The generator of a generative adversarial network GAN produces forged data through a series of models to deceive the discriminator, and the approximate distribution of the real data is inferred from the discriminator's feedback on that forged data. Combined with the ability of neural-network-based machine translation NMT to complete high-quality translation from a source language to a target language, this solves the problem that large quantities of accurately labeled medical text cannot be obtained, which otherwise causes the scarcity of labeled medical text training data.
Drawings
FIG. 1 is a system framework diagram of the present invention;
FIG. 2 is a diagram of a generator model.
Detailed Description
The first embodiment is as follows: the deep adversarial generation method for addressing the scarcity of medical text data of this embodiment comprises the following specific process: a medical text source language is input into a trained generative adversarial network GAN to obtain the medical text target language.

The generative adversarial network GAN comprises a generator and a discriminator.

The discriminator is learned as a neural-network-based machine translation NMT model, which adopts an encoder-decoder model; the encoder encodes the source language and passes the finished encoding to the decoder; the decoder converts the encoding received from the encoder into the target language.

The machine translation NMT model is trained from the real labeled data

{(x_m, y_m)}_{m=1}^{M}

from source language to target language; it also evaluates automatically generated data and judges whether that data is real training data.

Here m is the corpus training data index, taking values 1 to M; M is the total number of corpus training data entries; x_m is the m-th sample of the training data, i.e., the source language; y is the discriminator target language; and y_m is the m-th target language.

The generator adopts a GRU network model and encodes and decodes source language data to generate the generator target language y'.

The generator takes the scarce text data as the target data and the corpus-rich text data as the source language for training.

The generator target language y' is produced with temporal information learned inside the GRU through a reset gate and an update gate.
The specific training process of the generative adversarial network is as follows: the source language x and the discriminator target language y are input into the discriminator, which is trained as an NMT model to gain the ability to distinguish the real target language from generated language. The source language x is then taken as the input of the generator G, which generates the generator target language y'. The source language x and the generator target language y' are fed to the discriminator, which evaluates the generated y', and the generator G obtains a reward r. Training continues until the discriminator can no longer judge whether its input comes from the real target language or was produced by the generator, at which point the trained generative adversarial network GAN is obtained.
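For concreteness, this alternating training can be sketched as follows. This is a hedged, minimal PyTorch-style illustration: the generator and discriminator interfaces, and in particular a generator.sample method returning the sampled translation y' together with its log-probability, are assumptions for illustration, not an API defined by the patent.

```python
# Minimal sketch of the adversarial training loop described above; the
# discriminator is assumed to output a probability in (0, 1).
import torch
import torch.nn as nn

def train_gan(generator, discriminator, data_loader, epochs=10):
    g_opt = torch.optim.Adam(generator.parameters(), lr=1e-4)
    d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-4)
    bce = nn.BCELoss()

    for _ in range(epochs):
        for x, y in data_loader:  # x: source language, y: real target language
            # Discriminator step: tell real pairs (x, y) from generated (x, y').
            with torch.no_grad():
                y_fake, _ = generator.sample(x)  # generator target language y'
            d_real = discriminator(x, y)
            d_fake = discriminator(x, y_fake)
            d_loss = bce(d_real, torch.ones_like(d_real)) \
                   + bce(d_fake, torch.zeros_like(d_fake))
            d_opt.zero_grad(); d_loss.backward(); d_opt.step()

            # Generator step: the discriminator's score on y' acts as the
            # reward r, weighting the log-likelihood of the sampled words.
            # Discrete text blocks direct backpropagation, so a
            # policy-gradient-style update is used (see the sixth embodiment).
            y_fake, log_probs = generator.sample(x)  # log P(y' | x; eta)
            r = discriminator(x, y_fake).detach()    # reward r for y'
            g_loss = -(r * log_probs).mean()
            g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```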
Because the target language is scarce, the method adopts a generator model similar to the discriminator: the existing source language is taken as the generator input, and an encoder-decoder model similar to the NMT model is adopted.

The second embodiment is as follows: when the generator target language y' learns temporal information through the reset gate and the update gate inside the GRU, the state output is:

y_t = (1 - z_t) ⊙ y_{t-1} + z_t ⊙ ỹ_t

where

ỹ_t = tanh(W · [r_t ⊙ y_{t-1}, x_t])

z_t = σ(W_z · [y_{t-1}, x_t])

r_t = σ(W_r · [y_{t-1}, x_t])

Here σ is the sigmoid activation function; t denotes the time step; r_t is the reset gate; z_t is the update gate; y_t is the state output; y_{t-1} is the state output at time t-1; ỹ_t is the candidate hidden state; x_t is the source language input at time t; W_z is the update gate network weight (the gate value z_t lies between 0 and 1, and a larger value means more of the current input information is memorized); W_r is the reset gate network weight, which selects the important dimensions of x_t and gives higher weight to important dimensional inputs; and W is the network weight of the tanh activation function, which maps the data into the range -1 to 1.
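These gate equations can be checked with a small, self-contained NumPy sketch of one GRU step; the weight shapes and the random test input are illustrative assumptions.

```python
# One GRU step implementing the equations above (illustrative shapes).
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def gru_step(y_prev, x_t, W_z, W_r, W):
    concat = np.concatenate([y_prev, x_t])
    z_t = sigmoid(W_z @ concat)  # update gate: how much new information to keep
    r_t = sigmoid(W_r @ concat)  # reset gate: which past dimensions to reuse
    # Candidate hidden state built from the reset-gated past and the input.
    y_tilde = np.tanh(W @ np.concatenate([r_t * y_prev, x_t]))
    # Update gate interpolates between the previous state and the candidate.
    return (1.0 - z_t) * y_prev + z_t * y_tilde

hidden, inp = 8, 4
rng = np.random.default_rng(0)
W_z = rng.standard_normal((hidden, hidden + inp))
W_r = rng.standard_normal((hidden, hidden + inp))
W = rng.standard_normal((hidden, hidden + inp))
y_t = gru_step(np.zeros(hidden), rng.standard_normal(inp), W_z, W_r, W)
```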
The third embodiment is as follows: given a source sentence x = (x_1, ..., x_l), the neural-network-based machine translation model NMT optimizes the conditional probability of the target sentence y = (y_1, ..., y_J):

P(y | x; θ) = ∏_{j=1}^{J} P(y_j | y_{<j}, x; θ)

where x_i denotes a single input word in the source language, l is the total length of the source sentence, y_j is the corresponding target language word, J is the length of the target language sentence, j ∈ [1, J], θ is the parameter of the model, and y_{<j} is the translation context preceding position j.

The probability P(y | x) in this embodiment is defined by a neural-network-based encoder-decoder framework.
The fourth embodiment is as follows: the parameters of the NMT model are trained by maximizing the likelihood of the real labeled data

{(x_m, y_m)}_{m=1}^{M}

namely:

θ̂ = argmax_θ ∑_{m=1}^{M} log P(y_m | x_m; θ)

where m is the corpus training data index taking values 1 to M, M is the total number of corpus training data entries, m ∈ [1, M], y_m is the m-th target language, and x_m is the m-th source language sample.
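Given the sentence_log_prob sketch above, the corpus-level likelihood is simply a sum over the M labeled pairs; corpus here is an assumed list of (x_m, y_m) pairs.

```python
# L(theta) = sum_{m=1}^{M} log P(y_m | x_m; theta), to be maximized.
def corpus_log_likelihood(model, corpus):
    return sum(sentence_log_prob(model, x_m, y_m) for x_m, y_m in corpus)
```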
the fifth concrete implementation mode: the overall countermeasure model for generating the countermeasure network is as follows:
assuming that the labeled data samples are from the data true probability distribution ptrueGenerating a data sample obedience learning probability distribution pgenThen, the overall generated countermeasure network objective function constructed is:
Figure BDA0003141594450000044
where G is the generator, D is the discriminator, D (x) P (y | x, θ) is the discriminator evaluating the probability that the data sample is authentic, (1-D (G (x)) is part of the benefit expectation of the discriminator, V (D, G) is the overall objective function, D (G (x)) is the probability estimate of the language generated by the discriminator to the generator;
when G is fixed, V (D, G) represents the benefit expectation of the arbiter and consists of log (D (x)) and log (1-D (G (x));
when D is fixed, V (D, G) represents the producer profit expectation, i.e., log (1-D (G (x))).
In this embodiment, for the discriminator, when the data comes from the training data, a larger D(x) is expected; when the data comes from the generator, a larger (1 - D(G(x))) benefit is expected. From the discriminator's perspective, therefore, the overall generative adversarial network must maximize the objective function V(D, G). From the generator's perspective, the discriminator is to be deceived by the generated language data so that it yields a higher probability estimate D(G(x)); the generator therefore needs to minimize the objective function V(D, G). During training, the generator and the discriminator are trained adversarially until the model converges. Once the model converges, the discriminator can hardly distinguish whether language data was produced by the generator or sampled from the real training data.
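On a batch of samples, V(D, G) can be estimated as sketched below; d_real and d_fake are assumed to be the discriminator's output probabilities on real pairs and on generated pairs, respectively.

```python
# V(D, G) = E[log D(x)] + E[log(1 - D(G(x)))]; the discriminator ascends
# this value while the generator descends it (pushing D(G(x)) toward 1).
import torch

def value_function(d_real, d_fake):
    return torch.log(d_real).mean() + torch.log(1.0 - d_fake).mean()
```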
The sixth embodiment is as follows: the conditional probability optimized for the generator target language y' is:

P(y' | x; η, r) = ∏_a P(y'_a | y'_{<a}, x; η, r)

This is the generator's optimization objective and should be maximized: the parameter η is adjusted using the input data and the optimization direction given by r, so as to maximize the probability of the generator target language.

Here y' is the generator target language, i.e., the medical text target language produced from the source language x; η is the model parameter; a is the index over the words of the generated translation y'; r is the score the discriminator assigns to the y' produced by the generator, i.e., the reward given to the generator; and y'_a is any word in the translated target sentence.
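A hedged sketch of the resulting generator update follows: the discriminator's score r weights the log-likelihood of the sampled translation y' (a REINFORCE-style estimator), so the parameter η moves in the direction indicated by r. The generator.sample interface is an illustrative assumption, not an API defined by the patent.

```python
# Reward-weighted generator loss: maximize r * log P(y' | x; eta),
# i.e., minimize its negative.
import torch

def generator_loss(generator, discriminator, x):
    y_prime, log_probs = generator.sample(x)  # y' and sum_a log P(y'_a | y'_<a, x; eta)
    r = discriminator(x, y_prime).detach()    # reward r assessed by the discriminator
    return -(r * log_probs).mean()
```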

Claims (10)

1. A deep adversarial generation method for addressing the scarcity of medical text data, characterized by comprising the following specific process: inputting a medical text source language into a trained generative adversarial network GAN to obtain a target language;
the generative adversarial network GAN comprises a generator and a discriminator;
the discriminator is learned as a neural-network-based machine translation NMT model, which adopts an encoder-decoder model; the encoder encodes the source language and passes the finished encoding to the decoder; the decoder converts the encoding received from the encoder into the target language;
the machine translation NMT model is trained from the real labeled data
{(x_m, y_m)}_{m=1}^{M}
from source language to target language; it also evaluates automatically generated data and judges whether that data is real training data;
wherein m is the corpus training data index taking values 1 to M, M is the total number of corpus training data entries, x_m is the m-th sample of the training data, i.e., the source language, y is the discriminator target language, and y_m is the m-th target language;
the generator adopts a GRU network model and encodes and decodes source language data to generate a generator target language y';
the generator takes the scarce text data as the target data and the corpus-rich text data as the source language for training;
the generator target language y' learns temporal information inside the GRU through a reset gate and an update gate.
2. The method of claim 1, characterized in that the trained generative adversarial network GAN is obtained as follows:
a source language x and a discriminator target language y are input into the discriminator, which is trained as an NMT model to gain the ability to distinguish the target language from generated language; the source language x is taken as the input of the generator G, which generates the generator target language y'; the generator target language y' is taken as the input of the discriminator, which evaluates the generated y', and the generator G obtains a reward r; training iterates until the discriminator cannot judge whether its input comes from the real target language or was generated by the generator, yielding the trained generative adversarial network GAN.
3. The method of claim 2, characterized in that when the generator target language y' learns temporal information through the reset gate and the update gate inside the GRU, the state output is:

y_t = (1 - z_t) ⊙ y_{t-1} + z_t ⊙ ỹ_t

where σ is the sigmoid activation function, t denotes the time step, z_t is the update gate, y_{t-1} is the state output at time t-1, y_t is the state output at time t, and ỹ_t is the candidate hidden state.
4. The method of claim 3, characterized in that:

ỹ_t = tanh(W · [r_t ⊙ y_{t-1}, x_t])

where r_t is the reset gate, x_t is the source language input at time t, and W is the tanh activation function network weight.
5. The method of claim 4, characterized in that z_t = σ(W_z · [y_{t-1}, x_t]);
wherein W_z is the update gate network weight.
6. The method of claim 5, characterized in that r_t = σ(W_r · [y_{t-1}, x_t]);
wherein W_r is the reset gate network weight.
7. The method of claim 6, characterized in that, given a source sentence x = (x_1, ..., x_l), the neural-network-based machine translation model NMT optimizes the conditional probability of the discriminator target language sentence y = (y_1, ..., y_J):

P(y | x; θ) = ∏_{j=1}^{J} P(y_j | y_{<j}, x; θ)

where θ is the parameter of the model, y_{<j} is the translation context preceding position j, x_i denotes a single input word in the source language, l is the total length of the given source sentence, y_j is the corresponding target language word, J is the length of the target language sentence, and j ∈ [1, J].
8. The method of claim 7, characterized in that the parameters of the NMT model are trained by maximizing the likelihood of the real labeled data

{(x_m, y_m)}_{m=1}^{M}

as follows:

θ̂ = argmax_θ ∑_{m=1}^{M} log P(y_m | x_m; θ)

where m is the corpus training data index taking values 1 to M, M is the total number of corpus training data entries, m ∈ [1, M], y_m is the m-th discriminator target language, and x_m is the m-th source language sample.
9. The method of claim 8, characterized in that the objective function of the generative adversarial network is:

min_G max_D V(D, G) = E_{x~p_true}[log D(x)] + E_{x~p_gen}[log(1 - D(G(x)))]

wherein the labeled data samples come from the true data probability distribution p_true, the generated data samples obey the learned probability distribution p_gen, G is the generator, D is the discriminator, D(x) is the discriminator's estimate of the probability that a data sample is real, (1 - D(G(x))) is part of the discriminator's benefit expectation, V(D, G) is the overall objective function, and D(G(x)) is the discriminator's probability estimate for language produced by the generator;

when G is fixed, V(D, G) represents the discriminator's benefit expectation and consists of log(D(x)) and log(1 - D(G(x)));

when D is fixed, V(D, G) represents the generator's benefit expectation, i.e., log(1 - D(G(x))).
10. The method of claim 9, characterized in that the optimized conditional probability of the generator target language y' is:

P(y' | x; η, r) = ∏_a P(y'_a | y'_{<a}, x; η, r)

where y' is the generator target language, i.e., the corresponding target language produced from the source language x, η is the model parameter, a is the index over the generated translation y', and y'_a is any word in the translated target sentence.
CN202110741504.9A 2021-06-30 2021-06-30 Deep adversarial generation method for addressing the scarcity of medical text data Pending CN113642341A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110741504.9A 2021-06-30 2021-06-30 Deep adversarial generation method for addressing the scarcity of medical text data

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110741504.9A 2021-06-30 2021-06-30 Deep adversarial generation method for addressing the scarcity of medical text data

Publications (1)

Publication Number Publication Date
CN113642341A (en) 2021-11-12

Family

ID=78416448

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110741504.9A 2021-06-30 2021-06-30 Deep adversarial generation method for addressing the scarcity of medical text data

Country Status (1)

Country Link
CN (1) CN113642341A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107368475A * 2017-07-18 2017-11-21 中译语通科技(北京)有限公司 Machine translation method and system based on a generative adversarial neural network
CN108829684A * 2018-05-07 2018-11-16 内蒙古工业大学 Mongolian-Chinese neural machine translation method based on a transfer learning strategy
CN110085215A * 2018-01-23 2019-08-02 中国科学院声学研究所 Language model data enhancement method based on a generative adversarial network
CN110598221A * 2019-08-29 2019-12-20 内蒙古工业大学 Method for improving Mongolian-Chinese translation quality by constructing a Mongolian-Chinese parallel corpus using a generative adversarial network
CN110993094A * 2019-11-19 2020-04-10 中国科学院深圳先进技术研究院 Intelligent auxiliary diagnosis method and terminal based on medical images
CN112287117A * 2020-10-30 2021-01-29 云南电网有限责任公司电力科学研究院 Asset management knowledge base construction method based on automatic data generation

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107368475A * 2017-07-18 2017-11-21 中译语通科技(北京)有限公司 Machine translation method and system based on a generative adversarial neural network
CN110085215A * 2018-01-23 2019-08-02 中国科学院声学研究所 Language model data enhancement method based on a generative adversarial network
CN108829684A * 2018-05-07 2018-11-16 内蒙古工业大学 Mongolian-Chinese neural machine translation method based on a transfer learning strategy
CN110598221A * 2019-08-29 2019-12-20 内蒙古工业大学 Method for improving Mongolian-Chinese translation quality by constructing a Mongolian-Chinese parallel corpus using a generative adversarial network
CN110993094A * 2019-11-19 2020-04-10 中国科学院深圳先进技术研究院 Intelligent auxiliary diagnosis method and terminal based on medical images
CN112287117A * 2020-10-30 2021-01-29 云南电网有限责任公司电力科学研究院 Asset management knowledge base construction method based on automatic data generation

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
JAKUB LANGR et al.: "GANs in Action: Deep Learning with Generative Adversarial Networks", 8 October 2019 *

Similar Documents

Publication Publication Date Title
CN109359294B (en) Ancient Chinese translation method based on neural machine translation
CN109472024B (en) Text classification method based on bidirectional circulation attention neural network
CN111738007B (en) Chinese named entity identification data enhancement algorithm based on sequence generation countermeasure network
CN109960804B (en) Method and device for generating topic text sentence vector
CN111222340B (en) Breast electronic medical record entity recognition system based on multi-standard active learning
US20220067307A1 (en) System and method for training multilingual machine translation evaluation models
CN110991190B (en) Document theme enhancement system, text emotion prediction system and method
CN111414770B (en) Semi-supervised Mongolian neural machine translation method based on collaborative training
CN110322959B (en) Deep medical problem routing method and system based on knowledge
CN115293128A (en) Model training method and system based on multi-modal contrast learning radiology report generation
CN111368082A (en) Emotion analysis method for domain adaptive word embedding based on hierarchical network
CN115062104A (en) Knowledge prompt-fused legal text small sample named entity identification method
CN110298044A (en) A kind of entity-relationship recognition method
CN113723103A (en) Chinese medical named entity and part-of-speech combined learning method integrating multi-source knowledge
CN111291558B (en) Image description automatic evaluation method based on unpaired learning
CN115130465A (en) Method and system for identifying knowledge graph entity annotation error on document data set
CN115345165A (en) Specific entity identification method oriented to label scarcity or distribution unbalance scene
CN114757188A Standard medical text rewriting method based on a generative adversarial network
Elbedwehy et al. Efficient Image Captioning Based on Vision Transformer Models.
CN113536799A (en) Medical named entity recognition modeling method based on fusion attention
CN112989803A (en) Entity link model based on topic vector learning
CN115964475A (en) Dialogue abstract generation method for medical inquiry
CN113642341A (en) Deep adversarial generation method for addressing the scarcity of medical text data
CN116151260A (en) Diabetes named entity recognition model construction method based on semi-supervised learning
CN114692615A (en) Small sample semantic graph recognition method for small languages

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 519000 5-404, floor 5, Yunxi Valley Digital Industrial Park, No. 168, Youxing Road, Xiangzhou District, Zhuhai City, Guangdong Province (block B, Meixi Commercial Plaza) (centralized office area)

Applicant after: Shenyi information technology (Zhuhai) Co.,Ltd.

Address before: 519031 room 409, building 18, Hengqin Macao Youth Entrepreneurship Valley, No. 1889, Huandao East Road, Hengqin new area, Zhuhai, Guangdong

Applicant before: Shenyi information technology (Hengqin) Co.,Ltd.