CN111881667B - Sensitive text auditing method - Google Patents

Sensitive text auditing method

Info

Publication number
CN111881667B
CN111881667B (application CN202010722574.5A)
Authority
CN
China
Prior art keywords
feature
network
text
semantic
word segmentation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010722574.5A
Other languages
Chinese (zh)
Other versions
CN111881667A (en)
Inventor
汪洋
武志彦
邓明通
杨梦玲
王康
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Fengshuo Technology Co ltd
Original Assignee
Shanghai Fengshuo Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Fengshuo Technology Co ltd filed Critical Shanghai Fengshuo Technology Co ltd
Priority to CN202010722574.5A priority Critical patent/CN111881667B/en
Publication of CN111881667A publication Critical patent/CN111881667A/en
Application granted granted Critical
Publication of CN111881667B publication Critical patent/CN111881667B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G: PHYSICS
        • G06: COMPUTING; CALCULATING OR COUNTING
            • G06F: ELECTRIC DIGITAL DATA PROCESSING
                • G06F18/00: Pattern recognition
                    • G06F18/20: Analysing
                        • G06F18/24: Classification techniques
                            • G06F18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
                                • G06F18/2415: Classification techniques based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
                • G06F40/00: Handling natural language data
                    • G06F40/20: Natural language analysis
                        • G06F40/205: Parsing
                            • G06F40/216: Parsing using statistical methods
                        • G06F40/279: Recognition of textual entities
                            • G06F40/289: Phrasal analysis, e.g. finite state techniques or chunking
                    • G06F40/30: Semantic analysis
            • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
                • G06N3/00: Computing arrangements based on biological models
                    • G06N3/02: Neural networks
                        • G06N3/04: Architecture, e.g. interconnection topology
                            • G06N3/045: Combinations of networks
                            • G06N3/049: Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
                        • G06N3/08: Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Probability & Statistics with Applications (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Machine Translation (AREA)

Abstract

The invention discloses a sensitive text auditing method. A preprocessing module converts emoticon-type symbols in the original text, dividing them into three sentiment levels (positive, negative, neutral) and mapping them to three distinct tokens. A word segmentation module loads a feature-keyword dictionary and builds a segmenter. A feature extraction module analyzes the sample data, builds a feature tree, traverses the segmentation result, and matches each token to its slot in the feature tree. A prediction network performs the classification. By using dictionary-based segmentation to pick newly coined words in comments out of the text and expressing their influence on category judgment through feature slots, the model accommodates such words automatically without retraining; the method therefore has a wider application range and a higher detection rate, is not limited by new words in online comment data, and can detect more sensitive content.

Description

Sensitive text auditing method
Technical Field
The invention relates to an auditing method, in particular to a sensitive text auditing method.
Background
Large Internet platforms give netizens free and equal space to comment, but abuse of that space breeds online violence, especially in political pages and forums, where some topics degenerate into insult wars that stray from the article's subject. Manually auditing such abusive and unhealthy comments is not feasible, so at present most network platform companies and enterprises, such as Toutiao and Weibo, screen related content with sensitive text auditing technology.
The underlying capability of the sensitive text auditing products offered by industry is mostly built with NLP text classification techniques, and the word and character vectors are mostly constructed from training data. New words coined in online comments through homophones, allusions, and the like are, with high probability, not covered by the training data; the trained word and character vectors cannot adapt dynamically to such changes, and the model must be retrained to maintain auditing performance, at high cost.
Disclosure of Invention
In order to solve the defects in the prior art, the invention provides a sensitive text auditing method.
In order to solve the technical problems, the invention provides the following technical scheme:
the invention discloses a sensitive text auditing method, which comprises:
a preprocessing module, which converts emoticon-type symbols in the original text, divides them into three sentiment levels (positive, negative, neutral), and maps them to three distinct tokens;
a word segmentation module, which loads a feature-keyword dictionary, builds a segmenter, reads the text to be segmented, and completes initialization and comparison over the text;
a feature extraction module, which analyzes sample data, builds a feature tree, traverses the segmentation result, matches each token to its slot in the feature tree, and appends it to a feature list, completing the feature representation of the text sequence;
and a prediction network, comprising a semantic network built on a text classification method, a feature network that captures feature information with a multi-layer bidirectional LSTM network, a fusion network that dynamically corrects the feature values used for classification according to context, and an enhancement loss.
Preferably, the preprocessing module removes meaningless noise data from the original text, using a whitelist (forward) approach to retain the Chinese characters, English letters, digits, and specific symbols in the text; all other characters are treated as noise and eliminated.
Preferably, the feature extraction module statistically analyzes the degree to which verbs, degree adverbs, numerals, referential words, and the like in the text influence category judgment, and builds the hierarchy of the feature tree according to that influence.
Preferably, the segmenter is built by traversing all words in the dictionary to construct a tree with ROOT as the root node and NULL as the terminal nodes.
Preferably, the initialization and comparison of the text proceed as follows:
(1) read the text W to be segmented and finish initializing the cursor objects;
(2) move the cursor and judge whether it has overflowed;
(3) query whether the current value is among the child nodes.
Preferably, the algorithm of the semantic network comprises the following steps:
1) Characterize the text sequence with a character-level embedding method to obtain a vectorized standard sequence;
2) Sequentially feed the feature vectors into subsequent convolution blocks for convolution to extract text information;
3) The pooling layer at the end of the network model receives the output of the last convolution block and finally produces the feature vector representing the text.
Preferably, the specific process of the feature network is as follows:
1) Take the features extracted in step 2 as the network input;
2) Feed them into a bidirectional LSTM network to compute the feature semantic vector V;
3) Take the computed result as the final feature semantic vector.
Preferably, the specific operation process of the fusion network is as follows:
1) Apply an attention mechanism and design a global semantic vector U to compute the weights of the semantic network and the feature network;
2) Aggregate the outputs of the semantic network and the feature network to generate a semantic vector V representing the whole text;
3) Feed the text semantic vector V into the classification network and use negative log likelihood as the loss.
The beneficial effects of the invention are as follows: the sensitive text auditing method uses dictionary-based segmentation to pick newly coined words in comments out of the text and expresses their influence on category judgment through feature slots, so the model accommodates such words automatically without retraining. The method therefore has a wider application range and a higher detection rate, is not limited by new words in online comment data, and can detect more sensitive content.
Drawings
The accompanying drawings are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate the invention and together with the embodiments of the invention, serve to explain the invention. In the drawings:
FIG. 1 is a technical flow diagram of a sensitive text auditing method of the present invention;
FIG. 2 is a suffix tree schematic diagram of a sensitive text auditing method of the present invention;
FIG. 3 is a schematic diagram of a network architecture diagram of a sensitive text auditing method of the present invention;
FIG. 4 is a schematic diagram of a DPCNN algorithm model of a sensitive text auditing method of the present invention;
FIG. 5 is a detailed parameter diagram of a convolution block of the sensitive text auditing method of the present invention.
Detailed Description
The preferred embodiments of the present invention will be described below with reference to the accompanying drawings, it being understood that the preferred embodiments described herein are for illustration and explanation of the present invention only, and are not intended to limit the present invention.
Examples: as shown in fig. 1-4, the sensitive text auditing method of the present invention includes:
1. Preprocessing module
a) Convert the emoticon-type symbols in the original text, dividing them into 3 sentiment levels (positive, negative, neutral) and mapping them to 3 distinct tokens;
b) Remove meaningless noise data from the original text: a whitelist (forward) approach retains the Chinese characters, English letters, digits, and specific symbols in the text, and all other characters are treated as noise and eliminated;
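A minimal Python sketch of this preprocessing step. The emoticon table, the token names `<EMO_POS>`/`<EMO_NEG>`/`<EMO_NEU>`, and the exact whitelist of "specific symbols" are illustrative assumptions, not values given by the patent:

```python
# Sketch of the preprocessing module: emoticon-to-token conversion plus
# whitelist noise filtering. Emoticon table and token names are illustrative.
import re

# Hypothetical 3-level emoticon mapping (positive / negative / neutral).
EMOTICON_LEVEL = {
    "😊": "<EMO_POS>", "👍": "<EMO_POS>",
    "😡": "<EMO_NEG>", "👎": "<EMO_NEG>",
    "😐": "<EMO_NEU>",
}

# Whitelist: CJK characters, ASCII letters, digits, and a few specific symbols;
# everything outside this class is treated as noise.
WHITELIST = re.compile(r"[^\u4e00-\u9fffA-Za-z0-9，。！？,.!?<>_]")

def preprocess(text):
    # 1) map each emoticon to one of the three sentiment tokens
    for emo, token in EMOTICON_LEVEL.items():
        text = text.replace(emo, token)
    # 2) remove every character that is not on the whitelist
    return WHITELIST.sub("", text)
```

The forward (whitelist) approach enumerates what to keep rather than what to drop, so previously unseen noise characters are removed without updating any blacklist.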
2. Word segmentation module
The segmentation model uses longest-match dictionary segmentation, which, compared with Markov-family models, CRF models, and neural network models, offers higher accuracy, high efficiency, and strong extensibility, and can respond quickly to external feedback (e.g., manually maintaining and adding new words, or actively adding entries to a vocabulary blacklist).
a) Load the feature-keyword (referential-word) dictionary and build the segmenter:
1) Traverse all words in the dictionary to build a suffix tree with ROOT as the root node and NULL as the terminal nodes. Example: Fig. 2 shows the suffix tree structure built from a dictionary of four words { true, definite, rational };
b) Read the text W to be segmented and initialize the following objects:
1) the matched-word list, set to an empty list;
2) the scan cursor, pointing to start position 0;
3) the current match-start cursor, pointing to start position 0;
4) the suffix-tree pointer, pointing to the root node (ROOT); the latest-match cursor, initially unset.
c) Move the scan cursor and judge whether it has overflowed (passed the end of the text):
1) On overflow:
i. if the latest-match cursor is set, add the substring from the match start (inclusive) to the latest-match position (exclusive) to the matched-word list, and move the match start to that position;
ii. extract the remaining substring from the match start to the end position; if it is not empty, add it to the matched-word list;
iii. end the procedure.
2) When there is no overflow, continue to step (d).
d) Query whether the character under the scan cursor is among the children of the current tree node:
1) If it is:
i. move the tree pointer to the corresponding child node; if that child has a NULL child (a dictionary word ends here), set the latest-match cursor to the position after the scan cursor;
ii. advance the scan cursor.
2) If it is not:
i. if the latest-match cursor is set, add the substring from the match start (inclusive) to the latest-match position (exclusive) to the matched-word list, move the scan cursor and the match start to that position, and reset the tree pointer to the root node (ROOT);
ii. otherwise, add the single character at the match start to the matched-word list, advance the match start and scan cursor past it, and reset the tree pointer to the root node (ROOT).
e) Jump to step (c).
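The segmentation procedure above can be sketched in Python. A character trie with a `NULL`-style end marker plays the role of the suffix tree; function and variable names are illustrative, and the fallback behavior on a mismatch (emit the longest match found, or a single character) is one standard reading of the steps:

```python
# Sketch of the dictionary-based longest-match segmenter described above.
END = None  # stands in for the NULL terminal node

def build_trie(words):
    """Traverse all dictionary words into a trie; word ends get an END child."""
    root = {}
    for word in words:
        node = root
        for ch in word:
            node = node.setdefault(ch, {})
        node[END] = {}  # NULL terminal marker
    return root

def segment(text, root):
    """Forward longest-match segmentation with single-character fallback."""
    tokens = []          # matched-word list
    start = 0            # current match-start cursor
    cursor = 0           # scan cursor
    node = root          # suffix-tree pointer
    last_match = None    # latest-match cursor
    while True:
        if cursor >= len(text):               # cursor overflow: flush and stop
            if last_match is not None:
                tokens.append(text[start:last_match])
                start = last_match
            if start < len(text):
                tokens.append(text[start:])
            return tokens
        ch = text[cursor]
        if ch in node:                        # extend the current match
            node = node[ch]
            if END in node:                   # a dictionary word ends here
                last_match = cursor + 1
            cursor += 1
        else:                                 # mismatch: emit match or one char
            if last_match is not None:
                tokens.append(text[start:last_match])
                start = cursor = last_match
                last_match = None
            else:
                tokens.append(text[start])
                start += 1
                cursor = start
            node = root
```

Because the trie lives in a plain dict, external feedback (a manually added new word or a blacklist entry) takes effect with a single `build_trie` call, with no model retraining.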
3. Feature extraction
a) Analyze the sample data and build the feature tree: statistically analyze the degree to which verbs, degree adverbs, numerals, referential words, and the like in the text influence category judgment, and build the hierarchy of the feature tree according to that influence;
b) Traverse the segmentation result, match each token to its slot in the feature tree, and append it to the feature list, completing the feature representation of the text sequence;
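A toy sketch of the slot-matching step. The slot names and keywords are illustrative (the patent derives the hierarchy from statistical analysis), and the tree is flattened to one level for brevity:

```python
# Sketch of feature extraction: match each token of a segmentation result to
# its slot in a (here flattened) feature tree. Contents are illustrative.

# slot name -> keywords, listed in assumed descending order of influence.
FEATURE_TREE = {
    "verb":        {"attack", "insult"},
    "degree":      {"very", "extremely"},
    "numeral":     {"one", "two"},
    "referential": {"he", "they"},
}

def extract_features(tokens):
    """Traverse the segmentation result and append (slot, token) pairs."""
    features = []  # feature list describing the text sequence
    for token in tokens:
        for slot, words in FEATURE_TREE.items():
            if token in words:
                features.append((slot, token))
                break  # a token belongs to at most one slot
    return features
```

A newly coined word added to the dictionary and assigned a slot immediately contributes to the feature list, which is how new words influence classification without retraining.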
4. Prediction network
The network structure diagram is shown in fig. 3;
a) Semantic network
The semantic network adopts a text classification method built on deep learning to grasp the topic of the text at the semantic level. This implementation uses the DPCNN algorithm (ALBERT also works); the algorithm model is shown in Fig. 4.
1) Convert character-level embeddings into region embeddings covering one or more words, building a serialized text representation;
2) To capture long-distance, complex patterns while keeping computation efficient, DPCNN introduces two convolution layers (each convolution kernel compresses each word position of the input sequence, together with the context of (n-1)/2 words on each side, into that position's embedding), enriching the representation at each word position;
3) After the vector representation of the text is obtained, the feature vectors are fed sequentially into subsequent convolution blocks for convolution to extract text information; the detailed parameters of a convolution block are shown in Fig. 5;
4) A shortcut residual connection runs from the input to the output of each convolution block; as the network deepens, these shortcuts greatly relieve the vanishing-gradient problem in backpropagation;
5) The pooling layer at the end of the network model receives the output of the last convolution block (all pooling layers in the model use max pooling) and finally produces the feature vector representing the text.
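One convolution block with its shortcut and pooling, as in steps 2)-5), can be sketched in plain numpy. Random weights, shapes only; this is an illustration of the block structure, not the patent's trained model or exact parameterization:

```python
# Minimal numpy sketch of one DPCNN-style block: two equal-width convolution
# layers, a shortcut residual connection from input to output, then stride-2
# max pooling that halves the sequence length.
import numpy as np

rng = np.random.default_rng(0)

def conv1d_same(x, w):
    """x: (seq_len, channels); w: (k, c_in, c_out); 'same' padding."""
    k, c_in, c_out = w.shape
    pad = k // 2
    xp = np.pad(x, ((pad, pad), (0, 0)))
    out = np.zeros((x.shape[0], c_out))
    for t in range(x.shape[0]):
        window = xp[t:t + k]                 # (k, c_in) slice around position t
        out[t] = np.einsum("kc,kcd->d", window, w)
    return out

def dpcnn_block(x, w1, w2):
    """Two convolutions plus a shortcut, then max pooling with stride 2."""
    h = np.maximum(conv1d_same(x, w1), 0)    # ReLU
    h = conv1d_same(h, w2)
    h = h + x                                # shortcut residual connection
    trimmed = h[: h.shape[0] // 2 * 2]       # drop odd tail before pooling
    return trimmed.reshape(-1, 2, h.shape[1]).max(axis=1)

x = rng.normal(size=(8, 4))                  # 8 positions, 4-dim embeddings
w1 = rng.normal(size=(3, 4, 4)) * 0.1        # kernel width n = 3
w2 = rng.normal(size=(3, 4, 4)) * 0.1
y = dpcnn_block(x, w1, w2)
```

Each pooling halves the sequence, so stacking blocks grows the receptive field exponentially while the per-block cost shrinks, which is the efficiency argument behind DPCNN's pyramid shape.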
b) Feature network
The feature network uses a multi-layer (in practice, 3-layer) bidirectional LSTM network to capture feature information. The specific process is as follows:
1) Take the features extracted in step 3 as the network input, embed them, and denote the embedding of the i-th feature as e(f_i);
2) Feed them into the bidirectional LSTM network, where the first layer is computed as follows:
3) The result computed by the final layer is taken as the final feature semantic vector.
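A minimal numpy sketch of this feature network: one bidirectional LSTM pass over feature embeddings e(f_i), whose two final hidden states are concatenated into the feature semantic vector. A single layer with random weights stands in for the patent's 3-layer network; all dimensions and names are illustrative:

```python
# Numpy sketch of a bidirectional LSTM forward pass over feature embeddings.
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_pass(xs, W, U, b, h_dim):
    """Run one LSTM direction; return the hidden state at every step."""
    h = np.zeros(h_dim)
    c = np.zeros(h_dim)
    out = []
    for x in xs:
        z = W @ x + U @ h + b                    # all four gates at once
        i, f, o, g = np.split(z, 4)
        i, f, o, g = sigmoid(i), sigmoid(f), sigmoid(o), np.tanh(g)
        c = f * c + i * g                        # cell state update
        h = o * np.tanh(c)                       # hidden state
        out.append(h)
    return out

d, h_dim, T = 4, 5, 6
xs = [rng.normal(size=d) for _ in range(T)]      # feature embeddings e(f_i)
Wf, Uf, bf = rng.normal(size=(4 * h_dim, d)), rng.normal(size=(4 * h_dim, h_dim)), np.zeros(4 * h_dim)
Wb, Ub, bb = rng.normal(size=(4 * h_dim, d)), rng.normal(size=(4 * h_dim, h_dim)), np.zeros(4 * h_dim)

fwd = lstm_pass(xs, Wf, Uf, bf, h_dim)           # left-to-right pass
bwd = lstm_pass(xs[::-1], Wb, Ub, bb, h_dim)     # right-to-left pass
v_f = np.concatenate([fwd[-1], bwd[-1]])         # feature semantic vector
```

Stacking further layers simply feeds each layer's per-step outputs into the next; the final layer's result is what the patent takes as the feature semantic vector.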
c) Fusion network
The fusion network fuses the values output by the semantic network and the feature network, dynamically correcting the feature values used for classification according to context. The specific process is as follows:
1) Apply an attention mechanism and design a global semantic vector U to compute the weights of the semantic network and the feature network:
the resulting weights (all scalars) represent the importance of the semantic-network output and the feature-network output, respectively, to the final classification;
2) Aggregate the outputs of the semantic network and the feature network to generate the semantic vector v' representing the whole text, which at this point aggregates information from both networks:
3) Feed the text semantic vector v' into the classification network:
p_t = softmax(W_t v' + b_t)  (8)
and use negative log likelihood as the loss:
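The fusion step can be sketched in numpy: scalar attention weights from a global vector U over the two network outputs, aggregation into a combined vector, and a softmax classifier trained with negative log likelihood, as in eq. (8). All weights here are random placeholders and the dot-product scoring is an assumed form of the attention:

```python
# Numpy sketch of the fusion network: attention over the semantic-network and
# feature-network outputs, aggregation, classification, and NLL loss.
import numpy as np

rng = np.random.default_rng(2)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

d, n_classes = 6, 2
v_s = rng.normal(size=d)            # semantic network output
v_f = rng.normal(size=d)            # feature network output
U = rng.normal(size=d)              # global semantic vector

# scalar attention weights for the two networks (sum to 1)
alpha = softmax(np.array([U @ v_s, U @ v_f]))

# aggregate into the text semantic vector v'
v_prime = alpha[0] * v_s + alpha[1] * v_f

# classification head: p_t = softmax(W_t v' + b_t), eq. (8)
W_t, b_t = rng.normal(size=(n_classes, d)), np.zeros(n_classes)
p_t = softmax(W_t @ v_prime + b_t)

y = 1                               # gold label
loss = -np.log(p_t[y])              # negative log likelihood
```

Because the weights are computed per input, the fusion network can lean on the feature network when slot evidence is strong and on the semantic network otherwise, which is the "dynamic correction" the text describes.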
d) Enhancement loss
Equation (8) gives the predicted label of the fusion network; the semantic network and the feature network can each also be connected to a classifier:
p_s = softmax(W_s v_s + b_s)  (10)
p_f = softmax(W_f v_f + b_f)  (11)
Likewise, negative log likelihood is used as the loss for each.
At this point the model's supervision signal comes from the fusion network, yet the semantic network and the feature network also provide valuable supervision signals, so the total loss function can be designed as:
L_total = λ_semantic L_semantic + λ_feature L_feature + λ_fusion L_fusion  (14)
where λ_semantic, λ_feature, and λ_fusion are hyperparameters that balance the weights of the semantic, feature, and fusion networks and satisfy the constraint:
λ_semantic + λ_feature + λ_fusion = 1  (15)
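Equations (14)-(15) amount to a convex combination of the three networks' losses. A small sketch, with λ values that are illustrative choices rather than values given by the patent:

```python
# Sketch of the total enhancement loss, eq. (14)-(15): a convex combination
# of the semantic, feature, and fusion network losses.
lam_semantic, lam_feature, lam_fusion = 0.3, 0.2, 0.5  # illustrative values
assert abs(lam_semantic + lam_feature + lam_fusion - 1.0) < 1e-9  # eq. (15)

def total_loss(l_semantic, l_feature, l_fusion):
    """L_total = λ_semantic·L_semantic + λ_feature·L_feature + λ_fusion·L_fusion."""
    return (lam_semantic * l_semantic
            + lam_feature * l_feature
            + lam_fusion * l_fusion)
```

The auxiliary terms keep gradients flowing directly into the semantic and feature branches even when the fusion network's attention downweights one of them during training.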
finally, it should be noted that: the foregoing description is only a preferred embodiment of the present invention, and the present invention is not limited thereto, but it is to be understood that modifications and equivalents of some of the technical features described in the foregoing embodiments may be made by those skilled in the art, although the present invention has been described in detail with reference to the foregoing embodiments. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (5)

1. A method for sensitive text auditing, comprising:
the preprocessing module converts emoticon-type symbols in the original text, divides them into three sentiment levels (positive, negative, neutral), and maps them to three distinct tokens;
the word segmentation module loads a feature-keyword dictionary, builds a segmenter, reads the text to be segmented, and completes initialization and comparison over the text;
the feature extraction module analyzes sample data, builds a feature tree, traverses the segmentation result, matches each token to its slot in the feature tree, and appends it to a feature list, completing the feature representation of the text sequence;
the prediction network comprises a semantic network built on a text classification method, a feature network that captures feature information with a multi-layer bidirectional LSTM network, a fusion network that dynamically corrects the feature values used for classification according to context, and an enhancement loss;
the algorithm of the semantic network comprises the following steps:
1) characterize the text sequence with a word embedding method to obtain a vectorized standard sequence;
2) sequentially feed the feature vectors into subsequent convolution blocks for convolution to extract text information;
3) the pooling layer at the end of the network model receives the output of the last convolution block and finally produces the feature vector representing the text;
the specific process of the feature network is as follows:
1) take the features extracted in step 2 as the network input;
2) feed them into a bidirectional LSTM network to compute the feature semantic vector V;
3) take the computed result as the final feature semantic vector;
the specific operation process of the fusion network is as follows:
1) apply an attention mechanism and design a global semantic vector U to compute the weights of the semantic network and the feature network;
2) aggregate the outputs of the semantic network and the feature network to generate a semantic vector V representing the whole text;
3) feed the text semantic vector V into the classification network and use negative log likelihood as the loss.
2. The method according to claim 1, wherein the preprocessing module removes meaningless noise data from the original text, using a whitelist (forward) approach to preserve the Chinese characters, English letters, digits, and specific symbols in the text; all other characters are treated as noise and eliminated.
3. The method for auditing sensitive texts according to claim 1, wherein the feature extraction module statistically analyzes the degree to which verbs, degree adverbs, numerals, and referential words in the text influence category judgment, and builds the hierarchy of the feature tree according to that influence.
4. The method of claim 1, wherein the segmenter is built by traversing all words in the dictionary to construct a tree with ROOT as the root node and NULL as the terminal nodes.
5. The method for auditing sensitive texts according to claim 1, wherein the initialization and comparison of the text proceed as follows:
(1) read the text W to be segmented and finish initializing the cursor objects;
(2) move the cursor and judge whether it has overflowed;
(3) query whether the current value is among the child nodes.
CN202010722574.5A 2020-07-24 2020-07-24 Sensitive text auditing method Active CN111881667B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010722574.5A CN111881667B (en) 2020-07-24 2020-07-24 Sensitive text auditing method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010722574.5A CN111881667B (en) 2020-07-24 2020-07-24 Sensitive text auditing method

Publications (2)

Publication Number Publication Date
CN111881667A CN111881667A (en) 2020-11-03
CN111881667B true CN111881667B (en) 2023-09-29

Family

ID=73201597

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010722574.5A Active CN111881667B (en) 2020-07-24 2020-07-24 Sensitive text auditing method

Country Status (1)

Country Link
CN (1) CN111881667B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113723095A (en) * 2020-12-16 2021-11-30 北京沃东天骏信息技术有限公司 Text auditing method and device, electronic equipment and computer readable medium
CN112989817B (en) * 2021-05-11 2021-08-27 中国气象局公共气象服务中心(国家预警信息发布中心) Automatic auditing method for meteorological early warning information
CN116028750B (en) * 2022-12-30 2024-05-07 北京百度网讯科技有限公司 Webpage text auditing method and device, electronic equipment and medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108519970A (en) * 2018-02-06 2018-09-11 平安科技(深圳)有限公司 The identification method of sensitive information, electronic device and readable storage medium storing program for executing in text
CN108647309A (en) * 2018-05-09 2018-10-12 达而观信息科技(上海)有限公司 Chat content checking method based on sensitive word and system
CN111259141A (en) * 2020-01-13 2020-06-09 北京工业大学 Social media corpus emotion analysis method based on multi-model fusion

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11250311B2 (en) * 2017-03-15 2022-02-15 Salesforce.Com, Inc. Deep neural network-based decision network
US10789430B2 (en) * 2018-11-19 2020-09-29 Genesys Telecommunications Laboratories, Inc. Method and system for sentiment analysis

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108519970A (en) * 2018-02-06 2018-09-11 平安科技(深圳)有限公司 The identification method of sensitive information, electronic device and readable storage medium storing program for executing in text
CN108647309A (en) * 2018-05-09 2018-10-12 达而观信息科技(上海)有限公司 Chat content checking method based on sensitive word and system
CN111259141A (en) * 2020-01-13 2020-06-09 北京工业大学 Social media corpus emotion analysis method based on multi-model fusion

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Design of a web sensitive-word filtering and sensitive-text classification system; Li Wei; Computer Knowledge and Technology; Vol. 16, No. 8, pp. 245-247 *

Also Published As

Publication number Publication date
CN111881667A (en) 2020-11-03

Similar Documents

Publication Publication Date Title
CN111881667B (en) Sensitive text auditing method
CN108388651B (en) Text classification method based on graph kernel and convolutional neural network
CN107526785B (en) Text classification method and device
CN112084327B (en) Classification of sparsely labeled text documents while preserving semantics
CN110321563B (en) Text emotion analysis method based on hybrid supervision model
CN109871955B (en) Aviation safety accident causal relation extraction method
CN109214006B (en) Natural language reasoning method for image enhanced hierarchical semantic representation
CN107025284A (en) The recognition methods of network comment text emotion tendency and convolutional neural networks model
CN113255320A (en) Entity relation extraction method and device based on syntax tree and graph attention machine mechanism
CN114757182A (en) BERT short text sentiment analysis method for improving training mode
CN108647206B (en) Chinese junk mail identification method based on chaos particle swarm optimization CNN network
CN112905795A (en) Text intention classification method, device and readable medium
Zhang Research on text classification method based on LSTM neural network model
CN113392209A (en) Text clustering method based on artificial intelligence, related equipment and storage medium
CN112989052B (en) Chinese news long text classification method based on combination-convolution neural network
CN111930892B (en) Scientific and technological text classification method based on improved mutual information function
CN109766523A (en) Part-of-speech tagging method and labeling system
CN113255360A (en) Document rating method and device based on hierarchical self-attention network
CN114528374A (en) Movie comment emotion classification method and device based on graph neural network
CN113609849A (en) Mongolian multi-mode fine-grained emotion analysis method fused with priori knowledge model
CN111523319B (en) Microblog emotion analysis method based on scene LSTM structure network
CN114547299A (en) Short text sentiment classification method and device based on composite network model
CN113761868A (en) Text processing method and device, electronic equipment and readable storage medium
CN116992040A (en) Knowledge graph completion method and system based on conceptual diagram
Lubis et al. spelling checking with deep learning model in analysis of Tweet data for word classification process

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20230818

Address after: Room 1104, 11th Floor, Section E, No. 1515 Zhongshan North 2nd Road, Hongkou District, Shanghai, 200080

Applicant after: Shanghai Fengshuo Technology Co.,Ltd.

Address before: 210019 26F, building a, Fenghuo science and technology building, 88 yunlongshan Road, Jianye District, Nanjing City, Jiangsu Province

Applicant before: NANJING FIBERHOME TELECOMMUNICATION TECHNOLOGIES Co.,Ltd.

GR01 Patent grant