CN113536810A - Method for realizing corpus intention discovery by unsupervised algorithm - Google Patents
Method for realizing corpus intention discovery by an unsupervised algorithm
- Publication number: CN113536810A
- Application number: CN202110594293.0A
- Authority
- CN
- China
- Prior art keywords
- vector
- algorithm
- corpus
- machine learning
- training
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/30—Semantic analysis
- G06F40/35—Discourse or dialogue representation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/30—Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
- G06F16/35—Clustering; Classification
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/213—Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
- G06F18/2135—Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods based on approximation criteria, e.g. principal component analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/23—Clustering techniques
- G06F18/232—Non-hierarchical techniques
- G06F18/2321—Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/20—Natural language analysis
- G06F40/205—Parsing
- G06F40/211—Syntactic parsing, e.g. based on context-free grammar [CFG] or unification grammars
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Artificial Intelligence (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Evolutionary Computation (AREA)
- Evolutionary Biology (AREA)
- Bioinformatics & Computational Biology (AREA)
- Life Sciences & Earth Sciences (AREA)
- Health & Medical Sciences (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Computational Linguistics (AREA)
- General Health & Medical Sciences (AREA)
- Probability & Statistics with Applications (AREA)
- Databases & Information Systems (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
The invention belongs to the field of intention discovery based on machine learning algorithms, and specifically discloses a method for realizing corpus intention discovery by an unsupervised algorithm, comprising the following steps: further pre-training the deep learning pre-trained model BERT on business data; collecting user-generated conversation records from the project; obtaining an embedding vector for each user dialog text with the further pre-trained BERT model; reducing the embedding vector to a low-dimensional vector with a machine learning algorithm, so that it becomes a vector carrying representative feature information; applying a machine learning algorithm to the reduced embedding vectors and tuning the algorithm's relevant hyper-parameters to obtain clustered text information; and handing the clustered corpora to the intention-library maintenance personnel as a reference for adding new intentions. The invention adopts machine learning technology, uses an AI algorithm to realize the discovery of new intentions, and provides the maintenance personnel of the intention library with a reference for adding new intentions.
Description
Technical Field
The invention relates to the field of intention discovery based on an algorithm, in particular to a method for realizing corpus intention discovery by an unsupervised algorithm.
Background
Machine learning is a multi-disciplinary field that draws on probability theory, statistics, approximation theory, convex analysis, algorithmic complexity theory and other disciplines. It studies how a computer can simulate or realize human learning behavior in order to acquire new knowledge or skills and reorganize existing knowledge structures so as to continuously improve its own performance. Machine learning is the core of artificial intelligence and the fundamental way to make computers intelligent; it is applied across all fields of artificial intelligence and relies mainly on induction and synthesis rather than deduction. It is the science of letting computers act without being explicitly programmed. Over the past decade, machine learning has driven rapid progress in self-driving cars, practical speech recognition, effective web search, and understanding of the human genome.
Intention recognition of text corpora is widely applied in search engines, query understanding, human-machine dialogue and other fields. Current machine-learning-based methods for corpus intention recognition and discovery suffer from low accuracy and an inability to determine intention categories automatically.
Disclosure of Invention
The invention aims to provide a method for realizing corpus intention discovery by an unsupervised algorithm, so as to solve the problems raised in the background art.
In order to achieve the above purpose, the invention provides the following technical scheme: a method for realizing corpus intention discovery by an unsupervised algorithm, comprising the following steps:
S1: further pre-training the deep learning pre-trained model BERT on business data;
S2: collecting user-generated conversation records from the project;
S3: obtaining an embedding vector for each user dialog text with the further pre-trained BERT model;
S4: reducing the embedding vector to a low-dimensional vector with a machine learning algorithm, so that it becomes a vector carrying representative feature information;
S5: applying a machine learning algorithm to the reduced embedding vectors and tuning the algorithm's relevant hyper-parameters to obtain clustered text information;
S6: handing the clustered corpora to the intention-library maintenance personnel as a reference for adding new intentions.
Preferably, S1 specifically comprises the steps of: S1a: acquiring a large amount of corpus text produced by customer-service operations; S1b: cleaning the corpus text, where the cleaning process includes removing special characters, removing whitespace characters, and converting character encodings; S1c: feeding the cleaned business corpus into a deep learning BERT model that has been pre-trained on a large general-domain corpus, performing the pre-training operation, and obtaining a business-scenario BERT pre-trained model once pre-training is finished.
Preferably, S3 is a sentence-embedding operation, that is, the BERT model further pre-trained on the business scenario serves as a sentence-to-vector converter. Either the token vector at the CLS position of the BERT output layer is taken as the sentence vector of the input text, or, preferably, all token vectors of the first and last layers of BERT are averaged and that average is taken as the sentence vector of the input text.
Preferably, in S4, reducing the embedding vector to a low-dimensional vector with a machine learning algorithm specifically means performing the dimension-reduction operation on the text sentence vectors obtained in S3 with the PCA principal component analysis algorithm.
Preferably, in S5, applying a machine learning algorithm to the reduced embedding vectors specifically means clustering the low-dimensional sentence vectors with the DBSCAN clustering algorithm; the corpora corresponding to the vectors in one cluster constitute the corpora of one category.
Compared with the prior art, the invention has the beneficial effects that:
the invention provides a method for realizing corpus intention discovery by an unsupervised algorithm, which solves the problem that the intention frequently consulted by a user in the AI services such as text service and outbound robot in the prior art does not exist in an intention library of the robot service. And a machine learning technology is adopted, and an AI algorithm is utilized to realize the discovery of new intentions, so that reference is provided for adding new intentions for the maintenance personnel of the intention database.
Drawings
FIG. 1 is a block flow diagram of the present invention;
fig. 2 is a visualization display diagram after dimension reduction in the embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1-2, the present invention provides a technical solution: a method for realizing corpus intention discovery by an unsupervised algorithm comprises the following steps:
S1: further pre-training the deep learning pre-trained model BERT on business data;
S2: collecting user-generated conversation records from the project;
S3: obtaining an embedding vector for each user dialog text with the further pre-trained BERT model;
S4: reducing the embedding vector to a low-dimensional vector with a machine learning algorithm, so that it becomes a vector carrying representative feature information;
S5: applying a machine learning algorithm to the reduced embedding vectors and tuning the algorithm's relevant hyper-parameters to obtain clustered text information;
S6: handing the clustered corpora to the intention-library maintenance personnel as a reference for adding new intentions.
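Steps S4 and S5 above can be sketched with widely used open-source tools. The snippet below is only a minimal illustration under assumptions, not the patented implementation: the random matrix stands in for the BERT sentence vectors of step S3, and `n_components`, `eps`, and `min_samples` are placeholder hyper-parameters that would be tuned per project.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import DBSCAN

def discover_intents(embeddings, n_components=10, eps=0.5, min_samples=3):
    """S4: PCA dimension reduction; S5: DBSCAN density clustering."""
    reduced = PCA(n_components=n_components).fit_transform(embeddings)
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(reduced)
    return labels  # -1 marks noise; other labels are candidate intent clusters

# Hypothetical stand-in for step S3: in practice these would be
# 768-dimensional BERT sentence vectors of the user utterances.
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(100, 768))
labels = discover_intents(embeddings)
```

Corpora sharing a non-negative label would then be handed to the intention-library maintainers as candidate new intentions (step S6).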
In this embodiment, S1 specifically comprises the steps of:
S1a: acquiring a large amount of corpus text produced by customer-service operations;
S1b: cleaning the corpus text, where the cleaning process includes removing special characters, removing whitespace characters, and converting character encodings;
S1c: feeding the cleaned business corpus into a deep learning BERT model pre-trained on a large general-domain corpus, performing the pre-training operation, and obtaining a business-scenario BERT pre-trained model once pre-training is finished.
In this embodiment, the pre-training of the BERT model is designed with two tasks: Masked Language Model (MLM) training and Next Sentence Prediction (NSP) training. Masked Language Model (MLM): BERT applies a masking operation to the original text data, randomly masking 15% of the tokens (of those 15%, 80% are replaced directly with the [MASK] token, 10% are replaced with a random other word, and 10% keep their original value). Each text is repeated 10 times; the masked text is then fed into the BERT model, a feature vector is extracted at each position, and finally the masked words are predicted from those feature vectors.
With this structure the model learns to predict the covered words from their context. If the model is trained well, that is, when the loss function reaches its minimum, the currently masked word can be predicted from its context, which means the feature vector extracted for each context word is correct. This is somewhat similar to the CBOW structure of Word2Vec.
The masking operation in pre-training is illustrated below. Original text: "Harry Potter is a series of fantasy novels written by British author J. K. Rowling". Randomly masking the original text generates X and Y:
X: [mask] Potter is a series [mask] fantasy novels [mask] by British author J. [mask] Rowling;
Y: Harry, of, written, K. (only the four masked positions participate in the construction of the model's loss function; the other positions do not).
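The 15%/80%/10%/10% masking scheme can be sketched in plain Python. This is an illustrative simplification with a hypothetical `mask_tokens` helper (word-level rather than BERT's actual subword-level, tokenizer-integrated masking):

```python
import random

def mask_tokens(tokens, mask_rate=0.15, seed=None):
    """BERT-style masking: of the ~15% of positions chosen, 80% become
    [MASK], 10% become a random token, 10% are left unchanged."""
    rng = random.Random(seed)
    masked, targets = list(tokens), {}
    for i, tok in enumerate(tokens):
        if rng.random() < mask_rate:
            targets[i] = tok  # only these positions enter the MLM loss
            r = rng.random()
            if r < 0.8:
                masked[i] = "[MASK]"
            elif r < 0.9:
                masked[i] = rng.choice(tokens)  # random replacement
            # else: keep the original token unchanged
    return masked, targets

text = "Harry Potter is a series of fantasy novels written by British author J. K. Rowling"
masked, targets = mask_tokens(text.split(), seed=7)
```

Only the positions recorded in `targets` would contribute to the MLM loss, matching the X/Y example above.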
Next Sentence Prediction (NSP): the CLS position of BERT corresponds to a classification output that judges whether sentence B actually follows sentence A, and this judgment is used to construct a pre-training loss function.
In this embodiment, S3 is a sentence-embedding operation: the BERT model further pre-trained on the business scenario serves as a sentence-to-vector converter. Either the token vector at the CLS position of the BERT output layer is taken as the sentence vector of the input text, or all token vectors of the first and last layers of BERT are averaged and that average is used as the sentence vector of the input text.
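Both pooling strategies reduce to simple array operations. In the sketch below, the random array is a hypothetical stand-in for the per-layer hidden states a BERT implementation would return for one sentence (embedding layer plus 12 transformer layers):

```python
import numpy as np

def sentence_vector(hidden_states, strategy="first_last_avg"):
    """hidden_states: (num_layers, seq_len, hidden_dim) token vectors.
    Returns a single sentence vector for the input text."""
    if strategy == "cls":
        return hidden_states[-1, 0]  # [CLS] token vector of the last layer
    # Average every token vector of the first and last layers.
    pooled = (hidden_states[0] + hidden_states[-1]) / 2.0
    return pooled.mean(axis=0)

# 13 "layers" (embeddings + 12 transformer layers), 16 tokens, 768 dims.
states = np.random.default_rng(1).normal(size=(13, 16, 768))
vec = sentence_vector(states)              # first-and-last-layer average
cls_vec = sentence_vector(states, "cls")   # CLS-position alternative
```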
In this embodiment, in S4, reducing the embedding vector to a low-dimensional vector with a machine learning algorithm specifically means performing the dimension-reduction operation on the text sentence vectors obtained in S3 with the PCA principal component analysis algorithm.
In this embodiment, the PCA principal component analysis algorithm maps n-dimensional features onto k dimensions. These k dimensions are completely new orthogonal features, also called principal components, reconstructed from the original n-dimensional features. The algorithm's principle is matrix decomposition, and the dimension reduction yields dense low-dimensional sentence vectors with meaningful distances.
That is, the PCA algorithm is realized by an SVD decomposition of the covariance matrix:
1) remove the mean, i.e. subtract from each feature its own mean;
2) compute the covariance matrix;
3) compute the eigenvalues and eigenvectors of the covariance matrix via SVD;
4) sort the eigenvalues from largest to smallest, select the largest k, and assemble the corresponding k eigenvectors as column vectors into an eigenvector matrix;
5) transform the data into the new space constructed by the k eigenvectors.
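The five steps above can be sketched directly in NumPy. This is a minimal illustration of the SVD route (projecting the mean-centered data onto the top-k right singular vectors), not a full-featured PCA with whitening or explained-variance reporting:

```python
import numpy as np

def pca_svd(X, k):
    """PCA by SVD of the centered data matrix, following steps 1)-5)."""
    Xc = X - X.mean(axis=0)               # 1) subtract each feature's mean
    # 2)-3) SVD of Xc yields the eigenvectors of the covariance matrix
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    # 4) rows of Vt are already sorted by decreasing singular value
    W = Vt[:k].T                          # top-k eigenvectors as columns
    return Xc @ W                         # 5) project into the new space

X = np.random.default_rng(2).normal(size=(50, 20))
Z = pca_svd(X, 3)                         # 50 samples reduced to 3 dims
```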
In this embodiment, in S5, applying a machine learning algorithm to the reduced embedding vectors specifically means clustering the low-dimensional sentence vectors with the DBSCAN clustering algorithm; the corpora corresponding to the vectors in one cluster constitute the corpora of one category.
In this embodiment, because in practice it is impossible to know in advance how many classes the corpora fall into, DBSCAN is chosen as the clustering algorithm. DBSCAN (Density-Based Spatial Clustering of Applications with Noise) is a typical density-based clustering method. It defines a cluster as the maximal set of density-connected points, can divide sufficiently dense regions into clusters, and can find clusters of arbitrary shape in a noisy spatial data set. By tuning the algorithm's hyper-parameters, the low-dimensional sentence vectors from S4 are clustered, and the corpora corresponding to the vectors in one cluster constitute the corpora of one category.
DBSCAN is a density-based clustering algorithm. It assumes that classes can be determined by how densely the samples are distributed: samples of the same class are closely connected, i.e. around any sample of a class there must exist other samples of that class within a short distance. By grouping closely connected samples into one class we obtain a cluster, and by grouping all sets of closely connected samples into different classes we obtain the final clustering result. DBSCAN describes how tightly a sample set is packed in terms of neighborhoods, using the parameter pair (ε, MinPts): ε is the neighborhood distance threshold of a sample, and MinPts is the threshold on the number of samples within distance ε of a sample.
Assuming the sample set is D = {x1, x2, …, xm}, DBSCAN's density definitions are as follows:
1) ε-neighborhood: for xj ∈ D, its ε-neighborhood is the subset of samples in D whose distance from xj is not greater than ε, i.e. Nε(xj) = {xi ∈ D | distance(xi, xj) ≤ ε}; the size of this subset is written |Nε(xj)|;
2) core object: for any sample xj ∈ D, if its ε-neighborhood Nε(xj) contains at least MinPts samples, i.e. |Nε(xj)| ≥ MinPts, then xj is a core object;
3) directly density-reachable: if xi lies in the ε-neighborhood of xj and xj is a core object, then xi is said to be directly density-reachable from xj. Note that the reverse does not necessarily hold: xj is not directly density-reachable from xi unless xi is also a core object;
4) density-reachable: for xi and xj, xj is said to be density-reachable from xi if there is a sample sequence p1, p2, …, pT with p1 = xi and pT = xj such that each pt+1 is directly density-reachable from pt. Density-reachability is therefore transitive; the intermediate samples p1, p2, …, pT−1 must all be core objects, since only core objects can make other samples directly density-reachable. Note that density-reachability is not symmetric, which follows from the asymmetry of direct density-reachability;
5) density-connected: for xi and xj, if there is a core object sample xk such that both xi and xj are density-reachable from xk, then xi and xj are said to be density-connected. Density-connectedness is symmetric.
Compared with the traditional K-Means algorithm, the biggest difference of DBSCAN is that the number of clusters k need not be given in advance, and its biggest advantage is that it can find clusters of arbitrary shape.
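This difference is easy to demonstrate with scikit-learn's DBSCAN on synthetic data. The blob locations, `eps`, and `min_samples` below are arbitrary illustrative values, not the hyper-parameters of the embodiment:

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Two dense blobs plus scattered noise; DBSCAN needs no cluster count.
rng = np.random.default_rng(3)
blob_a = rng.normal(loc=0.0, scale=0.1, size=(30, 2))
blob_b = rng.normal(loc=5.0, scale=0.1, size=(30, 2))
noise = rng.uniform(-2, 7, size=(5, 2))
X = np.vstack([blob_a, blob_b, noise])

labels = DBSCAN(eps=0.5, min_samples=5).fit_predict(X)
# label -1 marks noise; the remaining labels are the discovered clusters
n_clusters = len(set(labels)) - (1 if -1 in labels else 0)
```

The number of clusters falls out of the density structure of the data; only (ε, MinPts) are supplied.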
To display the effect of the fourth step intuitively, the dimensionality is reduced to three and then visualized as shown in fig. 2. Each point in the figure is a corpus vector; the closer two points lie, the more likely the corresponding corpora belong to one category.
The following example shows the effect after corpus clustering (utterances translated from the original Chinese):
1===============================
['Hello, can you hear me?']
['I can hear you; can you hear me?']
['Yes, I can hear you.']
['I hear you, go ahead.']
['I can hear you.']
2===============================
['How much is the deposit?']
['Are you not just asking me to hand over a deposit?']
['How much deposit do I have to pay?']
['If it is ten thousand, that deposit is too much; normally one would not pay that.']
['One for three, with the deposit not handed over.']
['No one could help me settle the deposit.']
['I will not pay; why should I pay a deposit? If there is no deposit to pay, I can start work immediately.']
['Ten thousand, and a deposit still has to be paid on top.']
3===============================
['There is no part-time position.']
['Can he do it as a part-time job?']
['Uh, where would you work part-time?']
4===============================
['What if I have my own car?']
['Did I not say I have my own car?']
['That is to say, I must use your company's car, right?']
['Could I do it with my own car?']
['What about my own car?']
['I want to join with my own car.']
['Right, since I have my own car.']
['My car is like that.']
['So can we use our own cars?']
['Can I drive my own vehicle?']
['I have a car here.']
['I have a car.']
['I have my own car; I do not need yours.']
['It is not that you give me a car; I bought a car myself.']
['But I have my own car.']
Although embodiments of the present invention have been shown and described, it will be appreciated by those skilled in the art that changes, modifications, substitutions and alterations can be made in these embodiments without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.
Claims (5)
1. A method for realizing corpus intention discovery by an unsupervised algorithm, characterized by comprising the following steps:
S1: further pre-training the deep learning pre-trained model BERT on business data;
S2: collecting user-generated conversation records from the project;
S3: obtaining an embedding vector for each user dialog text with the further pre-trained BERT model;
S4: reducing the embedding vector to a low-dimensional vector with a machine learning algorithm, so that it becomes a vector carrying representative feature information;
S5: applying a machine learning algorithm to the reduced embedding vectors and tuning the algorithm's relevant hyper-parameters to obtain clustered text information;
S6: handing the clustered corpora to the intention-library maintenance personnel as a reference for adding new intentions.
2. The method according to claim 1, wherein S1 specifically comprises the steps of:
S1a: acquiring a large amount of corpus text produced by customer-service operations;
S1b: cleaning the corpus text, where the cleaning process includes removing special characters, removing whitespace characters, and converting character encodings;
S1c: feeding the cleaned business corpus into a deep learning BERT model pre-trained on a large general-domain corpus, performing the pre-training operation, and obtaining a business-scenario BERT pre-trained model once pre-training is finished.
3. The method according to claim 1, wherein S3 is a sentence-embedding operation, that is, the BERT model further pre-trained on the business scenario serves as a sentence-to-vector converter, and the token vector at the CLS position of the BERT output layer is taken as the sentence vector of the input text.
4. The method for realizing corpus intention discovery by an unsupervised algorithm according to claim 1, wherein reducing the embedding vector to a low-dimensional vector in S4 with a machine learning algorithm specifically means performing the dimension-reduction operation on the text sentence vectors obtained in S3 with the PCA principal component analysis algorithm.
5. The method for realizing corpus intention discovery by an unsupervised algorithm according to claim 1, wherein in S5 a machine learning algorithm is applied to the dimension-reduced embedding vectors, specifically the DBSCAN clustering algorithm is used to cluster the low-dimensional sentence vectors, and the corpora corresponding to the vectors in one cluster constitute the corpora of one category.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110594293.0A CN113536810A (en) | 2021-05-28 | 2021-05-28 | Method for realizing corpus intention discovery by unsupervised algorithm |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113536810A true CN113536810A (en) | 2021-10-22 |
Family
ID=78095039
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110594293.0A Withdrawn CN113536810A (en) | 2021-05-28 | 2021-05-28 | Method for realizing corpus intention discovery by unsupervised algorithm |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113536810A (en) |
-
2021
- 2021-05-28 CN CN202110594293.0A patent/CN113536810A/en not_active Withdrawn
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114220417A (en) * | 2021-12-10 | 2022-03-22 | 京东科技信息技术有限公司 | Intention identification method, device and related equipment |
CN115168593A (en) * | 2022-09-05 | 2022-10-11 | 深圳爱莫科技有限公司 | Intelligent dialogue management system, method and processing equipment capable of self-learning |
CN115168593B (en) * | 2022-09-05 | 2022-11-29 | 深圳爱莫科技有限公司 | Intelligent dialogue management method capable of self-learning and processing equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
WW01 | Invention patent application withdrawn after publication | Application publication date: 20211022 |