CN106446951A - Singular value selection-based integrated learning device - Google Patents
- Publication number
- CN106446951A CN106446951A CN201610856560.6A CN201610856560A CN106446951A CN 106446951 A CN106446951 A CN 106446951A CN 201610856560 A CN201610856560 A CN 201610856560A CN 106446951 A CN106446951 A CN 106446951A
- Authority
- CN
- China
- Prior art keywords
- sample
- singular value
- sigma
- singular
- alpha
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2411—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
Abstract
The invention discloses an integrated learning device (ensemble learner) based on singular value selection. The realization of the integrated learning device comprises the following steps: normalization preprocessing is performed on the training sample set; sampling with replacement is performed using the Bootstrap random sampling method to generate M new sample sets; partial singular value decomposition (SVD) is performed on each sample in the M new sample sets to obtain each sample's singular values and left and right singular vectors; each time, k singular values and their corresponding left and right singular vectors are randomly drawn to build a 2D SVM (support vector machine) base learner, and the M new sample sets are trained to obtain M 2D SVM base learners; the base learners are combined according to the relative majority voting criterion to obtain the integrated learning device, which is then used to classify and identify the samples to be classified. The invention alleviates the problems caused when existing classifiers stretch matrix-form object data into high-dimensional vectors: heavy computation, the curse of dimensionality, loss of the data's structural information, and destruction of its inherent correlations.
Description
Technical field
The present invention relates to the technical fields of machine learning and image processing, and in particular to a classification technique whose input samples are two-dimensional and higher-order tensor data. It can be used for target detection, pattern recognition, and activity recognition.
Background technology
With the rapid development of the Internet and computer technology, the amount of information that mankind has faced within a few short decades is comparable to the sum of the information of all past ages. The continuous growth of data has brought major transformations to people's work, life, and thinking. This growth is mainly reflected in two aspects: first, the scale of data keeps increasing; second, the structure of data becomes increasingly complex. Compared with traditional paper text, information formats such as web pages, black-and-white images, color images, medical images, satellite remote sensing images, and video cannot be captured by simple representations such as vectors; more dimensions are needed to represent the features of the data objects, which in turn increases the data dimensionality and the amount of information. It can thus be said that "big data" is the descriptor of the information age.
Different classification algorithms may attain different classification performance, but no single classification algorithm obtains good results on all applications. Regarding classifier design, researchers in data mining, machine learning, statistics, pattern recognition, and neurobiology have so far proposed a variety of classification methods, such as expert systems, association rules, decision trees, Bayesian classifiers, support vector machines, neural networks, and genetic algorithms. These methods have been applied in different fields and have contributed to the development of scientific research.
Although the proposed classification methods have achieved some success in certain fields, most of the learning methods mentioned above typically represent data in vector form. For a learning algorithm based on the vector representation to learn from two-dimensional and higher-order tensor data, the tensor-form data usually has to be unfolded into vectors first and then learned with a traditional learning algorithm. Taking black-and-white images as an example, simply stretching an image into a vector for processing ignores the intrinsic structural information of the data, such as the relative positions of the pixels in the original image; it destroys the spatial structure of the original data and loses the correlation information between data structures. If the original data is large, processing it in vector form leads to an increase in dimensionality and may cause the "curse of dimensionality" or the "small-sample, high-dimensionality" problem, so that the resulting classifier performs poorly.
Therefore, in view of the above problems, it is necessary to provide a 2D SVM ensemble learning method that does not break the spatial structure of the original data and that can exploit the advantages of ensemble learning to improve classifier accuracy.
Content of the invention
In order to overcome the above shortcomings of the prior art, the invention provides an integrated learning device based on singular value selection. By randomly selecting part of the singular values of each sample, the diversity among the base classifiers is improved, yielding an ensemble result with strong generalization ability.
The technical solution adopted by the present invention to solve the technical problems is an integrated learning device based on singular value selection, comprising the following steps:
Step 1: normalization preprocessing is performed on the training sample set.
Step 2: sampling with replacement is performed on the normalized training sample set using the Bootstrap random sampling method, producing M new sample sets.
Step 3: partial SVD is performed on each sample in the M new sample sets to obtain each sample's singular values and left and right singular vectors.
Step 4: each time, k singular values and their corresponding left and right singular vectors are randomly drawn to generate a 2D SVM base learner; the M new sample sets are trained respectively, yielding M 2D SVM base classifiers.
Step 5: the base classifiers are combined according to the relative majority voting criterion to obtain the integrated learning device, which is used to classify and identify the samples to be classified.
Compared with the prior art, the positive effects of the present invention are:
(1) The invention alleviates the problems caused when existing classifiers stretch matrix-form object data (such as images or EEG) into high-dimensional vectors: heavy computation, the curse of dimensionality, loss of the data's structural information, and destruction of its inherent correlations.
(2) Through the partial singular value decomposition of each sample and the random selection of a certain number of the resulting singular values and singular vectors, the invention compresses and denoises the samples to a certain extent.
(3) Through singular value selection, the invention constructs base classifiers of high diversity, thereby creating an ensemble with strong generalization ability.
Brief description
Examples of the present invention will be described with reference to the accompanying drawings, wherein:
Fig. 1 is the flow chart of the present invention.
Specific embodiment
An integrated learning device based on singular value selection, as shown in Fig. 1, comprises the following steps:
Step 1: the training sample set D = {(X_1, y_1), (X_2, y_2), ..., (X_n, y_n)} is normalized to obtain the preprocessed training sample set D'. The normalization preprocessing adopts 0-1 (min-max) standardization, a linear transformation of the original sample data that maps the result into the interval [0, 1]. The transfer function is as follows:

X_i' = (X_i − repmat{min(X_i)}) / (max(X_i) − min(X_i))

where X_i, X_i' ∈ R^{p×q} is the i-th sample and y_i ∈ Y, Y = {C_1, C_2, ..., C_N} is the class label corresponding to X_i, X_i'; it can be seen that the samples X_i, X_i' are represented in the form of two-dimensional matrices. max(X_i) denotes taking the maximum of the elements of training sample X_i, min(X_i) denotes taking the minimum of the elements of X_i, and repmat{min(X_i)} ∈ R^{p×q} denotes the sample-minimum matrix, whose elements are all min(X_i). Finally, all preprocessed training samples X_i' and their labels y_i constitute the preprocessed training sample set D'.
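As an illustrative sketch only (not part of the original disclosure), the 0-1 normalization above can be written in Python/NumPy; the function name `normalize_01` is our own, and broadcasting the scalar minimum over the matrix plays the role of repmat{min(X_i)}:

```python
import numpy as np

def normalize_01(X):
    """0-1 (min-max) normalization of a 2-D sample matrix.

    Linearly maps every element of X into [0, 1]:
        X' = (X - min(X)) / (max(X) - min(X))
    NumPy broadcasting spreads the scalar minimum over the whole
    p x q matrix, matching the repmat{min(X_i)} formulation.
    """
    lo, hi = X.min(), X.max()
    if hi == lo:                       # constant matrix: avoid division by zero
        return np.zeros_like(X, dtype=float)
    return (X - lo) / (hi - lo)

# usage on a toy 2x2 sample
X = np.array([[2.0, 4.0],
              [6.0, 10.0]])
Xn = normalize_01(X)
```

After the transformation the smallest element of the sample is 0 and the largest is 1, as required by the patent's preprocessing step.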
Step 2: sampling with replacement is performed on the normalized training sample set using the Bootstrap random sampling method, finally generating M new sample sets D_1, D_2, ..., D_M.

The training sample set is uniformly randomly sampled with replacement, obtaining new sample sets of the same size as the original set. Because the uniform sampling is with replacement, the probability that a given sample is never selected in n draws is

p = (1 − 1/n)^n,

and as n → ∞, p → e^{−1} ≈ 0.368. Therefore each base learner uses only about 63.2% of the samples of the initial training set; the remaining roughly 36.8% of the samples can be used as a validation set to carry out an "out-of-bag" (OOB) estimate of the learner's generalization performance. This has been proven to be an unbiased estimate, so the ensemble learning algorithm does not need cross-validation or a separate test set to obtain an unbiased estimate of the test error.
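A minimal sketch of the Bootstrap sampling step in Python/NumPy (illustrative, not from the patent; the function name `bootstrap_sets` is our own). It also returns the out-of-bag indices, whose fraction approaches e^{-1} ≈ 0.368 for large n:

```python
import numpy as np

def bootstrap_sets(n_samples, M, seed=None):
    """Bootstrap sampling with replacement: for each of the M new
    sample sets, draw n_samples indices uniformly at random with
    replacement (the in-bag set), and also return the out-of-bag
    (OOB) indices never drawn, usable as a validation set for the
    corresponding base learner."""
    rng = np.random.default_rng(seed)
    sets = []
    for _ in range(M):
        in_bag = rng.integers(0, n_samples, size=n_samples)
        oob = np.setdiff1d(np.arange(n_samples), in_bag)
        sets.append((in_bag, oob))
    return sets

# For large n the OOB fraction approaches (1 - 1/n)^n -> e^{-1} ~ 0.368
n = 10_000
(in_bag, oob), = bootstrap_sets(n, M=1, seed=0)
oob_frac = len(oob) / n
```

Each base learner would then be trained on `X[in_bag]` and validated on `X[oob]`.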
Step 3: partial SVD is performed on each sample in the sample sets to obtain each sample's singular values and left and right singular vectors:

(1) First a full SVD is performed on sample X_i, with decomposed form X_i = UΣV^T, where X_i ∈ R^{p×q} is a two-dimensional matrix, U ∈ R^{p×p} is the matrix formed by the left singular vectors of X_i, Σ ∈ R^{p×q} is the diagonal matrix formed by the singular values of X_i, and V^T ∈ R^{q×q} is the matrix formed by the right singular vectors of X_i.

(2) In most cases, the larger singular values of a matrix already represent its essential information very well. Using the r largest singular values (i.e., the front r larger singular values) to approximately describe sample X_i therefore compresses the information in the matrix to a certain degree. The partial singular value decomposition takes the form

X_i ≈ Σ_{p=1}^{r} σ_ip u_ip v_ip^T,

where σ_ip, u_ip, v_ip are the p-th singular value of X_i and its corresponding left and right singular vectors.
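The partial (rank-r) SVD can be sketched with NumPy's full SVD followed by truncation (an illustrative sketch; the function name `partial_svd` is our own — `np.linalg.svd` returns the singular values in descending order, so truncation keeps the largest r):

```python
import numpy as np

def partial_svd(X, r):
    """Partial (rank-r) SVD: keep the r largest singular values and
    their left/right singular vectors, so that
        X ~ sum_{p=1}^{r} sigma_p * u_p * v_p^T."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U[:, :r], s[:r], Vt[:r, :]

# The rank-1 truncation of this matrix keeps only its dominant direction.
X = np.array([[3.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0]])
U, s, Vt = partial_svd(X, r=1)
X1 = (U * s) @ Vt          # rank-1 approximation of X
```

Here the dominant singular value is 3, and the rank-1 reconstruction retains only the corresponding component of X.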
Step 4: each time, k singular values and their corresponding left and right singular vectors are randomly drawn to generate a 2D SVM base learner:

4.1 For binary classification problems

Given a training dataset T = {(X_1, y_1), ..., (X_n, y_n)}, where X_i ∈ R^{p×q} is the i-th input sample and y_i ∈ {−1, 1} is the class label corresponding to X_i; it can be seen that the input samples X_i are represented in matrix form.
4.1.1 The 2D SVM support vector machine is defined as follows:

min_{W,b,ξ} (1/2)‖W‖_F² + C Σ_{i=1}^{n} ξ_i   (3)
s.t. y_i(<W, X_i> + b) ≥ 1 − ξ_i, i = 1, ..., n   (4)
ξ_i ≥ 0, i = 1, ..., n   (5)

where the normal matrix W determines the direction of the separating hyperplane and b is the bias term.
4.1.2 By the method of Lagrange multipliers, the Lagrangian of (3)-(5) is

L(W, b, α, β, ξ) = (1/2)‖W‖_F² + C Σ_i ξ_i − Σ_i α_i [y_i(<W, X_i> + b) − 1 + ξ_i] − Σ_i β_i ξ_i   (6)

where α_i ≥ 0, β_i ≥ 0 are the Lagrange multipliers. Setting the partial derivatives of L(W, b, α, β, ξ) with respect to W, b, and ξ_i to zero gives:

W = Σ_i α_i y_i X_i   (7)
Σ_i α_i y_i = 0   (8)
C = α_i + β_i, i = 1, ..., n   (9)

Substituting (7)-(9) into (6) gives the dual problem of (3)-(5):

max_α Σ_i α_i − (1/2) Σ_i Σ_j α_i α_j y_i y_j <X_i, X_j>   (10)
s.t. Σ_i α_i y_i = 0   (11)
0 ≤ α_i ≤ C, i = 1, ..., n   (12)

where <X_i, X_j> is the inner product of X_i and X_j.
4.1.3 When the input samples X_i are in vector form, the optimization model (3)-(5) degenerates into the standard support vector machine. If <X_i, X_j> is computed using the primitive form of the input samples, the optimal solution of (3)-(5) is the same as that of the linear support vector machine. Because of the "curse of dimensionality" and the small-sample problem, support vector machines cannot effectively handle matrix-sample problems, and the optimization model (3)-(5) would run into the same problem. Specifically, the dual form of (3)-(5) depends only on the inner products between sample data, and the inner product operation <X_i, X_j> in (10) does not make good use of the structural information of the sample data. Considering that the SVD of a matrix better reflects the structural information and inherent correlations of matrix data, the SVD of each matrix is used in place of the original matrix input, thereby improving the computation of the matrix inner products. The benefit of this is twofold: on the one hand, the discrimination ability of the learning machine can be improved; on the other hand, the learning speed of the learning machine can be accelerated.
4.1.4 According to the partial SVD of each sample performed in Step 3, r singular values and the corresponding left and right singular vectors are obtained for each sample. From these, k singular values and their corresponding left and right singular vectors are randomly selected, namely {σ_ip, u_ip, v_ip}_{p=1}^{k} for X_i and {σ_jq, u_jq, v_jq}_{q=1}^{k} for X_j. The inner product of matrices X_i and X_j is then calculated as follows:

<X_i, X_j> = Σ_{p=1}^{k} Σ_{q=1}^{k} σ_ip σ_jq (u_ip^T u_jq)(v_ip^T v_jq)   (13)

Substituting (13) into (10) gives:

max_α Σ_i α_i − (1/2) Σ_i Σ_j α_i α_j y_i y_j Σ_{p=1}^{k} Σ_{q=1}^{k} σ_ip σ_jq (u_ip^T u_jq)(v_ip^T v_jq)   (14)
s.t. Σ_i α_i y_i = 0   (15)
0 ≤ α_i ≤ C, i = 1, ..., n   (16)

From (7) it can be seen that the weight matrix W of the separating hyperplane can be expressed as a linear combination of the training samples in two-dimensional space. The optimization model (14)-(16) is the 2D-SVM; the 2D-SVM can be viewed as the extension of the linear SVM to two-dimensional matrices, and therefore (14)-(16) can be solved with the SMO algorithm.
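The inner product (13) amounts to weighting the cross Gram matrices of the retained left and right singular vectors. An illustrative NumPy sketch (function name `svd_inner` is our own): when all singular triplets are kept it reproduces the exact Frobenius inner product tr(X_i^T X_j); keeping only k randomly selected triplets gives the approximation used by each base learner.

```python
import numpy as np

def svd_inner(si, Ui, Vit, sj, Uj, Vjt):
    """<X_i, X_j> = sum_p sum_q s_ip * s_jq * (u_ip . u_jq)(v_ip . v_jq),
    computed from the retained singular triplets of each matrix.
    Ui/Uj hold left singular vectors as columns; Vit/Vjt hold right
    singular vectors as rows (the V^T convention of np.linalg.svd)."""
    UtU = Ui.T @ Uj        # cross Gram matrix of left singular vectors
    VtV = Vit @ Vjt.T      # cross Gram matrix of right singular vectors
    return float(np.sum((si[:, None] * sj[None, :]) * UtU * VtV))

rng = np.random.default_rng(0)
Xi = rng.standard_normal((4, 3))
Xj = rng.standard_normal((4, 3))
Ui, si, Vit = np.linalg.svd(Xi, full_matrices=False)
Uj, sj, Vjt = np.linalg.svd(Xj, full_matrices=False)

exact = float(np.trace(Xi.T @ Xj))            # Frobenius inner product
approx = svd_inner(si, Ui, Vit, sj, Uj, Vjt)  # all triplets kept -> exact
```

Slicing `si[:k]`, `Ui[:, :k]`, `Vit[:k, :]` (and likewise for X_j) before calling `svd_inner` yields the randomized k-triplet inner product of formula (13).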
The classification decision function of the base learner, the 2D SVM classifier f(X), is

f(X) = sgn( Σ_{i=1}^{n} α_i y_i <X_i, X> + b ), with <X_i, X> = Σ_{p=1}^{k} Σ_{q=1}^{k} σ_ip σ_q (u_ip^T u_q)(v_ip^T v_q),

where σ_ip, σ_q, u_ip, u_q, v_ip and v_q are the singular values and corresponding left and right singular vectors of X_i and of the test sample X, respectively.
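Given the learned multipliers α_i, labels y_i, bias b, and the SVD-based inner products <X_i, X> between the training samples and a test sample X, the decision function reduces to one dot product. A toy sketch (illustrative; `svm_decision` and the numbers below are our own, not from the patent):

```python
import numpy as np

def svm_decision(alphas, ys, K_col, b):
    """2D SVM classification decision for one test sample X:
    sign( sum_i alpha_i * y_i * <X_i, X> + b ), where K_col[i] holds
    the SVD-based inner product <X_i, X> of training sample i with X."""
    return int(np.sign(np.dot(alphas * ys, K_col) + b))

# hypothetical multipliers: two support samples with labels +1 and -1
alphas = np.array([1.0, 1.0])
ys = np.array([1.0, -1.0])
pred = svm_decision(alphas, ys, K_col=np.array([2.0, 1.0]), b=0.0)
```

Here the positive-class term (1·1·2) outweighs the negative one (1·(−1)·1), so the sample is assigned the +1 class.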
4.2 For 2D SVM multi-class problems

The "one-vs-one" (OvO) strategy is adopted, as follows: given a dataset D = {(X_i, y_i)}, y_i ∈ {C_1, C_2, ..., C_N}, the N classes are paired two by two, producing N(N−1)/2 binary classification tasks. For example, OvO trains one classifier to distinguish classes C_i and C_j; this classifier takes the C_i-class samples in D_t as positive examples and the C_j-class samples as negative examples. In the test phase, a new sample is presented to all classifiers simultaneously, yielding N(N−1)/2 classification results, and the final result is produced by voting: the most-predicted class is taken as the final classification result.
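The OvO combination step can be sketched as follows (illustrative only; `ovo_vote` and the hard-coded votes are our own — any binary base learner, such as the 2D SVM above, would supply the pairwise decisions):

```python
from itertools import combinations
from collections import Counter

def ovo_vote(pair_preds):
    """Combine the N(N-1)/2 pairwise OvO decisions for one test
    sample: each pairwise classifier casts one vote for a class
    label, and the most-voted class is the final prediction."""
    return Counter(pair_preds).most_common(1)[0][0]

# N = 3 classes -> 3*(3-1)/2 = 3 pairwise classifiers
classes = ["C1", "C2", "C3"]
pairs = list(combinations(classes, 2))          # (C1,C2), (C1,C3), (C2,C3)
# hypothetical votes cast by the three pairwise classifiers
pred = ovo_vote(["C1", "C3", "C1"])
```

With two of the three pairwise classifiers voting for C1, the final OvO prediction is C1.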
The above explanation of the 2D SVM parallels the definition and derivation of the conventional support vector machine (SVM). The difference lies in the dimensionality of the data processed in <X_i, X_j>: the SVM processes inner products of vector samples, whereas the 2D SVM proposed by the present invention can process inner products of matrix samples (such as matrices of image pixels). The two matrices are not fed to the inner product directly; before computing inner products, an SVD is performed on each matrix and a random selection is made among the r largest singular values (similarly to random forests, this serves to obtain base classifiers with good diversity), and the selected singular values and singular vectors are then used in place of the original matrix in the inner product. In this way, (1) the structural damage caused by pulling a matrix into a vector is avoided (for example, two pixels that are vertically adjacent in an image lose their positional relationship after vectorization), and (2) the inner product computation is accelerated.
Finally, M new sample sets D_1, ..., D_M are obtained with the Bootstrap random sampling method of Step 2. The above training method is applied to each sample set, finally yielding M 2D SVM base classifiers {h_1, h_2, ..., h_M}.
Step 5: the base classifiers are combined according to the relative majority voting criterion to obtain the integrated learning device.

In this step, the base classifiers are merged into a stronger classifier, the integrated learning device. The base classifiers are combined using the relative majority (plurality) voting criterion; the mathematical expression of the combination is

H(X) = argmax_{C_j} Σ_{m=1}^{M} I(h_m(X) = C_j),

where I(·) is the indicator function. Finally, the resulting integrated learning device is used to classify and identify the samples to be classified.
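The relative majority voting criterion can be sketched in a few lines of NumPy (illustrative; `plurality_vote` is our own name, and the votes below stand in for the predictions of M = 3 base classifiers):

```python
import numpy as np

def plurality_vote(base_preds):
    """Relative-majority (plurality) voting: for each test sample the
    ensemble outputs the class label predicted by the largest number
    of the M base classifiers. base_preds has shape (M, n_test),
    one row per base classifier."""
    base_preds = np.asarray(base_preds)
    fused = []
    for col in base_preds.T:                    # votes for one test sample
        labels, counts = np.unique(col, return_counts=True)
        fused.append(labels[np.argmax(counts)])
    return np.array(fused)

# M = 3 base classifiers voting on 2 test samples
H = plurality_vote([[1, 2],
                    [1, 1],
                    [2, 1]])
```

For the first sample the votes are {1, 1, 2} and for the second {2, 1, 1}, so the ensemble outputs class 1 for both.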
Claims (6)
1. An integrated learning device based on singular value selection, characterized by comprising the following steps:
Step 1: normalization preprocessing is performed on the training sample set;
Step 2: sampling with replacement is performed on the normalized, preprocessed training sample set using the Bootstrap random sampling method, producing M new sample sets;
Step 3: partial SVD is performed on each sample in the M new sample sets to obtain each sample's singular values and left and right singular vectors;
Step 4: each time, k singular values and their corresponding left and right singular vectors are randomly drawn to generate a 2D SVM base learner; the M new sample sets are trained respectively, yielding M 2D SVM base classifiers;
Step 5: the base classifiers are combined according to the relative majority voting criterion to obtain the integrated learning device, which is used to classify and identify the samples to be classified.
2. The integrated learning device based on singular value selection according to claim 1, characterized in that the method of normalization preprocessing of the training sample set in Step 1 is as follows:
S101: each sample in the training sample set D = {(X_i, y_i)}_{i=1}^{n} is preprocessed separately, obtaining the normalized training sample X_i' and its class label y_i:

X_i' = (X_i − repmat{min(X_i)}) / (max(X_i) − min(X_i)),

where max(X_i) denotes the maximum of the elements of training sample X_i, min(X_i) denotes the minimum of the elements of X_i, and repmat{min(X_i)} ∈ R^{p×q} denotes the sample-minimum matrix, whose elements are all min(X_i);
S102: all preprocessed training samples X_i' and their class labels y_i constitute the preprocessed training sample set D'.
3. The integrated learning device based on singular value selection according to claim 1, characterized in that the method of partial SVD of each sample in the M new sample sets in Step 3 is as follows:
S301: the SVD of sample X_i takes the form X_i = UΣV^T, where X_i ∈ R^{p×q} is a two-dimensional matrix, U ∈ R^{p×p} is the matrix formed by the left singular vectors of X_i, Σ ∈ R^{p×q} is the diagonal matrix formed by the singular values of X_i, and V^T ∈ R^{q×q} is the matrix formed by the right singular vectors of X_i;
S302: the r largest singular values are used to approximately describe sample X_i; the partial singular value decomposition takes the form

X_i ≈ Σ_{p=1}^{r} σ_ip u_ip v_ip^T,

where σ_ip, u_ip, v_ip are the p-th singular value of X_i and its corresponding left and right singular vectors.
4. The integrated learning device based on singular value selection according to claim 3, characterized in that the generation method of the 2D SVM base learner in Step 4 is as follows:
S401: for a binary classification task, a training dataset T = {(X_1, y_1), ..., (X_n, y_n)} is given, where X_i ∈ R^{p×q} is the i-th input sample and y_i ∈ {−1, 1} is the class label corresponding to X_i;
S402: the 2D SVM support vector machine is defined as

min_{W,b,ξ} (1/2)‖W‖_F² + C Σ_{i=1}^{n} ξ_i, s.t. y_i(<W, X_i> + b) ≥ 1 − ξ_i, ξ_i ≥ 0, i = 1, ..., n,

where W is the normal matrix and b is the bias term; by the method of Lagrange multipliers, the dual problem of the 2D SVM support vector machine is

max_α Σ_i α_i − (1/2) Σ_i Σ_j α_i α_j y_i y_j <X_i, X_j>, s.t. Σ_i α_i y_i = 0, 0 ≤ α_i ≤ C, i = 1, ..., n,   (1)

where <X_i, X_j> is the inner product of X_i ∈ R^{p×q} and X_j ∈ R^{p×q}, C = α_i + β_i, i = 1, ..., n, and α_i ≥ 0, β_i ≥ 0 are the Lagrange multipliers;
S403: from the r singular values and corresponding left and right singular vectors of each sample, k singular values and their corresponding left and right singular vectors are randomly selected, namely {σ_ip, u_ip, v_ip}_{p=1}^{k} and {σ_jq, u_jq, v_jq}_{q=1}^{k}; the inner product of X_i and X_j is then calculated as

<X_i, X_j> = Σ_{p=1}^{k} Σ_{q=1}^{k} σ_ip σ_jq (u_ip^T u_jq)(v_ip^T v_jq);   (2)

substituting formula (2) into formula (1) gives the final form of the 2D SVM; the classification decision function of the base learner, the 2D SVM classifier f(X), is

f(X) = sgn( Σ_{i=1}^{n} α_i y_i <X_i, X> + b ),

where σ_ip, σ_q, u_ip, u_q, v_ip and v_q are the singular values and corresponding left and right singular vectors of X_i and of X, respectively;
S404: for 2D SVM multi-class tasks, classification decisions are made using the one-vs-one strategy.
5. The integrated learning device based on singular value selection according to claim 4, characterized in that the method of making classification decisions using the one-vs-one strategy in Step S404 is: given a dataset D = {(X_i, y_i)}, X_i ∈ R^{p×q}, y_i ∈ {C_1, C_2, ..., C_N}, the N classes are paired two by two, producing N(N−1)/2 binary classification tasks.
6. The integrated learning device based on singular value selection according to claim 4, characterized in that the method of combining the base classifiers according to the relative majority voting criterion in Step 5 is:

H(X) = argmax_{C_j} Σ_{m=1}^{M} I(h_m(X) = C_j),

where I(·) is the indicator function and h_1, ..., h_M are the base classifiers.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610856560.6A CN106446951A (en) | 2016-09-28 | 2016-09-28 | Singular value selection-based integrated learning device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN106446951A true CN106446951A (en) | 2017-02-22 |
Family
ID=58169509
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610856560.6A Pending CN106446951A (en) | 2016-09-28 | 2016-09-28 | Singular value selection-based integrated learning device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106446951A (en) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107220346A (en) * | 2017-05-27 | 2017-09-29 | 荣科科技股份有限公司 | A kind of higher-dimension deficiency of data feature selection approach |
CN108615024A (en) * | 2018-05-03 | 2018-10-02 | 厦门大学 | A kind of EEG signal disaggregated model based on genetic algorithm and random forest |
CN110163252A (en) * | 2019-04-17 | 2019-08-23 | 平安科技(深圳)有限公司 | Data classification method and device, electronic equipment, storage medium |
WO2020056621A1 (en) * | 2018-09-19 | 2020-03-26 | 华为技术有限公司 | Learning method and apparatus for intention recognition model, and device |
CN111767803A (en) * | 2020-06-08 | 2020-10-13 | 北京理工大学 | Identification method for anti-target attitude sensitivity of synthetic extremely-narrow pulse radar |
CN113095994A (en) * | 2021-04-15 | 2021-07-09 | 河北工程大学 | Robust digital image blind watermarking method based on support vector machine and singular value decomposition |
Legal Events
Date | Code | Title | Description
---|---|---|---
| C06 | Publication | |
| PB01 | Publication | |
| C10 | Entry into substantive examination | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20170222 |