CN113239795A - Face recognition method, system and computer storage medium based on local preserving projection - Google Patents

Face recognition method, system and computer storage medium based on local preserving projection

Info

Publication number
CN113239795A
Authority
CN
China
Prior art keywords
image
sub
face recognition
images
feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110515326.8A
Other languages
Chinese (zh)
Inventor
张凯
周建设
董心
姜阳
朱丽雅
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Capital Normal University
Original Assignee
Capital Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Capital Normal University
Priority to CN202110515326.8A
Publication of CN113239795A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/213 Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G06F18/2132 Feature extraction based on discrimination criteria, e.g. discriminant analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification

Abstract

The application discloses a face recognition method, system and computer storage medium based on local preserving projection (LPP). Building on the characteristics of LPP, the projection matrix is improved, features of the face image are extracted with a supervised two-way two-dimensional LPP method, and the image is classified with the sub-feature voting method proposed in the application; experimental verification shows the approach is more effective than the LPP method.

Description

Face recognition method, system and computer storage medium based on local preserving projection
Technical Field
The present application relates to the field of face recognition technologies, and in particular to a face recognition method, system and computer storage medium based on local preserving projection.
Background
Face recognition is a biometric identification technology that identifies a person from the visual characteristics of the face. It draws on a number of research areas, such as image processing, machine vision and pattern recognition. With the development of computer technology and the breadth of application demand, face recognition has become a research hotspot and has advanced considerably. The main approaches currently in use include geometric feature methods, feature subspace methods, neural network methods, elastic matching methods and hidden Markov model methods.
Feature subspace methods mainly include PCA, LDA and ICA. These are traditional linear dimensionality reduction methods that work well for globally linear structures, but face features are usually highly nonlinear, so reducing dimensionality with these methods can lose the intrinsic relations hidden in the data.
How to improve the face recognition effect therefore remains a technical problem to be solved.
Disclosure of Invention
In view of the above technical problems, the present application provides a face recognition method, system and computer storage medium based on local preserving projection.
A first aspect of the present application provides a face recognition method based on local preserving projection, where the method includes:
S1, receiving and responding to a face recognition request, capturing a face picture of the user and transmitting the face picture to a processing module;
S2, the processing module extracting features of the face image with a local preserving projection algorithm and classifying with a nearest neighbor classifier;
and S3, outputting a face recognition result.
Preferably, the local preserving projection algorithm is a two-way two-dimensional local preserving projection algorithm (S(2D)²LPP).
Preferably, before the processing module performs feature extraction on the face image with the local preserving projection algorithm, the method further comprises the following training steps:
dividing one face image in a training image set into m sub-images, the sub-images forming a sub-image set; each non-empty subset of this set is a sub-feature, so that one face image has n sub-features, where n = 2^m - 1;
The weights of the sub-features are calculated by a statistical method. For a training gallery with s individuals and t images per person, the specific steps are as follows:
(11) dividing the image into m sub-images to form n sub-features X_i, i = 1, …, n;
(12) the corresponding sub-images of all training images form a sub-image set; a projection matrix is computed for each sub-image position, and the feature vector of the corresponding sub-image of each image is computed;
(13) concatenating, in order, the feature vectors of the sub-images contained in each sub-feature into one long vector, which serves as the feature vector of that sub-feature;
(14) carrying out multiple rounds of random detection and counting the contribution of each sub-feature to classification as the weight of that sub-feature.
Preferably, the random detection method specifically comprises the following steps:
(21) for the s individuals of the training set, randomly selecting one image per person as a known image, and then randomly selecting 30% of the remaining images as unknown images;
(22) using each sub-feature alone to represent the whole image, computing the distance between each unknown image and every known image in a k-nearest-neighbour manner; when the k known images closest to the unknown image include one from the same person as the unknown image, the counter count_i of that sub-feature is incremented by 1:
count_i = count_i + 1
(23) after multiple rounds of random detection, computing the weight of each sub-feature as
w_i = count_i / Count
where Count is the total number of detection trials carried out.
Preferably, the processing module performing feature extraction on the face image with the local preserving projection algorithm and classifying with a nearest neighbor classifier comprises the following steps:
(31) partitioning the known images into blocks, computing the features of each sub-image, and combining them into the feature vector of each sub-feature;
(32) for an unknown image to be recognized, first partitioning the image into blocks and extracting the feature vector of each sub-feature;
(33) computing the distance between each corresponding sub-feature of the unknown image and of the known images, and, in a k-nearest-neighbour voting manner, casting a vote of weight w_i to each of the k known images closest to the unknown image;
(34) counting the votes received by each known image and taking the unknown image to be of the same person as the known image with the most votes.
Preferably, the features of the known image are extracted using the following formula:
[formula image omitted]
preferably, said S (2D)2The projection formula of LPP is:
yi=VTXiU
where V is the projection matrix in the image column direction and U is the projection matrix in the image row direction.
Preferably, the output face recognition result comprises the captured picture of the user's face and/or the matched known image, together with a textual expression of the corresponding recognition result.
A second aspect of the present application provides a face recognition system based on local preserving projection, the system comprising a response module, a processing module and an output module:
the response module is used for receiving and responding to the face recognition request, capturing a face picture of the user and transmitting the face picture to the processing module;
the processing module is used for extracting the features of the facial image by adopting a local preserving projection algorithm and classifying by adopting a nearest neighbor classifier;
and the output module is used for outputting the face recognition result.
A third aspect of the present application provides a face recognition apparatus based on local preserving projection, characterized in that the apparatus includes:
a memory storing executable program code;
a processor coupled with the memory;
the processor calls the executable program code stored in the memory to execute the face recognition method based on local preserving projection as described above.
A fourth aspect of the present application provides a storage medium storing computer instructions which, when called, are used to execute the face recognition method based on local preserving projection as described above.
The invention has the beneficial effects that:
the method is based on the characteristics of the LPP, the projection matrix is improved, the characteristics of the face image are extracted by adopting a supervised two-way two-dimensional LPP method, the sub-characteristic voting method provided by the application is used for classification, and the method is more effective than the LPP method through experimental verification.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are required to be used in the embodiments will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present application and therefore should not be considered as limiting the scope, and for those skilled in the art, other related drawings can be obtained from the drawings without inventive effort.
Fig. 1 is a schematic flowchart of a face recognition method based on local preserving projection disclosed in an embodiment of the present application;
fig. 2 is a schematic structural diagram of a face recognition system based on local preserving projection disclosed in an embodiment of the present application;
fig. 3 is a schematic structural diagram of a face recognition device based on local preserving projection according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. The components of the embodiments of the present application, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the present application, presented in the accompanying drawings, is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
In the description of the present application, it should be noted that terms such as "upper", "lower", "inside" and "outside", when used to indicate an orientation or positional relationship, are based on the orientation or positional relationship shown in the drawings or on the orientation in which the product of the invention is usually used. They are used only for convenience and simplicity of description, and do not indicate or imply that the device or element referred to must have a specific orientation or be constructed and operated in a specific orientation; they should therefore not be construed as limiting the present application.
Furthermore, the appearances of the terms "first," "second," and the like, if any, are used solely to distinguish one from another and are not to be construed as indicating or implying relative importance.
It should be noted that the features of the embodiments of the present application may be combined with each other without conflict.
Example One
Referring to fig. 1, fig. 1 is a schematic flowchart of a face recognition method based on local preserving projection according to an embodiment of the present application. As shown in fig. 1, the face recognition method based on local preserving projection of this embodiment includes:
S1, receiving and responding to a face recognition request, capturing a face picture of the user and transmitting the face picture to a processing module;
S2, the processing module extracting features of the face image with a local preserving projection algorithm and classifying with a nearest neighbor classifier;
and S3, outputting a face recognition result.
In this embodiment, aimed at face recognition under the single-sample condition, the application builds on the local preserving projection method and proposes a supervised two-way two-dimensional local preserving projection method for extracting face features. In the recognition stage, a sub-feature voting method is proposed: the face image is first divided into blocks, each non-empty subset of the sub-images is taken as a sub-feature and its weight is estimated statistically, and recognition is finally performed by voting. A series of experiments on the YaleB and Yale face databases shows that the method is effective.
In this alternative embodiment, the local preserving projection algorithm is a two-way two-dimensional local preserving projection algorithm (S(2D)²LPP).
In this optional embodiment, before the processing module performs feature extraction on the face image with the local preserving projection algorithm, the method further includes the following training steps:
dividing one face image in a training image set into m sub-images, the sub-images forming a sub-image set; each non-empty subset of this set is a sub-feature, so that one face image has n sub-features, where n = 2^m - 1;
The weights of the sub-features are calculated by a statistical method. For a training gallery with s individuals and t images per person, the specific steps are as follows (a code sketch follows this list):
(11) dividing the image into m sub-images to form n sub-features X_i, i = 1, …, n;
(12) the corresponding sub-images of all training images form a sub-image set; a projection matrix is computed for each sub-image position, and the feature vector of the corresponding sub-image of each image is computed;
(13) concatenating, in order, the feature vectors of the sub-images contained in each sub-feature into one long vector, which serves as the feature vector of that sub-feature;
(14) carrying out multiple rounds of random detection and counting the contribution of each sub-feature to classification as the weight of that sub-feature.
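As a minimal illustration of steps (11) and (13), the following Python sketch partitions an image into m sub-images and enumerates the n = 2^m - 1 non-empty subsets as sub-features. The horizontal-strip block layout, the helper names and the `project` callable standing in for the learned projection are assumptions for illustration, not details taken from the patent.

```python
from itertools import combinations
import numpy as np

def split_into_sub_images(image, m):
    """Split a face image into m horizontal strips (an assumed block layout)."""
    return np.array_split(image, m, axis=0)

def enumerate_sub_features(m):
    """All non-empty subsets of {0, ..., m-1}; each subset is one sub-feature."""
    subsets = []
    for size in range(1, m + 1):
        subsets.extend(combinations(range(m), size))
    return subsets                      # len(subsets) == 2**m - 1

def sub_feature_vector(sub_images, subset, project):
    """Step (13): concatenate, in order, the projected features of the sub-images in one subset."""
    return np.concatenate([project(sub_images[i]).ravel() for i in subset])
```

For m = 3, for example, enumerate_sub_features(3) returns 7 index tuples, from single sub-images up to the full set of sub-images.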
In this optional embodiment, the specific procedure of the random detection method is as follows (a sketch of the weight estimation is given after the list):
(21) for the s individuals of the training set, randomly selecting one image per person as a known image, and then randomly selecting 30% of the remaining images as unknown images;
(22) using each sub-feature alone to represent the whole image, computing the distance between each unknown image and every known image in a k-nearest-neighbour manner; when the k known images closest to the unknown image include one from the same person as the unknown image, the counter count_i of that sub-feature is incremented by 1:
count_i = count_i + 1
(23) after multiple rounds of random detection, computing the weight of each sub-feature as
w_i = count_i / Count
where Count is the total number of detection trials carried out.
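A hedged Python sketch of steps (21)–(23) follows, under stated assumptions: Euclidean distance is used between sub-feature vectors, and Count is read as the total number of unknown-image trials accumulated over all rounds. These choices, like all helper names, are illustrative rather than taken from the patent.

```python
import numpy as np

def estimate_sub_feature_weights(features, labels, n_rounds, k=1, unknown_ratio=0.3, seed=0):
    """features[i][j]: feature vector of sub-feature i for training image j; labels[j]: person id."""
    labels = np.asarray(labels)
    rng = np.random.default_rng(seed)
    counts = np.zeros(len(features))
    total_trials = 0
    persons = np.unique(labels)
    for _ in range(n_rounds):
        known, rest = [], []
        for p in persons:                                   # one known image per person
            idx = rng.permutation(np.flatnonzero(labels == p))
            known.append(idx[0])
            rest.extend(idx[1:])
        unknown = rng.choice(rest, size=max(1, int(unknown_ratio * len(rest))), replace=False)
        total_trials += len(unknown)
        for i in range(len(features)):                      # each sub-feature alone represents the image
            for u in unknown:
                dists = [np.linalg.norm(features[i][u] - features[i][g]) for g in known]
                nearest = np.argsort(dists)[:k]
                if any(labels[known[j]] == labels[u] for j in nearest):
                    counts[i] += 1                          # counter count_i of sub-feature i
    return counts / total_trials                            # w_i = count_i / Count
```

The returned vector plays the role of the weights w_i used in the voting stage below.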
In this optional embodiment, the processing module performing feature extraction on the face image with the local preserving projection algorithm and classifying with a nearest neighbor classifier comprises the following steps (a voting sketch follows the list):
(31) partitioning the known images into blocks, computing the features of each sub-image, and combining them into the feature vector of each sub-feature;
(32) for an unknown image to be recognized, first partitioning the image into blocks and extracting the feature vector of each sub-feature;
(33) computing the distance between each corresponding sub-feature of the unknown image and of the known images, and, in a k-nearest-neighbour voting manner, casting a vote of weight w_i to each of the k known images closest to the unknown image;
(34) counting the votes received by each known image and taking the unknown image to be of the same person as the known image with the most votes.
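The weighted voting of steps (31)–(34) can be sketched as follows, where w_i are the weights estimated above; Euclidean distance and the simple argmax tie-breaking are assumptions made for this illustration.

```python
import numpy as np

def classify_by_sub_feature_voting(unknown_feats, known_feats, known_labels, weights, k=1):
    """unknown_feats[i]: sub-feature i of the unknown image;
    known_feats[i][g]: sub-feature i of known image g; weights[i]: weight w_i."""
    votes = {}
    for i, w in enumerate(weights):
        d = [np.linalg.norm(unknown_feats[i] - kf) for kf in known_feats[i]]
        for g in np.argsort(d)[:k]:            # k known images closest in this sub-feature
            votes[g] = votes.get(g, 0.0) + w   # cast a vote of weight w_i
    best = max(votes, key=votes.get)           # known image with the most votes
    return known_labels[best]
```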
In this alternative embodiment, the features of the known image are extracted using the following formula:
[formula image omitted]
in this alternative embodiment, said S (2D)2The projection formula of LPP is:
yi=VTXiU
where V is the projection matrix in the image column direction and U is the projection matrix in the image row direction.
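Applying this bidirectional projection is a single matrix product on each side of an image block. The sketch below only performs the projection and assumes the column-direction matrix V and row-direction matrix U have already been learned by the S(2D)²LPP training, whose computation is not reproduced here.

```python
import numpy as np

def project_image(X, V, U):
    """Bidirectional 2D projection y = V^T X U.
    X: h x w image (or sub-image), V: h x p column-direction matrix, U: w x q row-direction matrix."""
    return V.T @ X @ U                 # resulting feature matrix is p x q
```

Under these assumptions, the flattened p x q output would serve as the sub-image feature that is concatenated into the sub-feature vectors described above.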
In this alternative embodiment, the output face recognition result includes the captured picture of the user's face and/or the matched known image, together with a textual expression of the corresponding recognition result.
In this embodiment, the output may include the captured facial picture and/or the known image of the user together with the corresponding textual recognition result. The user can thus see the captured picture, the pre-stored known image used for comparison, and the recognition result side by side, and can make a human judgment on the result; this judgment can later be fed back into the training step described above, as a training template, to evaluate or optimize the training process.
Example Two
Referring to fig. 2, fig. 2 is a schematic structural diagram of a face recognition system based on local preserving projection according to an embodiment of the present application. As shown in fig. 2, the face recognition system based on local preserving projection of this embodiment includes a response module, a processing module and an output module:
the response module is used for receiving and responding to the face recognition request, capturing a face picture of the user and transmitting the face picture to the processing module;
the processing module is used for extracting the features of the facial image by adopting a local preserving projection algorithm and classifying by adopting a nearest neighbor classifier;
and the output module is used for outputting the face recognition result.
Example Three
Referring to fig. 3, fig. 3 is a schematic structural diagram of a face recognition device based on local preserving projection according to an embodiment of the present application. As shown in fig. 3, the face recognition device based on local preserving projection of this embodiment includes:
a memory storing executable program code;
a processor coupled with the memory;
the processor calls the executable program code stored in the memory to execute the face recognition method based on local preserving projection as described above.
Example Four
An embodiment of the present application provides a storage medium storing computer instructions which, when called, are used to execute the face recognition method based on local preserving projection according to the first embodiment.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present application should be covered within the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (11)

1. A face recognition method based on local preserving projection, characterized in that the method comprises the following steps:
S1, receiving and responding to a face recognition request, capturing a face picture of the user and transmitting the face picture to a processing module;
S2, the processing module extracting features of the face image with a local preserving projection algorithm and classifying with a nearest neighbor classifier;
and S3, outputting a face recognition result.
2. The method of claim 1, wherein: the local preserving projection algorithm is a two-way two-dimensional local preserving projection algorithm (S(2D)²LPP).
3. The method of claim 1, wherein: before the processing module performs feature extraction on the face image with the local preserving projection algorithm, the method further comprises the following training steps:
dividing one face image in a training image set into m sub-images, the sub-images forming a sub-image set; each non-empty subset of this set is a sub-feature, so that one face image has n sub-features, where n = 2^m - 1;
the weights of the sub-features are calculated by a statistical method, and for a training gallery with s individuals and t images per person, the specific steps are as follows:
(11) dividing the image into m sub-images to form n sub-features X_i, i = 1, …, n;
(12) the corresponding sub-images of all training images form a sub-image set; a projection matrix is computed for each sub-image position, and the feature vector of the corresponding sub-image of each image is computed;
(13) concatenating, in order, the feature vectors of the sub-images contained in each sub-feature into one long vector, which serves as the feature vector of that sub-feature;
(14) carrying out multiple rounds of random detection and counting the contribution of each sub-feature to classification as the weight of that sub-feature.
4. The method of claim 2, wherein: the random detection method comprises the following specific steps:
(21) for the s individuals of the training set, randomly selecting one image per person as a known image, and then randomly selecting 30% of the remaining images as unknown images;
(22) using each sub-feature alone to represent the whole image, computing the distance between each unknown image and every known image in a k-nearest-neighbour manner; when the k known images closest to the unknown image include one from the same person as the unknown image, the counter count_i of that sub-feature is incremented by 1:
count_i = count_i + 1
(23) after multiple rounds of random detection, computing the weight of each sub-feature as
w_i = count_i / Count
where Count is the total number of detection trials carried out.
5. The method of claim 1, wherein: the processing module performing feature extraction on the face image with the local preserving projection algorithm and classifying with a nearest neighbor classifier comprises the following steps:
(31) partitioning the known images into blocks, computing the features of each sub-image, and combining them into the feature vector of each sub-feature;
(32) for an unknown image to be recognized, first partitioning the image into blocks and extracting the feature vector of each sub-feature;
(33) computing the distance between each corresponding sub-feature of the unknown image and of the known images, and, in a k-nearest-neighbour voting manner, casting a vote of weight w_i to each of the k known images closest to the unknown image;
(34) counting the votes received by each known image and taking the unknown image to be of the same person as the known image with the most votes.
6. The method of claim 4, wherein: the features of the known image are extracted using the following formula:
[formula image omitted]
7. The method according to any one of claims 1 to 5, wherein: the projection formula of S(2D)²LPP is:
y_i = V^T X_i U
where V is the projection matrix in the image column direction and U is the projection matrix in the image row direction.
8. The method of claim 6, wherein: the output face recognition result comprises the captured picture of the user's face and/or the matched known image, together with a textual expression of the corresponding recognition result.
9. A face recognition system based on local preserving projection, the system comprising a response module, a processing module and an output module:
the response module is used for receiving and responding to the face recognition request, capturing a face picture of the user and transmitting the face picture to the processing module;
the processing module is used for extracting the features of the facial image by adopting a local preserving projection algorithm and classifying by adopting a nearest neighbor classifier;
and the output module is used for outputting the face recognition result.
10. A face recognition device based on local preserving projection, the device comprising:
a memory storing executable program code;
a processor coupled with the memory;
the processor calls the executable program code stored in the memory to execute the face recognition method based on local preserving projection according to any one of claims 1 to 7.
11. A storage medium storing computer instructions which, when called, execute the face recognition method based on local preserving projection according to any one of claims 1 to 7.
CN202110515326.8A 2021-05-12 2021-05-12 Face recognition method, system and computer storage medium based on local preserving projection Pending CN113239795A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110515326.8A CN113239795A (en) 2021-05-12 2021-05-12 Face recognition method, system and computer storage medium based on local preserving projection

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110515326.8A CN113239795A (en) 2021-05-12 2021-05-12 Face recognition method, system and computer storage medium based on local preserving projection

Publications (1)

Publication Number Publication Date
CN113239795A true CN113239795A (en) 2021-08-10

Family

ID=77133920

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110515326.8A Pending CN113239795A (en) 2021-05-12 2021-05-12 Face recognition method, system and computer storage medium based on local preserving projection

Country Status (1)

Country Link
CN (1) CN113239795A (en)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104700089A (en) * 2015-03-24 2015-06-10 江南大学 Face identification method based on Gabor wavelet and SB2DLPP
CN112396004A (en) * 2020-11-23 2021-02-23 支付宝(杭州)信息技术有限公司 Method, apparatus and computer-readable storage medium for face recognition

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
相子喜 et al.: "Research on automatic detection and recognition of important persons in news images", Science Technology and Engineering (《科学技术与工程》) *


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210810