CN112836904A - Body mass index prediction method based on facial feature points - Google Patents

Body mass index prediction method based on facial feature points

Info

Publication number
CN112836904A
CN112836904A (application CN202110370398.8A)
Authority
CN
China
Prior art keywords
face
data
mass index
body mass index
feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110370398.8A
Other languages
Chinese (zh)
Inventor
姜红
王文锦
刘明
王誉锦
龚惠
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhongshan Hospital Fudan University
Original Assignee
Zhongshan Hospital Fudan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhongshan Hospital Fudan University
Priority to CN202110370398.8A
Publication of CN112836904A
Legal status: Pending


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/04Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Strategic Management (AREA)
  • Human Resources & Organizations (AREA)
  • Economics (AREA)
  • Evolutionary Computation (AREA)
  • Development Economics (AREA)
  • Multimedia (AREA)
  • Game Theory and Decision Science (AREA)
  • Human Computer Interaction (AREA)
  • General Health & Medical Sciences (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Marketing (AREA)
  • Operations Research (AREA)
  • Quality & Reliability (AREA)
  • Tourism & Hospitality (AREA)
  • General Business, Economics & Management (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The invention relates to a body mass index (BMI) prediction method based on facial feature points. A face photograph is associated with its corresponding BMI category and a regression model of the body mass index is established; the feature values extracted from the face photographs, together with the corresponding BMI categories, are used as data and fed into the regression model for training, yielding the regression relationship between the facial features and the BMI category. Thereafter, the corresponding BMI category can be obtained simply by photographing a face and feeding the photograph into the trained model. This measurement method shortens the workflow and eliminates unnecessary time. The only hardware required is a camera and a processor (CPU); in other words, the technique can be carried out on a mobile phone. Given that the mobile phone is by now almost a standard tool for everyone, this makes body-health measurement more portable and universal and thereby draws more attention to people's own health.

Description

Body mass index prediction method based on facial feature points
Technical Field
The invention relates to recognition technology, and in particular to a body mass index prediction method based on facial feature points.
Background
Currently, the body mass index (BMI) is generally calculated by dividing weight in kilograms by the square of height in meters; that is, a participant's height and weight are taken as inputs and the BMI is computed by formula. This approach is easily disturbed by external factors such as the limitations of the measuring equipment and the venue, and measuring, recording and calculating person by person is very tedious, so measurements for a group of people cannot be completed quickly. Previous studies have also shown that adult participants tend to overestimate or underestimate their own weight, which leads to inaccurate BMI values.
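For reference, the conventional calculation that the proposed method replaces is a single formula; the following minimal Python snippet (an illustration, not part of the patent) computes it directly from height and weight.

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Conventional body mass index: weight in kilograms divided by height in meters squared."""
    return weight_kg / height_m ** 2

print(round(bmi(70.0, 1.75), 1))  # 22.9
```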
Disclosure of Invention
The invention addresses the problem of inaccurate body mass index measurement caused by external factors, and provides a body mass index prediction method based on facial feature points that removes those external factors and predicts the body mass index through face recognition alone.
The technical scheme of the invention is as follows: a body mass index prediction method based on facial feature points, characterized in that a face photograph is associated with its corresponding body mass index category and a regression model of the body mass index is established; the feature values extracted from the face photographs and the corresponding BMI categories are taken as data, and a sufficient quantity of such data is collected to form a training data set and a test data set; the training data set is fed into the regression model for training, the regression relationship between the facial features and the BMI category is obtained from the training data set, and the trained regression relationship is applied to the test data set to check its performance; if the performance requirement is met, the corresponding BMI category can be obtained simply by photographing a face and feeding it into the trained model.
Preferably, the specific steps for extracting the facial feature values are as follows:
1) facial feature points of the five sense organs and the cheek contour are acquired through a camera and a model, and the 2D (X, Y) coordinates of the feature points are recorded as 2-dimensional data, or their 3D (X, Y, Z) coordinates as 3-dimensional data;
2) from each dimension of the acquired facial feature point coordinates, the mean value of all feature points in that dimension is subtracted, and the result is divided by the nose length or the distance between the inner corners of the two eyes to complete normalization, giving the normalized facial feature point values; the normalized facial feature point values are then corrected against a template selected from the face data, giving the corrected facial feature point values;
3) the facial feature point values processed in step 2) are combined to obtain the feature values.
Preferably, in step 3), the cheekbone and jawbone widths, the face length-to-area ratio and the lower-face-to-face-height ratio are computed from the corrected facial feature point values obtained in step 2), and one or more of them are selected in combination as the feature values; alternatively, the data matrix of the facial feature point values is used directly as the feature values.
Preferably, the body mass index categories are: BMI < 18.5 (underweight), 18.5 ≤ BMI ≤ 24 (normal) and BMI > 24 (overweight); when the classification model is used, the facial feature values and the corresponding BMI category are taken as data, and the ratio of the training data set to the test data set is 9:1.
Preferably, the regression model is a support vector machine regression; a linear kernel is selected for the two-dimensional data model and a polynomial kernel for the three-dimensional data model.
Preferably, the template is a piece of face data selected from the data set; once that face data has been selected as the training template, all face data are corrected using it. The correction consists of storing the normalized feature point values of the face to be corrected as a matrix A, computing the pseudo-inverse of A and multiplying it by the pre-stored point matrix B of the template face to obtain the relation ω, and finally multiplying A by ω to obtain the corrected face matrix A*.
The invention has the following beneficial effects: the body mass index prediction method based on facial feature points removes the tedious height and weight measurements required by traditional BMI measurement; instead, the BMI measurement is completed by scanning the face or taking a photograph, so the method shortens the workflow and eliminates unnecessary time. The only hardware required is a camera and a processor (CPU); in other words, the technique can be carried out on a mobile phone. Given that the mobile phone is by now almost a standard tool for everyone, this makes body-health measurement more portable and universal and thereby draws more attention to people's own health.
Drawings
FIG. 1 is a schematic diagram of facial feature points in a body mass index prediction method based on facial feature points according to the present invention;
FIG. 2 is a schematic diagram illustrating the human face normalization and correction processes in the method for predicting the body mass index based on the human face feature points according to the present invention;
FIG. 3 is a graph of the results of different kernel functions of the regression model of the support vector machine under 2D data according to the present invention;
FIG. 4 is a graph of the results of different kernel functions of the regression model of the support vector machine under 3D data according to the present invention;
FIG. 5 is a confusion matrix diagram of a random forest classification model under 2D data according to the present invention;
FIG. 6 is a confusion matrix diagram of a random forest classification model under 3D data according to the present invention;
FIG. 7 is a graph of results of different kernel functions of the random forest classification model of the present invention.
Detailed Description
The invention is described in detail below with reference to the figures and specific embodiments. The embodiments are implemented on the basis of the technical solution of the present invention, and a detailed implementation and a specific operation process are given, but the scope of protection of the invention is not limited to the following embodiments.
The human face conveys a person's physiological information: it can reflect sex, age, emotion and the like, and facial fatness is also visible. Since facial fatness is positively correlated with body mass index (BMI), the facial information of a subject can be collected through a camera and used for BMI measurement and obesity grade classification.
The body mass index prediction method based on facial feature points is implemented in the following concrete steps:
1. Acquiring the facial feature points: as shown in FIG. 1, 68 facial feature points covering the five sense organs and the cheek contour are obtained through a camera and a model, and the 2D (X-Y) or 3D (X-Y-Z) coordinates of the feature points are recorded. The 2D/3D facial feature point coordinates reflect the position information of the face in the two-dimensional plane or in three-dimensional space, respectively.
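The patent does not name a specific landmark model; the sketch below assumes the commonly used dlib 68-point shape predictor (the model file name is the one dlib distributes) purely to show how such coordinates could be obtained.

```python
import cv2
import dlib
import numpy as np

# Assumed tooling: dlib's frontal face detector and its 68-point shape predictor.
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

img = cv2.imread("face.jpg")                      # hypothetical input photograph
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
face = detector(gray, 1)[0]                       # take the first detected face
shape = predictor(gray, face)
landmarks = np.array([[p.x, p.y] for p in shape.parts()])  # shape (68, 2), 2D case
```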
2. Face normalization and correction: as shown in FIG. 2, normalization is performed by subtracting, from each dimension of the acquired facial feature point coordinates, the mean value of all feature points in that dimension (for 2-dimensional data the means of the X and Y coordinates, for 3-dimensional data the means of the X, Y and Z coordinates, each mean being the sum of that coordinate over all feature points divided by the number of feature points), and then dividing by the nose length (the distance from point 28 to point 34) or the inner canthi distance (the distance from point 40 to point 43), which reduces the scale of the measured values and simplifies the computation. The normalized facial feature point values are then corrected using a pre-selected template (a frontal face).
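A minimal sketch of this normalization step, assuming 0-based indexing of the 68-point scheme (so points 28/34 and 40/43 in the text become indices 27/33 and 39/42):

```python
import numpy as np

def normalize_landmarks(pts: np.ndarray, use_nose: bool = True) -> np.ndarray:
    """Center the landmarks per dimension and scale by a facial reference length."""
    centered = pts - pts.mean(axis=0)                 # subtract the per-dimension mean
    if use_nose:
        scale = np.linalg.norm(pts[27] - pts[33])     # nose length: point 28 to point 34
    else:
        scale = np.linalg.norm(pts[39] - pts[42])     # inner canthi: point 40 to point 43
    return centered / scale
```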
The template is a piece of face data selected from the data set; no particular face needs to be chosen, but once one piece of face data has been selected as the training template, all face data must be corrected with that same template. Correction means storing the normalized feature point values of the face to be corrected as a matrix A; taking the two-dimensional case as an example (the three-dimensional case is analogous), A = [x1 y1; x2 y2; …; x68 y68], a 68 × 2 matrix. The pseudo-inverse of A is then computed and multiplied by the pre-stored template point matrix B, which has the same form as A, to obtain the relation ω = A⁺B; finally A is multiplied by ω to obtain the corrected face data matrix A* = Aω.
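A sketch of this correction under the assumption that A and B are the (68 × 2) or (68 × 3) matrices described above:

```python
import numpy as np

def correct_to_template(A: np.ndarray, B: np.ndarray) -> np.ndarray:
    """Correct normalized landmarks A against template landmarks B.

    omega = pinv(A) @ B is the least-squares linear relation between the face
    and the template, and A* = A @ omega is the corrected face matrix.
    """
    omega = np.linalg.pinv(A) @ B
    return A @ omega
```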
3. Inputting the facial feature values: the 68 facial feature point values processed in step 2 may be combined to obtain feature values such as the cheekbone and jawbone widths, the face length-to-area ratio and the lower-face-to-face-height ratio, which are then input; alternatively, the normalized and corrected 2D (X-Y) or 3D (X-Y-Z) coordinates of the 68 facial feature points may be input directly as the feature values.
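The patent does not specify which landmarks define these widths, lengths and areas; the sketch below computes illustrative versions of the three ratios, and every landmark index and the jawline-polygon area definition are assumptions, not values from the patent.

```python
import numpy as np

def polygon_area(poly: np.ndarray) -> float:
    """Shoelace formula for the area of a closed polygon given as (N, 2) points."""
    x, y = poly[:, 0], poly[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))

def ratio_features(pts: np.ndarray) -> np.ndarray:
    """Illustrative ratio features from 68 2D landmarks (0-based indexing)."""
    cheek_width = np.linalg.norm(pts[1] - pts[15])   # approximate cheekbone width
    jaw_width   = np.linalg.norm(pts[4] - pts[12])   # approximate jawbone width
    face_height = np.linalg.norm(pts[8] - pts[27])   # chin to nose bridge
    lower_face  = np.linalg.norm(pts[8] - pts[33])   # chin to nose tip
    face_area   = polygon_area(pts[:17])             # area enclosed by the jawline points
    return np.array([cheek_width / jaw_width,
                     face_height / face_area,
                     lower_face / face_height])
```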
4. The face photographs are associated with their corresponding BMI categories and a regression model of the BMI is established. The corrected matrices A* obtained by extracting and correcting the feature points in the face photographs, together with the corresponding BMI categories, serve as the data, which form a training data set and a test data set in a 9:1 ratio (the training and test data are drawn at random from the data set). The training data set is fed into the regression model for training; the regression relationship between A* and the BMI category is obtained from the training data set, the trained relationship is applied to the test data set, and the performance is checked. If the verification meets the performance requirement, a BMI value can be obtained simply by photographing a face and feeding it into the trained model.
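A sketch of this training step with scikit-learn, assuming the corrected A* matrices and BMI labels have been saved to the hypothetical files named below; the 9:1 split and the linear kernel for 2D data follow the text, the rest is illustrative.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR

X = np.load("corrected_landmarks_2d.npy")    # hypothetical file: shape (n_samples, 68, 2)
y = np.load("bmi_values.npy")                # hypothetical file: shape (n_samples,)
X = X.reshape(len(X), -1)                    # flatten each A* matrix into one feature vector

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.1, random_state=0)
reg = SVR(kernel="linear")                   # the text reports a polynomial kernel works best for 3D data
reg.fit(X_train, y_train)
print("R^2 on the test set:", reg.score(X_test, y_test))
```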
The A* matrix need not be used here; as described in step 3, the ratio values may instead be used as the feature values, paired with the corresponding BMI category as one data item. When the amount of facial feature data is sufficient, age group and gender may also be associated with the data as feature values.
The classification used in this method divides BMI into three categories: BMI < 18.5 (underweight), 18.5 ≤ BMI ≤ 24 (normal) and BMI > 24 (overweight). When the classification model is used, A* and the corresponding BMI category are taken as one data input and the ratio of the training set to the test set is 9:1; the classification relationship between A* and the BMI category is obtained from the training data set by training the classification model, and the relationship is then applied to the test data set and its effectiveness is verified. The BMI could also be divided into five categories by adding obese and severely underweight classes, which is not elaborated here; the most common three-class scheme above is the one used for recognition.
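A sketch of the three-class setup with a random forest, reusing the hypothetical files from the previous sketch; the thresholds, the 9:1 split and the maximum depth of 5 follow the text, the rest is illustrative.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def bmi_class(bmi: float) -> int:
    """0 = underweight (BMI < 18.5), 1 = normal (18.5-24), 2 = overweight (BMI > 24)."""
    if bmi < 18.5:
        return 0
    return 1 if bmi <= 24 else 2

X = np.load("corrected_landmarks_2d.npy").reshape(-1, 68 * 2)    # hypothetical file
y = np.array([bmi_class(b) for b in np.load("bmi_values.npy")])  # hypothetical file

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.1, random_state=0)
clf = RandomForestClassifier(max_depth=5, random_state=0)        # maximum depth 5 as in the text
clf.fit(X_train, y_train)
print("Test accuracy:", clf.score(X_test, y_test))
```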
The regression model presented here is a support vector machine regression, for which three different kernel functions are compared; the purpose of the kernel function is to simplify the model computation. The results show that, for the support vector machine regression model, the linear kernel works best on two-dimensional data and the polynomial kernel works best on three-dimensional data, as shown in FIG. 3 and FIG. 4. The classification model presented here is a random forest with a maximum depth of 5; that is, the input A* is screened through at most five levels of decision rules and assigned to one of the underweight, normal and overweight classes, thereby classifying the face. FIG. 5 and FIG. 6 show the confusion matrices of the random forest classification model on 2D and 3D data, and FIG. 7 shows the results of the random forest classification model with different kernel functions.
Performing regression and classification prediction: after the feature values are obtained, regression prediction is performed with a pre-trained model such as support vector machine regression, Gaussian process regression or least squares regression, and the result is the measured BMI (a specific BMI value). For classification, a pre-trained random forest, naive Bayes or support vector machine classification model is selected for prediction, and the result is the BMI classification (i.e. which obesity grade the subject belongs to).
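The text leaves the final choice of regressor and classifier open; the sketch below shows how the alternatives it names could be swapped in with scikit-learn, using clearly labeled synthetic placeholder data (not real measurements) only to keep the example self-contained.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.linear_model import LinearRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC, SVR

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 136))            # placeholder feature vectors, not real data
y_bmi = rng.uniform(16.0, 32.0, size=200)  # placeholder BMI values, not real data
y_cls = np.digitize(y_bmi, [18.5, 24.0])   # 0: underweight, 1: normal, 2: overweight

regressors = {"SVR (linear kernel)": SVR(kernel="linear"),
              "Gaussian process regression": GaussianProcessRegressor(),
              "least squares regression": LinearRegression()}
classifiers = {"random forest": RandomForestClassifier(max_depth=5),
               "naive Bayes": GaussianNB(),
               "SVM classification": SVC(kernel="linear")}

for name, model in {**regressors, **classifiers}.items():
    target = y_bmi if name in regressors else y_cls
    model.fit(X, target)
    print(f"{name}: fitted")
```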
The above-mentioned embodiments express only several embodiments of the present invention, and although their description is relatively specific and detailed, they should not be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and modifications without departing from the inventive concept, and these fall within the scope of the present invention. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (6)

1. A body mass index prediction method based on facial feature points, characterized in that a face photograph is associated with its corresponding body mass index category and a regression model of the body mass index is established; the feature values extracted from the face photographs and the corresponding BMI categories are taken as data, and a sufficient quantity of such data is collected to form a training data set and a test data set; the training data set is fed into the regression model for training, the regression relationship between the facial features and the body mass index category is obtained from the training data set, the trained regression relationship is applied to the test data set, and the performance is checked; if the verification meets the performance requirement, the corresponding body mass index category can be obtained simply by photographing a face and feeding it into the trained model.
2. The method for predicting the body mass index based on facial feature points according to claim 1, wherein the extraction of the facial feature values comprises the following specific steps:
1) facial feature points of the five sense organs and the cheek contour are acquired through a camera and a model, and the 2D (X, Y) coordinates of the feature points are recorded as 2-dimensional data, or their 3D (X, Y, Z) coordinates as 3-dimensional data;
2) from each dimension of the acquired facial feature point coordinates, the mean value of all feature points in that dimension is subtracted, and the result is divided by the nose length or the distance between the inner corners of the two eyes to complete normalization, giving the normalized facial feature point values; the normalized facial feature point values are then corrected against a template selected from the face data, giving the corrected facial feature point values;
3) the facial feature point values processed in step 2) are combined to obtain the feature values.
3. The method for predicting the body mass index based on facial feature points according to claim 2, wherein in step 3), the cheekbone and jawbone widths, the face length-to-area ratio and the lower-face-to-face-height ratio are computed from the corrected facial feature point values obtained in step 2), and one or more of them are selected in combination as the feature values; or the data matrix of the facial feature point values is used directly as the feature values.
4. The method for predicting a body mass index based on facial feature points according to any one of claims 1 to 3, wherein the body mass index categories are: BMI < 18.5 (underweight), 18.5 ≤ BMI ≤ 24 (normal) and BMI > 24 (overweight); when the classification model is used, the facial feature values and the corresponding BMI category are taken as data, and the ratio of the training data set to the test data set is 9:1.
5. The method for predicting the body mass index based on facial feature points according to claim 2, wherein the regression model is a support vector machine regression; a linear kernel is selected for the two-dimensional data model and a polynomial kernel for the three-dimensional data model.
6. The method for predicting the body mass index based on facial feature points according to claim 2, wherein the template is a piece of face data selected from the data set; once that face data has been selected as the training template, all face data are corrected using it; the correction consists of storing the normalized feature point values of the face to be corrected as a matrix A, computing the pseudo-inverse of A and multiplying it by the pre-stored point matrix B of the template face to obtain the relation ω, and finally multiplying A by ω to obtain the corrected face data matrix A*.
CN202110370398.8A 2021-04-07 2021-04-07 Body mass index prediction method based on facial feature points Pending CN112836904A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110370398.8A CN112836904A (en) 2021-04-07 2021-04-07 Body mass index prediction method based on facial feature points

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110370398.8A CN112836904A (en) 2021-04-07 2021-04-07 Body mass index prediction method based on facial feature points

Publications (1)

Publication Number Publication Date
CN112836904A true CN112836904A (en) 2021-05-25

Family

ID=75930687

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110370398.8A Pending CN112836904A (en) 2021-04-07 2021-04-07 Body quality index prediction method based on face characteristic points

Country Status (1)

Country Link
CN (1) CN112836904A (en)


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
RU2012126505A (en) * 2012-06-25 2013-12-27 Федеральное государственное бюджетное учреждение "Уральский научно-исследовательский институт охраны материнства и младенчества" Министерства здравоохранения и социального развития Российской Федерации (ФГБУ "НИИ ОММ" Минздравсоцразвития России) METHOD FOR PREDICTING THE DEVELOPMENT OF Bronchopulmonary Dysplasia In Preterm Babies With Extremely Low Body Weight During Birth
CN103593870A (en) * 2013-11-12 2014-02-19 杭州摩图科技有限公司 Picture processing device and method based on human faces
CN107688827A (en) * 2017-08-24 2018-02-13 西安交通大学 A kind of user identity attribute forecast method based on user's daily behavior feature
CN107644203A (en) * 2017-09-12 2018-01-30 江南大学 A kind of feature point detecting method of form adaptive classification
CN109390056A (en) * 2018-11-05 2019-02-26 平安科技(深圳)有限公司 Health forecast method, apparatus, terminal device and computer readable storage medium
CN111985477A (en) * 2020-08-27 2020-11-24 平安科技(深圳)有限公司 Monocular camera-based animal body online claims checking method and device and storage medium

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113591704A (en) * 2021-07-30 2021-11-02 四川大学 Body mass index estimation model training method and device and terminal equipment
CN113591704B (en) * 2021-07-30 2023-08-08 四川大学 Body mass index estimation model training method and device and terminal equipment
CN117316455A (en) * 2023-10-10 2023-12-29 尚氏(广东)大数据服务有限公司 Apparatus and method for BMI data analysis and computer storage medium

Similar Documents

Publication Publication Date Title
US8958607B2 (en) Liveness detection
CN107590430A (en) Biopsy method, device, equipment and storage medium
CN110348543A (en) Eye fundus image recognition methods, device, computer equipment and storage medium
US9177230B2 (en) Demographic analysis of facial landmarks
CN107832802A (en) Quality of human face image evaluation method and device based on face alignment
CN112836904A (en) Body mass index prediction method based on facial feature points
US11798299B2 (en) Methods and systems for generating 3D datasets to train deep learning networks for measurements estimation
CN109829362A (en) Safety check aided analysis method, device, computer equipment and storage medium
US11127181B2 (en) Avatar facial expression generating system and method of avatar facial expression generation
CN110123331A (en) Human body body and constitution collecting method, device and storage medium
CN108596087A (en) A kind of driving fatigue degree detecting regression model based on dual network result
CN108416253A (en) Avoirdupois monitoring method, system and mobile terminal based on facial image
US20180089851A1 (en) Distance measuring device for human body features and method thereof
JP2013003706A (en) Facial-expression recognition device, method, and program
CN113313795A (en) Virtual avatar facial expression generation system and virtual avatar facial expression generation method
CN113436735A (en) Body weight index prediction method, device and storage medium based on face structure measurement
JP4481142B2 (en) Face shape classification method, face shape evaluation method, and face shape evaluation apparatus
JP5897745B2 (en) Aging analysis method and aging analyzer
KR20220072484A (en) Autism spectrum disorder evaluation method based on facial expression analysis
Liu et al. Comparison of surrogate 50th percentile human headforms to an adult male sample using three-dimensional modeling and principal component analysis
Wang et al. Facial Landmark based BMI Analysis for Pervasive Health Informatics
CN116631019B (en) Mask suitability detection method and device based on facial image
CN109770845A (en) The device and method for measuring interpupillary distance
Adibuzzaman et al. Towards in situ affect detection in mobile devices: A multimodal approach
Biagiotti et al. Predicting respirator size and fit from 2D images

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20210525