CN110084191B - Eye occlusion detection method and system - Google Patents

Eye occlusion detection method and system

Info

Publication number
CN110084191B
CN110084191B CN201910343779.XA
Authority
CN
China
Prior art keywords
eye
image
neural network
convolutional neural
feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910343779.XA
Other languages
Chinese (zh)
Other versions
CN110084191A (en)
Inventor
黄国恒
胡可
谢靓茹
黄斯彤
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong University of Technology
Original Assignee
Guangdong University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong University of Technology filed Critical Guangdong University of Technology
Priority to CN201910343779.XA
Publication of CN110084191A
Application granted
Publication of CN110084191B
Legal status: Active

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G06V40/171 Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Biomedical Technology (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Physics (AREA)
  • Evolutionary Computation (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an eye occlusion detection method and system. The method comprises the following steps: acquiring an eye region image from an acquired face image; extracting features from the eye region image by using a first convolutional neural network to calculate the eye position, extracting features from the eye region image by using a second convolutional neural network to obtain a feature map of the eye region image, and obtaining eye features from the feature map according to the calculated eye position; and performing deconvolution processing on the obtained eye features, and calculating a result indicating the eye occlusion condition from the image obtained by the deconvolution. With the method and system, the occlusion condition of the user's eyes can be detected directly from the acquired face image, so that, unlike the prior art, no tester is needed to prompt the user to cover an eye, which reduces the tester's workload.

Description

Eye occlusion detection method and system
Technical Field
The invention relates to the technical field of computer vision, and in particular to an eye occlusion detection method and system.
Background
When a vision test chart is used to test a user's eyesight, the two eyes are tested one at a time: the user must keep the eye currently under test open and cover the other eye. In the prior art, a tester has to tell the user which eye to open or cover, which imposes a considerable workload on the tester.
Disclosure of Invention
In view of the above, the invention provides an eye occlusion detection method and system that can detect the occlusion condition of a user's eyes from an acquired face image of the user, thereby reducing the tester's workload compared with the prior art.
In order to solve the technical problems, the invention provides the following technical scheme:
An eye occlusion detection method, comprising:
acquiring an eye region image from the acquired face image;
extracting features from the eye region image by using a first convolutional neural network to calculate the eye position; extracting features from the eye region image by using a second convolutional neural network to obtain a feature map of the eye region image; and obtaining eye features from the feature map according to the calculated eye position;
and performing deconvolution processing on the obtained eye features, and calculating a result indicating the eye occlusion condition from the image obtained by the deconvolution processing.
Preferably, the face image is processed by using a third convolutional neural network and a fourth convolutional neural network which are sequentially cascaded, and an eye region image is obtained from the face image;
the third convolutional neural network is used for processing the face image and computing from it a series of bounding boxes framing the face and a series of bounding boxes framing the eyes;
the fourth convolutional neural network is used for processing the face image, screening the series of face bounding boxes output by the third convolutional neural network for more accurate bounding boxes framing the face, and screening the series of eye bounding boxes output by the third convolutional neural network for more accurate bounding boxes framing the eyes.
Preferably, the third convolutional neural network comprises 1 convolutional-pooling layer, a convolutional layer and a pooling layer cascaded in sequence, and is used for generating 2 feature maps for classification, 4 feature maps for bounding-box judgment and 10 feature maps for judging facial feature points;
the fourth convolutional neural network comprises 2 convolutional-pooling layers, a pooling layer and a fully connected layer cascaded in sequence, and is used for generating 2 feature maps for classification, 4 feature maps for bounding-box judgment and 10 feature maps for judging facial feature points.
Preferably, the third convolutional neural network is specifically configured to calibrate the obtained bounding box according to the regression value of the bounding box, and the fourth convolutional neural network is specifically configured to calibrate the obtained bounding box according to the regression value of the bounding box.
Preferably, the third convolutional neural network is specifically configured to merge overlapping bounding boxes using a non-maximum suppression method, and the fourth convolutional neural network is specifically configured to merge overlapping bounding boxes using a non-maximum suppression method.
Preferably, the deconvolution processing of the obtained eye features includes: deconvolving the obtained eye features to obtain an image of the same size as the original image, and then applying a convolution with a preset sliding stride to the resulting image to obtain an image larger than the original image.
Preferably, the calculating the result indicating the eye shielding condition according to the image obtained by deconvolution processing includes:
extracting features from the image obtained by deconvolution to obtain feature vectors for describing the features of the image;
inputting the obtained feature vector into a pre-trained classifier, and outputting a result of whether eyes are blocked or not by the classifier.
An eye occlusion detection system for performing the above eye occlusion detection method.
In the eye occlusion detection method and system of the invention, an eye region image is first acquired from an obtained face image; a first convolutional neural network then extracts features from the eye region image to calculate the eye position, while a second convolutional neural network extracts features from the eye region image to obtain its feature map, from which eye features are taken according to the calculated eye position; the obtained eye features are then deconvolved, and a result indicating the eye occlusion condition is calculated from the image obtained by the deconvolution. Because the occlusion condition of the user's eyes can thus be detected directly from the acquired face image, no tester is needed to prompt the user to cover an eye, which reduces the tester's workload compared with the prior art.
Drawings
In order to illustrate the embodiments of the invention or the technical solutions in the prior art more clearly, the drawings required by the embodiments or by the description of the prior art are briefly introduced below. It is obvious that the following drawings show only some embodiments of the invention, and that a person skilled in the art could obtain other drawings from them without inventive effort.
Fig. 1 is a flowchart of an eye occlusion detection method according to an embodiment of the present invention;
FIG. 2 is a flowchart of a process for acquiring an eye region image from a facial image in accordance with an embodiment of the present invention;
fig. 3 is a flowchart of a process for obtaining an eye feature from an eye region image in accordance with an embodiment of the present invention.
Detailed Description
In order to make the technical solution of the present invention better understood by those skilled in the art, the technical solution of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are only some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the present invention without making any inventive effort, shall fall within the scope of the present invention.
Referring to fig. 1, fig. 1 is a flowchart of an eye occlusion detection method according to an embodiment of the invention; as can be seen from the figure, the method of this embodiment includes the following steps:
s10: an eye region image is acquired from the acquired face image.
For a face image of a user obtained by photographing, an eye region image is acquired from the obtained face image. Referring to fig. 2, fig. 2 is a flowchart illustrating a process of acquiring an eye region image from a face image according to the present embodiment, and the present embodiment may use a third convolutional neural network 30 and a fourth convolutional neural network 31, which are cascaded in sequence, to process the face image to acquire the eye region image from the face image.
Specifically, the third convolutional neural network 30 is configured to perform an operation on the face image, and calculate a series of bounding boxes for framing the face and a series of bounding boxes for framing eyes according to the face image; the fourth convolutional neural network 31 is configured to perform operation processing on the facial image, screen a more accurate bounding box for framing the face according to a series of bounding boxes for framing the face output by the third convolutional neural network, and screen a more accurate bounding box for framing the eyes according to a series of bounding boxes for framing the eyes output by the third convolutional neural network.
Specifically, the third convolutional neural network 30 generates pictures with different sizes according to different scaling ratios of the input pictures to form a feature pyramid of the pictures, and then performs operation processing on the pictures with different sizes to calculate a series of bounding boxes for framing faces and a series of bounding boxes for framing eyes. In practical application, the third convolutional neural network can also calculate a bounding box for framing other facial feature points according to the input facial image, such as a nose part, a left-mouth corner part, a right-mouth corner part and the like.
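As an illustration of the feature-pyramid step above, the scales at which the input picture is resized can be computed as follows. The 12x12 network input, the minimum detectable face size and the 0.709 scaling factor are assumptions taken from common MTCNN practice; the embodiment itself only says "different scaling ratios".

```python
def pyramid_scales(min_side, min_face=12, factor=0.709):
    """Return the list of scales used to build the image pyramid.

    min_side:  shorter side of the input image, in pixels.
    min_face:  smallest face size to detect (assumed 12, the P-Net input).
    factor:    per-level shrink ratio (0.709 is the conventional MTCNN
               value and an assumption here).
    """
    scales = []
    m = 12.0 / min_face          # scale at which min_face maps to 12 px
    side = min_side * m
    while side >= 12:            # stop once the image is below the net input
        scales.append(m)
        m *= factor
        side *= factor
    return scales
```

Each scale yields one resized copy of the picture, and the third convolutional neural network is run on every copy so that faces of different sizes all pass through the same small detector.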
The third convolutional neural network 30 is specifically used for calibrating the obtained bounding boxes according to their regression values. The regression value of a bounding box characterizes the probability that the image inside it contains the feature to be framed; the obtained bounding boxes can therefore be calibrated against their regression values, eliminating incorrect bounding boxes and those with only a small probability of containing the feature.
The third convolutional neural network 30 is also specifically configured to merge overlapping bounding boxes using non-maximum suppression, thus excluding the overlapping bounding boxes therein to obtain a more accurate bounding box that frames the feature region. The method for merging bounding boxes by using the non-maximum suppression method specifically comprises the following steps:
s20: sequentially arranging all the obtained bounding boxes from high to low according to the scores, and selecting the bounding box with the highest score, wherein the score of the bounding box is the regression value of the bounding box;
s21: traversing the rest boundary frames except the boundary frame with the highest score, and deleting the boundary frame with the highest score if the overlapping area of the boundary frame and the boundary frame with the highest score is larger than a preset threshold value;
s22: the bounding box with the highest score is reselected from the remaining bounding boxes, and if only one bounding box remains, the bounding box is output, and if more than one bounding box remains, the process proceeds to step S21.
The fourth convolutional neural network 31 processes the face image and refines, within the network, the face bounding boxes and the eye bounding boxes output by the third convolutional neural network. It is specifically configured to calibrate the obtained bounding boxes according to their regression values, eliminating incorrect bounding boxes and those with only a small probability of containing the feature, and to merge overlapping bounding boxes using non-maximum suppression, thereby excluding the overlapping boxes and obtaining more accurate bounding boxes framing the feature regions. The merging of overlapping bounding boxes by non-maximum suppression proceeds as described above.
In a specific example, the third convolutional neural network comprises 1 convolutional-pooling layer, a convolutional layer and a pooling layer cascaded in sequence and is used for generating 2 feature maps for classification, 4 feature maps for bounding-box judgment and 10 feature maps for judging facial feature points; the fourth convolutional neural network comprises 2 convolutional-pooling layers, a pooling layer and a fully connected layer cascaded in sequence and is used for generating the same three groups of feature maps. A Multi-task Cascaded Convolutional Network (MTCNN) may be employed to detect faces and facial feature point locations, with the P-Net of the MTCNN model serving as the third convolutional neural network and the R-Net serving as the fourth convolutional neural network.
The input face image is processed by a third convolutional neural network and a fourth convolutional neural network, and finally the output image marks a boundary frame for framing the face and a boundary frame for framing eyes.
S11: extracting features from the eye region image by using a first convolutional neural network, calculating an eye position, extracting features from the eye region image by using a second convolutional neural network, obtaining a feature map of the eye region image, and obtaining eye features from the feature map according to the obtained eye position.
Referring to fig. 3, fig. 3 is a flowchart of the process of obtaining eye features from the eye region image in this embodiment. The eye region image is input into the first convolutional neural network 40, which detects the eye position to obtain the precise location of the eyes, and into the second convolutional neural network 41, which produces a feature map. In this method, the deeper second convolutional neural network produces a rough segmentation result map, i.e. an attention map, giving the approximate position of the eyes; the shallower first convolutional neural network then only needs to attend to that approximate position to predict the fine position, ignoring the other parts of the image, which reduces the learning difficulty.
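The step of "obtaining eye features from the feature map according to the obtained eye position" can be sketched as a coordinate mapping and slice. The way the image-space eye box is rescaled to feature-map coordinates is an assumption about step S11, since the embodiment does not spell out the mapping.

```python
import numpy as np

def crop_eye_features(feature_map, eye_box, image_size):
    """Slice out the eye features from a CNN feature map.

    feature_map: (C, H, W) array from the second convolutional network.
    eye_box:     (x1, y1, x2, y2) eye position in image pixels, as given
                 by the first convolutional network.
    image_size:  (width, height) of the original eye region image.
    The floor/ceil rounding and the linear rescaling are assumptions."""
    c, fh, fw = feature_map.shape
    img_w, img_h = image_size
    sx, sy = fw / img_w, fh / img_h          # image -> feature-map scale
    x1 = int(np.floor(eye_box[0] * sx))
    y1 = int(np.floor(eye_box[1] * sy))
    x2 = int(np.ceil(eye_box[2] * sx))
    y2 = int(np.ceil(eye_box[3] * sy))
    return feature_map[:, y1:y2, x1:x2]
```

A 64x64 eye region image with a 16x16 feature map, for instance, maps an eye box of (16, 16, 32, 32) pixels onto a 4x4 patch of every feature channel.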
S12: and performing deconvolution processing on the obtained eye characteristics, and calculating a result indicating the eye shielding condition according to the image obtained by the deconvolution processing.
Preferably, the deconvolution processing of the obtained eye features includes: deconvolving the obtained eye features to obtain an image of the same size as the original image, and then applying a convolution with a preset sliding stride to the resulting image to obtain an image larger than the original image. Processing an image with a convolutional neural network loses some of the features the image contains; the upsampling deconvolution enlarges the image and enriches it by filling in image content.
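The size bookkeeping of this upsampling step can be checked with the standard output-size formula for a transposed convolution (deconvolution). The kernel, stride and padding values below are illustrative assumptions, not values given by the embodiment.

```python
def deconv_output_size(n, kernel, stride, padding=0):
    """Side length produced by a transposed convolution on an n x n input:
    out = (n - 1) * stride - 2 * padding + kernel."""
    return (n - 1) * stride - 2 * padding + kernel

# Assumed example: a 16x16 eye-feature map is deconvolved back to the
# 64x64 original size, then a second deconvolution enlarges it beyond
# the original, matching the two stages described in the embodiment.
restored = deconv_output_size(16, kernel=4, stride=4)            # 64
enlarged = deconv_output_size(restored, kernel=4, stride=2, padding=1)  # 128
```

Working the formula forwards like this is a quick way to pick kernel/stride/padding combinations that land exactly on the original image size before the final enlargement.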
Further, calculating the result indicating the eye occlusion condition from the image obtained by the deconvolution includes: first extracting features from the deconvolved image to obtain a feature vector describing its features, then inputting that feature vector into a pre-trained classifier, which outputs whether the eyes are occluded. The pre-trained classifier calculates the probability that the eyes are occluded from the input feature vector and outputs the occlusion judgment according to that probability.
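The final decision stage can be sketched with a minimal stand-in for the pre-trained classifier. The logistic form, the weights and the 0.5 decision threshold are all illustrative assumptions; the embodiment only requires a classifier that turns a feature vector into an occlusion probability and then a yes/no judgment.

```python
import math

def occlusion_probability(features, weights, bias):
    """Logistic score as a stand-in for the pre-trained classifier:
    maps a feature vector to a probability in (0, 1)."""
    z = sum(w * f for w, f in zip(weights, features)) + bias
    return 1.0 / (1.0 + math.exp(-z))

def is_occluded(features, weights, bias, threshold=0.5):
    """Yes/no occlusion judgment derived from the occlusion probability,
    as in the final step of the method."""
    return occlusion_probability(features, weights, bias) >= threshold
```

In deployment the weights would come from training on labelled occluded/unoccluded eye images; here they are placeholders showing only the probability-then-threshold structure.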
With the eye occlusion detection method above, the occlusion condition of the user's eyes can be detected from the acquired face image of the user, so that, unlike the prior art, no tester is needed to prompt the user to cover an eye, which reduces the tester's workload.
Correspondingly, an embodiment of the invention also provides an eye occlusion detection system for performing the eye occlusion detection method described above.
In this eye occlusion detection system, an eye region image is first acquired from an obtained face image; a first convolutional neural network extracts features from the eye region image to calculate the eye position, while a second convolutional neural network extracts features from it to obtain its feature map, from which eye features are taken according to the calculated eye position; the obtained eye features are then deconvolved, and a result indicating the eye occlusion condition is calculated from the resulting image. The system can thus detect the occlusion condition of the user's eyes directly from the acquired face image, so that no tester is needed to prompt the user to cover an eye, which reduces the tester's workload compared with the prior art.
The eye occlusion detection method and system provided by the invention have been described in detail above. Specific examples were used herein to explain the principles and embodiments of the invention; this description is intended only to help in understanding the method of the invention and its core idea. It should be noted that a person skilled in the art could make various improvements and modifications to the invention without departing from its principles, and such improvements and modifications also fall within the scope of the appended claims.

Claims (6)

1. An eye occlusion detection method, characterized by comprising the following steps:
acquiring an eye region image from the acquired face image;
extracting features from the eye region image by using a first convolutional neural network to calculate the eye position; extracting features from the eye region image by using a second convolutional neural network to obtain a feature map of the eye region image; and obtaining eye features from the feature map according to the calculated eye position, wherein a deep network of the second convolutional neural network obtains a rough segmentation result map, i.e. an attention map, giving the approximate position of the eyes, and a shallow network of the first convolutional neural network then only needs to attend to that approximate position to predict the fine position, without attending to other parts of the image;
performing deconvolution processing on the obtained eye features and calculating a result indicating the eye occlusion condition from the image obtained by the deconvolution, wherein the deconvolution processing of the obtained eye features comprises: deconvolving the obtained eye features to obtain an image of the same size as the original image, and applying a convolution with a preset sliding stride to the resulting image to obtain an image larger than the original image;
the calculation of the result indicating the eye shielding condition according to the image obtained by deconvolution processing comprises the following steps: extracting features from the image obtained by deconvolution to obtain feature vectors for describing the features of the image; inputting the obtained feature vector into a pre-trained classifier, and outputting a result of whether eyes are blocked or not by the classifier.
2. The eye occlusion detection method according to claim 1, wherein the face image is processed using a third convolutional neural network and a fourth convolutional neural network cascaded in sequence, and the eye region image is acquired from the face image;
the third convolutional neural network is used for processing the face image and computing from it a series of bounding boxes framing the face and a series of bounding boxes framing the eyes;
the fourth convolutional neural network is used for processing the face image, screening the series of face bounding boxes output by the third convolutional neural network for more accurate bounding boxes framing the face, and screening the series of eye bounding boxes output by the third convolutional neural network for more accurate bounding boxes framing the eyes.
3. The eye occlusion detection method according to claim 2, wherein the third convolutional neural network comprises 1 convolutional-pooling layer, a convolutional layer and a pooling layer cascaded in sequence, and is used for generating 2 feature maps for classification, 4 feature maps for bounding-box judgment and 10 feature maps for judging facial feature points;
the fourth convolutional neural network comprises 2 convolutional-pooling layers, a pooling layer and a fully connected layer cascaded in sequence, and is used for generating 2 feature maps for classification, 4 feature maps for bounding-box judgment and 10 feature maps for judging facial feature points.
4. The eye occlusion detection method according to claim 2, wherein the third convolutional neural network is specifically configured to calibrate the obtained bounding box according to a regression value of the bounding box, and the fourth convolutional neural network is specifically configured to calibrate the obtained bounding box according to a regression value of the bounding box.
5. The eye occlusion detection method of claim 2, wherein the third convolutional neural network is specifically configured to merge overlapping bounding boxes using a non-maximum suppression method, and the fourth convolutional neural network is specifically configured to merge overlapping bounding boxes using a non-maximum suppression method.
6. An eye occlusion detection system for performing the eye occlusion detection method of any of claims 1-5.
CN201910343779.XA 2019-04-26 2019-04-26 Eye shielding detection method and system Active CN110084191B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910343779.XA CN110084191B (en) 2019-04-26 2019-04-26 Eye shielding detection method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910343779.XA CN110084191B (en) 2019-04-26 2019-04-26 Eye shielding detection method and system

Publications (2)

Publication Number Publication Date
CN110084191A CN110084191A (en) 2019-08-02
CN110084191B true CN110084191B (en) 2024-02-23

Family

ID=67416957

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910343779.XA Active CN110084191B (en) 2019-04-26 2019-04-26 Eye shielding detection method and system

Country Status (1)

Country Link
CN (1) CN110084191B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20210150751A1 (en) * 2019-11-14 2021-05-20 Nec Laboratories America, Inc. Occlusion-aware indoor scene analysis
CN112929638B (en) * 2019-12-05 2023-12-15 北京芯海视界三维科技有限公司 Eye positioning method and device and multi-view naked eye 3D display method and device
CN111598018A (en) * 2020-05-19 2020-08-28 北京嘀嘀无限科技发展有限公司 Wearing detection method, device, equipment and storage medium for face shield

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170262695A1 (en) * 2016-03-09 2017-09-14 International Business Machines Corporation Face detection, representation, and recognition
CN107633204A (en) * 2017-08-17 2018-01-26 平安科技(深圳)有限公司 Face occlusion detection method, apparatus and storage medium
CN107784300A (en) * 2017-11-30 2018-03-09 西安科锐盛创新科技有限公司 Anti- eye closing photographic method and its system
CN107871134A (en) * 2016-09-23 2018-04-03 北京眼神科技有限公司 A kind of method for detecting human face and device
CN109344763A (en) * 2018-09-26 2019-02-15 汕头大学 A kind of strabismus detection method based on convolutional neural networks
CN109657591A (en) * 2018-12-12 2019-04-19 东莞理工学院 Face recognition method and device based on concatenated convolutional neural network

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107240102A (en) * 2017-04-20 2017-10-10 合肥工业大学 Malignant tumour area of computer aided method of early diagnosis based on deep learning algorithm

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170262695A1 (en) * 2016-03-09 2017-09-14 International Business Machines Corporation Face detection, representation, and recognition
CN107871134A (en) * 2016-09-23 2018-04-03 北京眼神科技有限公司 A kind of method for detecting human face and device
CN107633204A (en) * 2017-08-17 2018-01-26 平安科技(深圳)有限公司 Face occlusion detection method, apparatus and storage medium
CN107784300A (en) * 2017-11-30 2018-03-09 西安科锐盛创新科技有限公司 Anti- eye closing photographic method and its system
CN109344763A (en) * 2018-09-26 2019-02-15 汕头大学 A kind of strabismus detection method based on convolutional neural networks
CN109657591A (en) * 2018-12-12 2019-04-19 东莞理工学院 Face recognition method and device based on concatenated convolutional neural network

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Real-time image-based driver fatigue detection and monitoring system for monitoring driver vigilance; Xinxing Tang et al.; 2016 35th Chinese Control Conference (CCC); 4188-4193 *
Advances in facial expression recognition for human-computer interaction; Xue Yuli et al.; Journal of Image and Graphics; Vol. 14, No. 5; 764-772 *
Face detection and super-resolution processing in images; Liu Huanxi; China Master's Theses Full-text Database (Information Science and Technology); I138-432 *

Also Published As

Publication number Publication date
CN110084191A (en) 2019-08-02

Similar Documents

Publication Publication Date Title
CN110084191B (en) Eye shielding detection method and system
CN110399929B (en) Fundus image classification method, fundus image classification apparatus, and computer-readable storage medium
CN109613002B (en) Glass defect detection method and device and storage medium
CN106803067B (en) Method and device for evaluating quality of face image
US11610289B2 (en) Image processing method and apparatus, storage medium, and terminal
CN109671058B (en) Defect detection method and system for large-resolution image
CN110619618A (en) Surface defect detection method and device and electronic equipment
CN110111316B (en) Method and system for identifying amblyopia based on eye images
CN108664840A (en) Image-recognizing method and device
EP2202687A2 (en) Image quality evaluation device and method
CN106530271B (en) A kind of infrared image conspicuousness detection method
CN110458790A (en) A kind of image detecting method, device and computer storage medium
US20140240556A1 (en) Image processing apparatus and image processing method
CN110363753A (en) Image quality measure method, apparatus and electronic equipment
CN112308797B (en) Corner detection method and device, electronic equipment and readable storage medium
RU2697627C1 (en) Method of correcting illumination of an object on an image in a sequence of images and a user's computing device which implements said method
CN115578614B (en) Training method of image processing model, image processing method and device
CN111626379B (en) X-ray image detection method for pneumonia
CN115138059A (en) Pull-up standard counting method, pull-up standard counting system and storage medium of pull-up standard counting system
Roziere et al. Tarsier: Evolving noise injection in super-resolution gans
CN115641641A (en) Motion recognition model training method and device and motion recognition method and device
CN114742774B (en) Non-reference image quality evaluation method and system integrating local and global features
US20170091955A1 (en) State determination device, eye closure determination device, state determination method, and storage medium
KR101509991B1 (en) Skin texture measurement method and apparatus
CN113506260B (en) Face image quality assessment method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant