WO2019061659A1 - Face image glasses removal method, device and storage medium - Google Patents

Face image glasses removal method, device and storage medium (人脸图像眼镜去除方法、装置及存储介质)

Info

Publication number
WO2019061659A1
WO2019061659A1 (PCT/CN2017/108758 · CN2017108758W)
Authority
WO
WIPO (PCT)
Prior art keywords
image
glasses
area
face image
pixel point
Prior art date
Application number
PCT/CN2017/108758
Other languages
English (en)
French (fr)
Inventor
戴磊
Original Assignee
平安科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 平安科技(深圳)有限公司
Publication of WO2019061659A1 publication Critical patent/WO2019061659A1/zh

Links

Images

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/161: Detection; Localisation; Normalisation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00: 2D [Two Dimensional] image generation
    • G06T 11/40: Filling a planar surface by adding surface attributes, e.g. colour or texture
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/168: Feature extraction; Face representation
    • G06V 40/171: Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships

Definitions

  • the present application relates to the field of computer vision processing technologies, and in particular, to a method and apparatus for removing facial image glasses and a computer readable storage medium.
  • The face image glasses removal scheme currently adopted in the industry uses two-dimensional generalized principal component analysis, which trains a feature space on glasses-free face images and reconstructs the face wearing glasses.
  • The glasses occlusion area is extracted by comparison with the input original face image, and the image is then error-compensated by error iteration to synthesize the final glasses-free face image.
  • This method works well for input images close to the training images, but it requires a certain amount of time and a certain number of pictures for training; for input images that differ greatly from the training images, although the glasses in the face image are eliminated,
  • the damage to the facial features is serious, so that accurate face recognition cannot be performed.
  • The present application provides a method, a device, and a computer readable storage medium for removing glasses from face images, the main purpose of which is to remove the glasses in a face image without destroying the original facial features in the face image, generating a glasses-free face image and improving the recognition rate of face recognition.
  • The present application provides an electronic device, including a memory, a processor, and a camera device, wherein the memory includes a face image glasses removal program which, when executed by the processor, implements the following steps:
  • a real-time image acquisition step: acquiring a real-time image captured by the camera device, and extracting a face image from the real-time image by using a face recognition algorithm;
  • an image preprocessing step: normalizing the face image, and performing face pose correction using an affine transformation to obtain a frontal face image;
  • a glasses area determining step: determining, by performing binarization processing and edge detection on the frontal face image, whether the frontal face image includes a glasses area; and
  • a glasses removing step: determining the glasses area in the frontal face image, searching for pixel points around the glasses area in the frontal face image to fill the glasses area, and obtaining a face image with the glasses removed.
  • the glasses removing step comprises:
  • the glasses area determining step comprises:
  • a binarization processing step: converting the frontal face image into a grayscale image, and performing binarization processing on the grayscale image to obtain a binarized image;
  • an edge detecting step: performing edge detection on the grayscale image to obtain an edge image, and performing an area filling operation on the edge image to obtain an edge-filled image;
  • a glasses area determining step: determining the glasses area in the frontal face image according to a preset glasses area determination rule.
  • the glasses area determining step comprises:
  • a rectangular approximation operation is performed on the to-be-determined glasses area to obtain a minimum rectangle including the to-be-determined glasses area as the glasses area of the frontal face image.
  • the present application further provides a method for removing a face image glasses, the method comprising:
  • a real-time image acquisition step: acquiring a real-time image captured by the camera device, and extracting a face image from the real-time image by using a face recognition algorithm;
  • an image preprocessing step: normalizing the face image, and performing face pose correction using an affine transformation to obtain a frontal face image;
  • a glasses area determining step: determining, by performing binarization processing and edge detection on the frontal face image, whether the frontal face image includes a glasses area; and
  • a glasses removing step: determining the glasses area in the frontal face image, searching for pixel points around the glasses area in the frontal face image to fill the glasses area, and obtaining a face image with the glasses removed.
  • the glasses removing step comprises:
  • the glasses area determining step comprises:
  • a binarization processing step: converting the frontal face image into a grayscale image, and performing binarization processing on the grayscale image to obtain a binarized image;
  • an edge detecting step: performing edge detection on the grayscale image to obtain an edge image, and performing an area filling operation on the edge image to obtain an edge-filled image;
  • a glasses area determining step: determining the glasses area in the frontal face image according to a preset glasses area determination rule.
  • the glasses area determining step comprises:
  • a rectangular approximation operation is performed on the to-be-determined glasses area to obtain a minimum rectangle including the to-be-determined glasses area as the glasses area of the frontal face image.
  • The present application further provides a computer readable storage medium including a face image glasses removal program which, when executed by a processor, implements any of the steps of the face image glasses removal method described above.
  • The face image glasses removal method, electronic device and computer readable storage medium proposed by the present application first obtain two images by performing binarization processing and edge detection on the face image, then determine the glasses area according to the position and area of the overlapping regions of the two images, and finally search the face image for the pixel point information around the glasses area and use it to replace the pixel point information of the glasses area, thereby obtaining a face image from which the glasses are removed. In this way, model training time is saved, and the glasses in the face image are effectively removed without destroying the original facial features in the face image.
  • FIG. 1 is a schematic diagram of an application environment of a preferred embodiment of the face image glasses removal method of the present application;
  • FIG. 2 is a block diagram of the face image glasses removal program in FIG. 1;
  • FIG. 3 is a flowchart of a preferred embodiment of the face image glasses removal method of the present application;
  • FIG. 4 is a detailed flowchart of step S30 in the face image glasses removal method of the present application;
  • FIG. 5 is a detailed flowchart of step S40 in the face image glasses removal method of the present application.
  • The present application provides a face image glasses removal method, which is applied to an electronic device 1.
  • Referring to FIG. 1, it is a schematic diagram of an application environment of a preferred embodiment of the face image glasses removal method of the present application.
  • The electronic device 1 may be a terminal device with computing functions on which a face image glasses removal program is installed, such as a rack server, a blade server, a tower server, a cabinet server, a smartphone, a tablet computer, a portable computer, or a desktop computer.
  • the electronic device 1 includes a memory 11, a processor 12, an imaging device 13, a network interface 14, and a communication bus 15.
  • the memory 11 includes at least one type of readable storage medium.
  • the at least one type of readable storage medium may be a non-volatile storage medium such as a flash memory, a hard disk, a multimedia card, a card type memory (eg, SD or DX memory, etc.), a magnetic memory, a magnetic disk, an optical disk, or the like.
  • the memory 11 may be an internal storage unit of the electronic device 1, such as a hard disk of the electronic device 1.
  • the memory 11 may also be an external storage device of the electronic device 1, such as a plug-in hard disk, a smart memory card (SMC), a Secure Digital (SD) card, or a flash card equipped on the electronic device 1.
  • The readable storage medium of the memory 11 is generally used to store the face image glasses removal program 10 installed in the electronic device 1, various types of data, and the like.
  • the memory 11 can also be used to temporarily store data that has been output or is about to be output.
  • the processor 12, in some embodiments, may be a Central Processing Unit (CPU), microprocessor or other data processing chip for running program code or processing data stored in the memory 11, such as executing the face image glasses removal program 10.
  • the imaging device 13 may be part of the electronic device 1 or may be independent of the electronic device 1.
  • In some embodiments, the electronic device 1 is a terminal device having a camera, such as a smartphone, a tablet computer or a portable computer, and the camera device 13 is then the camera of the electronic device 1.
  • In other embodiments, the electronic device 1 may be a server, and the camera device 13 is connected to the electronic device 1 via a network; for example, the camera device 13 may be installed in a specific place, such as an office or a monitored area, capture real-time images of targets entering that place in real time, and transmit the captured real-time images to the processor 12 through the network.
  • the network interface 14 can optionally include a standard wired interface, a wireless interface (such as a WI-FI interface), and is typically used to establish a communication connection between the electronic device 1 and other electronic devices.
  • Communication bus 15 is used to implement connection communication between these components.
  • Figure 1 shows only the electronic device 1 with components 11-15, but it should be understood that not all illustrated components may be implemented, and more or fewer components may be implemented instead.
  • the electronic device 1 may further include a user interface, and the user interface may include an input unit such as a keyboard, etc., optionally, the user interface may further include a standard wired interface and a wireless interface.
  • the electronic device 1 may further include a display, which may also be appropriately referred to as a display screen or a display unit.
  • it may be an LED display, a liquid crystal display, a touch liquid crystal display, an Organic Light-Emitting Diode (OLED) touch display, or the like.
  • the display is used to display information processed in the electronic device 1 and a user interface for displaying visualizations.
  • the human face image glasses removal program 10 is stored in the memory 11.
  • When the processor 12 executes the face image glasses removal program 10 stored in the memory 11, the following steps are implemented:
  • a real-time image acquisition step: acquiring a real-time image captured by the camera device 13, and extracting a face image from the real-time image by using a face recognition algorithm;
  • an image preprocessing step: normalizing the face image, and performing face pose correction using an affine transformation to obtain a frontal face image;
  • a glasses area determining step: determining, by performing binarization processing and edge detection on the frontal face image, whether the frontal face image includes a glasses area; and
  • a glasses removing step: determining the glasses area in the frontal face image, searching for pixel points around the glasses area in the frontal face image to fill the glasses area, and obtaining a face image with the glasses removed.
  • When the camera device 13 captures a real-time image, it transmits the real-time image to the processor 12, which then extracts the face image from it using a face recognition algorithm.
  • The face recognition algorithm for extracting the face image from the real-time image may be a geometric-feature-based method, a local feature analysis method, an eigenface method, an elastic-model-based method, a neural network method, or the like.
  • Owing to the limitations and random interference of the acquisition conditions, captured images often suffer from noise and low contrast.
  • Moreover, the shooting distance, focal length and the like make the size and position of the face within the image uncertain.
  • Face correction, face image enhancement and normalization are therefore performed on the face image. Their main purpose is to eliminate irrelevant information in the image, filter out interference and noise, restore useful real information, enhance the detectability of relevant information and simplify the data, thereby improving the reliability of feature analysis.
  • The purpose of face correction is to obtain a frontal face image with a correct face position.
  • The commonly used method of face correction is to perform pose correction on the face in the face image by means of an affine transformation.
  • Face pose correction is a mature technique and will not be described here.
  • Image enhancement improves the quality of the face image, not only making the image visually clearer but also making it more amenable to computer processing and recognition.
  • The goal of normalization is to obtain standardized frontal face images of the same size and with the same range of gray values.
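  • The affine pose correction and normalization just described can be sketched as follows. This is a minimal illustration, not the application's implementation: it assumes the two eye centers have already been located (e.g. by a landmark detector), and the target eye positions (30% and 70% of the width, 35% of the height) and the 128-pixel output size are illustrative choices.

```python
import numpy as np

def eye_align_matrix(left_eye, right_eye, out_size=128):
    """2x3 affine (similarity) matrix that rotates and scales the face so
    the eyes land on fixed, horizontal positions in an out_size x out_size
    normalized image. The target eye positions are illustrative assumptions."""
    lx, ly = left_eye
    rx, ry = right_eye
    dx, dy = rx - lx, ry - ly
    dist = np.hypot(dx, dy)                  # current inter-eye distance
    scale = (0.4 * out_size) / dist          # target inter-eye distance: 0.4 * width
    cos, sin = dx / dist, dy / dist          # rotation that levels the eye line
    a, b = scale * cos, scale * sin
    M = np.array([[a, b, 0.0],
                  [-b, a, 0.0]])
    tx, ty = M[:, :2] @ [lx, ly]             # where the left eye would land
    M[0, 2] = 0.30 * out_size - tx           # translate it onto the target spot
    M[1, 2] = 0.35 * out_size - ty
    return M
```

The resulting matrix would typically be applied with an affine warp (e.g. cv2.warpAffine) to produce the normalized frontal face image.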
  • the glasses area determining step includes the following refinement steps:
  • a binarization processing step: converting the frontal face image into a grayscale image, and performing binarization processing on the grayscale image to obtain a binarized image;
  • an edge detecting step: performing edge detection on the grayscale image to obtain an edge image, and performing an area filling operation on the edge image to obtain an edge-filled image;
  • a projection step: projecting the binarized image onto the edge-filled image to obtain the overlapping regions of the binarized image and the edge-filled image; and
  • a glasses area determining step: determining the glasses area in the frontal face image according to a preset glasses area determination rule.
  • Image binarization is a necessary image preprocessing step prior to image analysis, feature extraction and pattern recognition; its goal is to make the portions of interest in the image stand out.
  • The normalized frontal face image A obtained by image preprocessing is converted to grayscale to obtain a grayscale image B, and the grayscale image B is then binarized; for example, with 128 as the preset grayscale threshold, all pixels with a gray value greater than or equal to 128 are set to 255 (pure white) and all pixels below 128 are set to 0 (pure black), yielding a binarized image C in which the whole image exhibits an obvious black-and-white effect.
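  • The thresholding just described (gray values greater than or equal to 128 become 255, the rest 0) can be sketched in a few lines of NumPy; an OpenCV implementation would use cv2.cvtColor and cv2.threshold instead. The luminance weights are the common Rec. 601 values, an assumption since the text does not specify the grayscale conversion:

```python
import numpy as np

def to_grayscale(bgr: np.ndarray) -> np.ndarray:
    """Standard luminance weighting (Rec. 601); grayscale image B in the text."""
    b, g, r = bgr[..., 0], bgr[..., 1], bgr[..., 2]
    return (0.114 * b + 0.587 * g + 0.299 * r).astype(np.uint8)

def binarize(gray: np.ndarray, thresh: int = 128) -> np.ndarray:
    """Binarized image C: pixels >= thresh become 255 (white), the rest 0 (black)."""
    return np.where(gray >= thresh, 255, 0).astype(np.uint8)
```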
  • edge detection is performed on the grayscale image B to obtain an edge image D.
  • A so-called edge is a collection of pixels whose gray level changes sharply relative to their surroundings; it is the most basic feature of an image. Edges exist between the target, the background and different regions, and are therefore the most important basis for image segmentation. Since an edge marks a position and is not sensitive to changes in gray scale, it is also an important feature for image matching.
  • the edge detection may be implemented by a Sobel operator, a Laplace operator, a Canny operator, or the like.
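  • As an illustration of this step, a minimal Sobel-operator edge detector in plain NumPy is sketched below (a production implementation would use cv2.Sobel or cv2.Canny; the gradient-magnitude threshold of 100 is an arbitrary assumption):

```python
import numpy as np

def sobel_edges(gray: np.ndarray, thresh: int = 100) -> np.ndarray:
    """Binary edge map: Sobel gradient magnitude, thresholded at `thresh`."""
    kx = np.array([[-1, 0, 1],
                   [-2, 0, 2],
                   [-1, 0, 1]], dtype=float)   # horizontal-gradient kernel
    ky = kx.T                                  # vertical-gradient kernel
    g = gray.astype(float)
    pad = np.pad(g, 1, mode="edge")            # replicate borders
    h, w = g.shape
    mag = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            win = pad[i:i + 3, j:j + 3]        # 3x3 window around (i, j)
            gx = (win * kx).sum()
            gy = (win * ky).sum()
            mag[i, j] = np.hypot(gx, gy)
    return (mag >= thresh).astype(np.uint8) * 255
```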
  • the edge image D obtained by the edge detection is subjected to region filling to obtain an edge-filled image E.
  • the specific filling algorithm may be a hole filling algorithm, etc., and details are not described herein.
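  • One common hole-filling approach, sketched here as an assumption since the text leaves the algorithm open, floods the background from the image border and then marks every unreached background pixel as interior (cv2.floodFill or scipy.ndimage.binary_fill_holes would be off-the-shelf alternatives):

```python
from collections import deque

import numpy as np

def fill_holes(binary: np.ndarray) -> np.ndarray:
    """Fill enclosed background regions: flood the outside background (0)
    from the border; anything not reached is a hole and becomes 255."""
    h, w = binary.shape
    outside = np.zeros((h, w), dtype=bool)
    q = deque()
    # Seed the flood fill with every background pixel on the border.
    for i in range(h):
        for j in (0, w - 1):
            if binary[i, j] == 0 and not outside[i, j]:
                outside[i, j] = True
                q.append((i, j))
    for j in range(w):
        for i in (0, h - 1):
            if binary[i, j] == 0 and not outside[i, j]:
                outside[i, j] = True
                q.append((i, j))
    # BFS over 4-connected background pixels.
    while q:
        i, j = q.popleft()
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < h and 0 <= nj < w and binary[ni, nj] == 0 and not outside[ni, nj]:
                outside[ni, nj] = True
                q.append((ni, nj))
    filled = binary.copy()
    filled[~outside] = 255                     # enclosed holes become foreground
    return filled
```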
  • The binarized image C is projected onto the edge-filled image E to obtain their overlapping regions, which form several closed areas that may include the mouth, nose, eyes, eyebrows and the like of the face. Merely determining the overlapping regions in the frontal face image A therefore cannot establish whether the image includes a glasses area, so the overlapping regions must be judged according to a preset determination rule to determine the glasses area in the frontal face image A.
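  • A minimal sketch of this projection: the overlap is the per-pixel intersection of the binarized image and the edge-filled image. The polarity is an assumption here — if dark glasses frames come out black (0) in the binarized image, it would be inverted before intersecting:

```python
import numpy as np

def overlap_regions(binary_c: np.ndarray, filled_e: np.ndarray) -> np.ndarray:
    """Pixels that are foreground (255) in BOTH the binarized image C and the
    edge-filled image E; the closed regions of this map are the candidate
    mouth / nose / eye / eyebrow / glasses areas described in the text."""
    return np.where((binary_c == 255) & (filled_e == 255), 255, 0).astype(np.uint8)
```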
  • the glasses area determining step includes the following refinement steps:
  • a rectangular approximation operation is performed on the to-be-determined glasses area to obtain a minimum rectangle including the to-be-determined glasses area as the glasses area of the frontal face image.
  • the overlapping area may include the position of the mouth, nose, eyes, eyebrows, etc. of the face
  • Whether the frontal face image A includes the glasses area is determined according to the specific position of each overlapping region in the image. Because the frontal face image A has been normalized, each overlapping region can be classified as lying in the upper half or the lower half of the image according to its position in the vertical direction of the frontal face image A; the overlapping regions located in the upper half are then retained as the to-be-determined glasses area.
  • If every overlapping region is located in the lower half of the frontal face image A, or if the area of every overlapping region in the to-be-determined glasses area is less than the preset threshold, the overlapping regions are considered not to be the glasses area, that is, the frontal face image A does not include a glasses area, and the next real-time image is acquired.
  • the glasses removing step includes the following refinement steps:
  • determining the glasses area in the frontal face image, and determining the glasses frame within the glasses area; and
  • taking each pixel point of the glasses frame as a center pixel point, computing new pixel point information for the center pixel point from the pixel point information of its surrounding pixel points and replacing the original pixel point information of the center pixel point, to obtain a frontal face image with the glasses removed.
  • A simple image repair algorithm may also be adopted, which can quickly and accurately remove the glasses in the face image while retaining the detailed feature information of the human eye, improving the accuracy of face recognition.
  • The preset gray value threshold and the preset area threshold described in the foregoing embodiments are parameters that need to be set in advance; the user can set them according to the actual situation.
  • The electronic device proposed in this embodiment effectively removes the glasses in the face image while retaining the detailed features of most of the eye region, so that subsequent face recognition accuracy is high.
  • the face image glasses removal program 10 can also be partitioned into one or more modules, one or more modules being stored in the memory 11 and executed by the processor 12 to complete the application.
  • a module as referred to in this application refers to a series of computer program instructions that are capable of performing a particular function.
  • Referring to FIG. 2, it is a block diagram of the face image glasses removal program 10 of FIG. 1.
  • The face image glasses removal program 10 can be divided into: an acquisition module 110, an image processing module 120, a determination module 130 and a removal module 140. The functions or operation steps implemented by the modules 110-140 are similar to those described above and are not repeated here; exemplarily:
  • the acquiring module 110 is configured to acquire a real-time image captured by the camera, and extract a face image from the real-time image by using a face recognition algorithm;
  • the image processing module 120 is configured to perform normalization processing on the face image, perform face posture correction using affine transformation, and obtain a frontal face image;
  • the determining module 130 is configured to determine whether the front face image includes a glasses area by performing binarization processing and edge detection on the frontal face image;
  • the removing module 140 is configured to determine the glasses area in the frontal face image, and find pixel points around the glasses area in the frontal face image to fill the glasses area, obtaining a face image with the glasses removed.
  • The present application also provides a face image glasses removal method.
  • Referring to FIG. 3, it is a flowchart of a preferred embodiment of the face image glasses removal method of the present application. The method can be performed by a device, and the device can be implemented by software and/or hardware.
  • the method for removing the face image glasses includes:
  • Step S10: acquiring a real-time image captured by the camera device, and extracting a face image from the real-time image by using a face recognition algorithm;
  • Step S20: normalizing the face image, and performing face pose correction using an affine transformation to obtain a frontal face image;
  • Step S30: determining, by performing binarization processing and edge detection on the frontal face image, whether the frontal face image includes a glasses area; and
  • Step S40: determining the glasses area in the frontal face image, searching for pixel points around the glasses area in the frontal face image to fill the glasses area, and obtaining a face image with the glasses removed.
  • When the camera captures a real-time image, it transmits the real-time image to the processor, which then extracts the face image from it using a face recognition algorithm.
  • The face recognition algorithm for extracting the face image from the real-time image may be a geometric-feature-based method, a local feature analysis method, an eigenface method, an elastic-model-based method, a neural network method, or the like.
  • Owing to the limitations and random interference of the acquisition conditions, captured images often suffer from noise and low contrast.
  • Moreover, the shooting distance, focal length and the like make the size and position of the face within the image uncertain.
  • Face correction, face image enhancement and normalization are therefore performed on the face image. Their main purpose is to eliminate irrelevant information in the image, filter out interference and noise, restore useful real information, enhance the detectability of relevant information and simplify the data, thereby improving the reliability of feature analysis.
  • The purpose of face correction is to obtain a frontal face image with a correct face position.
  • The commonly used method of face correction is to perform pose correction on the face in the face image by means of an affine transformation.
  • Face pose correction is a mature technique and will not be described here.
  • Image enhancement improves the quality of the face image, not only making the image visually clearer but also making it more amenable to computer processing and recognition.
  • The goal of normalization is to obtain standardized frontal face images of the same size and with the same range of gray values.
  • Referring to FIG. 4, it is a detailed flowchart of step S30 in the face image glasses removal method of the present application.
  • the step S30 includes the following refinement steps:
  • Step S31: converting the frontal face image into a grayscale image, and performing binarization processing on the grayscale image to obtain a binarized image;
  • Step S32: performing edge detection on the grayscale image to obtain an edge image, and performing an area filling operation on the edge image to obtain an edge-filled image;
  • Step S33: projecting the binarized image onto the edge-filled image to obtain the overlapping regions of the binarized image and the edge-filled image; and
  • Step S34: determining the glasses area in the frontal face image according to a preset glasses area determination rule.
  • Image binarization is a necessary image preprocessing step prior to image analysis, feature extraction and pattern recognition; its goal is to make the portions of interest in the image stand out.
  • The normalized frontal face image A obtained by image preprocessing is converted to grayscale to obtain a grayscale image B, and the grayscale image B is then binarized; for example, with 128 as the preset grayscale threshold, all pixels with a gray value greater than or equal to 128 are set to 255 (pure white) and all pixels below 128 are set to 0 (pure black), yielding a binarized image C in which the whole image exhibits an obvious black-and-white effect.
  • edge detection is performed on the grayscale image B to obtain an edge image D.
  • A so-called edge is a collection of pixels whose gray level changes sharply relative to their surroundings; it is the most basic feature of an image. Edges exist between the target, the background and different regions, and are therefore the most important basis for image segmentation. Since an edge marks a position and is not sensitive to changes in gray scale, it is also an important feature for image matching.
  • the edge detection may be implemented by a Sobel operator, a Laplace operator, a Canny operator, or the like.
  • the edge image D obtained by the edge detection is subjected to region filling to obtain an edge-filled image E.
  • the specific filling algorithm may be a hole filling algorithm, etc., and details are not described herein.
  • The binarized image C is projected onto the edge-filled image E to obtain their overlapping regions, which form several closed areas that may include the mouth, nose, eyes, eyebrows and the like of the face. Merely determining the overlapping regions in the frontal face image A therefore cannot establish whether the image includes a glasses area, so the overlapping regions must be judged according to a preset determination rule to determine the glasses area in the frontal face image A.
  • The preset glasses area determination rule includes:
  • a rectangular approximation operation is performed on the to-be-determined glasses area to obtain a minimum rectangle including the to-be-determined glasses area as the glasses area of the frontal face image.
  • the overlapping area may include the position of the mouth, nose, eyes, eyebrows, etc. of the face
  • Whether the frontal face image A includes the glasses area is determined according to the specific position of each overlapping region in the image. Because the frontal face image A has been normalized, each overlapping region can be classified as lying in the upper half or the lower half of the image according to its position in the vertical direction of the frontal face image A; the overlapping regions located in the upper half are then retained as the to-be-determined glasses area.
  • Calculate the area of each overlapping region in the to-be-determined glasses area, and compare the area of each overlapping region with the preset threshold S.
  • The area of an overlapping region that contains the glasses is inevitably larger than that of an overlapping region containing only the eyebrows, the eyes or the like. Therefore, the overlapping regions whose area is larger than the preset threshold S are retained, and a rectangular approximation operation is performed on each retained overlapping region.
  • Then a non-maximum suppression (NMS) operation is performed to remove the smaller rectangles and retain only the largest rectangle; the largest rectangle finally retained is the glasses area in the frontal face image A to be determined by this scheme.
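  • The selection rule above (keep upper-half overlapping regions, discard those below the area threshold S, then keep only the largest bounding rectangle) can be sketched as follows; representing each region as an (x, y, w, h, area) tuple is an assumption for illustration:

```python
def glasses_rect(regions, img_height, area_thresh):
    """regions: iterable of (x, y, w, h, area) tuples, one per overlapping
    region, where (x, y, w, h) is its minimum bounding rectangle.
    Returns the rectangle chosen as the glasses area, or None."""
    candidates = [
        r for r in regions
        if r[1] + r[3] / 2 < img_height / 2   # region center in the upper half
        and r[4] >= area_thresh               # area above the preset threshold S
    ]
    if not candidates:
        return None                           # image judged to have no glasses
    # Crude stand-in for NMS: keep only the largest bounding rectangle.
    x, y, w, h, _ = max(candidates, key=lambda r: r[2] * r[3])
    return (x, y, w, h)
```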
  • If every overlapping region is located in the lower half of the frontal face image A, or if the area of every overlapping region in the to-be-determined glasses area is less than the preset threshold, the overlapping regions are considered not to be the glasses area, that is, the frontal face image A does not include a glasses area, and the next real-time image is acquired.
  • Referring to FIG. 5, it is a detailed flowchart of step S40 in the face image glasses removal method of the present application.
  • the step S40 includes the following refinement steps:
  • determining the glasses area in the frontal face image, and determining the glasses frame within the glasses area; and
  • taking each pixel point of the glasses frame as a center pixel point, computing new pixel point information for the center pixel point from the pixel point information of its surrounding pixel points and replacing the original pixel point information of the center pixel point, to obtain a frontal face image with the glasses removed.
  • A simple image repair algorithm may also be adopted, which can quickly and accurately remove the glasses in the face image while retaining the detailed feature information of the human eye, improving the accuracy of face recognition.
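  • A simple repair of this kind can be sketched as an iterative neighbourhood-mean fill over a glasses-frame mask. This is a simplified stand-in for the pixel-replacement described above, not the application's actual algorithm; cv2.inpaint (Telea or Navier-Stokes) would be a typical off-the-shelf alternative:

```python
import numpy as np

def fill_mask(gray: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Replace each masked (glasses-frame) pixel with the mean of its
    unmasked 8-neighbours, repeating until every reachable masked pixel is
    filled. Pixels filled earlier in a pass feed later ones, so the fill
    grows inward from the mask boundary."""
    out = gray.astype(float).copy()
    todo = mask.astype(bool).copy()
    h, w = gray.shape
    while todo.any():
        progressed = False
        for i in range(h):
            for j in range(w):
                if not todo[i, j]:
                    continue
                vals = [out[ni, nj]
                        for ni in range(max(0, i - 1), min(h, i + 2))
                        for nj in range(max(0, j - 1), min(w, j + 2))
                        if not todo[ni, nj]]
                if vals:
                    out[i, j] = sum(vals) / len(vals)
                    todo[i, j] = False
                    progressed = True
        if not progressed:
            break  # fully masked image; nothing to copy from
    return out.astype(np.uint8)
```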
  • The preset gray value threshold and the preset area threshold described in the foregoing embodiments are parameters that need to be set in advance; the user can set them according to the actual situation.
  • The face image glasses removal method proposed in this embodiment removes the glasses in the face image while retaining the detailed features of most of the eye region, so that subsequent face recognition accuracy is high.
  • Moreover, even if a face image is misidentified as wearing glasses, the method of this embodiment can remove the person's dark circles or eye bags, so that such face images can still be accurately recognized.
  • the embodiment of the present application further provides a computer readable storage medium, where the computer readable storage medium includes a face image glasses removal program, and when the face image glasses removal program is executed by the processor, the following operations are implemented:
  • a real-time image acquisition step: acquiring a real-time image captured by the camera device, and extracting a face image from the real-time image by using a face recognition algorithm;
  • an image preprocessing step: normalizing the face image, and performing face pose correction using an affine transformation to obtain a frontal face image;
  • a glasses area determining step: determining, by performing binarization processing and edge detection on the frontal face image, whether the frontal face image includes a glasses area; and
  • a glasses removing step: determining the glasses area in the frontal face image, searching for pixel points around the glasses area in the frontal face image to fill the glasses area, and obtaining a face image with the glasses removed.
  • a storage medium such as a disk, including a number of instructions for causing a terminal device (which may be a mobile phone, a computer, a server, or a network device, etc.) to perform the methods described in the various embodiments of the present application.


Abstract

The present application proposes a method for removing glasses from a face image, the method including: acquiring a real-time image captured by a camera device, and extracting a face image from the real-time image; normalizing the face image, and performing face pose correction using an affine transformation to obtain a frontal face image; judging whether the frontal face image contains a glasses area by performing binarization and edge detection on the frontal face image; and determining the glasses area in the frontal face image, and searching for pixels around the glasses area in the frontal face image to fill the glasses area, to obtain a face image with the glasses removed. The present application also proposes an electronic apparatus and a computer-readable storage medium. While removing the glasses from the face image and generating a glasses-free face image, the present application retains the original facial features of the face image, improving the recognition rate of face recognition.

Description

Method, Apparatus and Storage Medium for Removing Glasses from a Face Image
Priority Claim
Under the Paris Convention, this application claims priority to the Chinese patent application No. CN201710885235.7, filed on September 26, 2017 and entitled "Method, Apparatus and Storage Medium for Removing Glasses from a Face Image" (人脸图像眼镜去除方法、装置及存储介质), the entire content of which is incorporated herein by reference.
Technical Field
The present application relates to the field of computer vision processing, and in particular to a method, an apparatus and a computer-readable storage medium for removing glasses from a face image.
Background
In the field of face recognition, many people wear glasses, especially thick-framed glasses. As a result, face images with thick-framed glasses are highly similar to one another, and accurate face recognition becomes impossible.
The glasses-removal scheme currently adopted in the industry uses two-dimensional generalized principal component analysis. This method trains a feature space on glasses-free face images, reconstructs the face wearing glasses, extracts the area occluded by the glasses by comparing the reconstruction with the original input face image, and then compensates the image through error iteration to synthesize the final glasses-free face image. This method works well for input images close to the training images, but it requires a certain amount of time and a certain number of images for training; moreover, for input images that differ greatly from the training images, although the glasses are eliminated, the facial features are seriously damaged, so that accurate face recognition is still impossible.
Summary of the Invention
In view of this, the present application provides a method, an apparatus and a computer-readable storage medium for removing glasses from a face image, whose main purpose is to remove the glasses from a face image and generate a glasses-free face image without damaging the original facial features of the face image, thereby improving the recognition rate of face recognition.
To achieve the above purpose, the present application provides an electronic apparatus, including a memory, a processor and a camera device, the memory including a face-image glasses removal program which, when executed by the processor, implements the following steps:
a real-time image acquisition step: acquiring a real-time image captured by the camera device, and extracting a face image from the real-time image using a face recognition algorithm;
an image preprocessing step: normalizing the face image, and performing face pose correction using an affine transformation to obtain a frontal face image;
a glasses area judging step: judging whether the frontal face image contains a glasses area by performing binarization and edge detection on the frontal face image; and
a glasses removal step: determining the glasses area in the frontal face image, and searching for pixels around the glasses area in the frontal face image to fill the glasses area, to obtain a face image with the glasses removed.
Preferably, the glasses removal step includes:
determining the glasses area in the frontal face image, and determining the glasses frame within the glasses area; and
taking each pixel of the glasses frame as a center pixel, calculating new pixel information for the center pixel according to the pixel information of the pixels surrounding it, and replacing the original pixel information of the center pixel, to obtain a frontal face image with the glasses removed.
Preferably, the glasses area judging step includes:
a binarization step: converting the frontal face image into a grayscale image, and binarizing the grayscale image to obtain a binarized image;
an edge detection step: performing edge detection on the grayscale image to obtain an edge image, and performing a region filling operation on the edge image to obtain an edge-filled image;
a projection step: projecting the binarized image onto the edge-filled image to obtain the overlap region of the binarized image and the edge-filled image; and
a glasses area determining step: determining the glasses area in the frontal face image according to preset glasses area judging rules.
Preferably, the glasses area determining step includes:
intercepting, from the overlap region, the portion located in the upper half of the frontal face image as a candidate glasses area; and
if the area of the candidate glasses area is greater than a preset threshold, performing a rectangle approximation operation on the candidate glasses area to obtain the smallest rectangle containing the candidate glasses area, as the glasses area of the frontal face image.
In addition, to achieve the above purpose, the present application also provides a method for removing glasses from a face image, the method including:
a real-time image acquisition step: acquiring a real-time image captured by a camera device, and extracting a face image from the real-time image using a face recognition algorithm;
an image preprocessing step: normalizing the face image, and performing face pose correction using an affine transformation to obtain a frontal face image;
a glasses area judging step: judging whether the frontal face image contains a glasses area by performing binarization and edge detection on the frontal face image; and
a glasses removal step: determining the glasses area in the frontal face image, and searching for pixels around the glasses area in the frontal face image to fill the glasses area, to obtain a face image with the glasses removed.
Preferably, the glasses removal step includes:
determining the glasses area in the frontal face image, and determining the glasses frame within the glasses area; and
taking each pixel of the glasses frame as a center pixel, calculating new pixel information for the center pixel according to the pixel information of the pixels surrounding it, and replacing the original pixel information of the center pixel, to obtain a frontal face image with the glasses removed.
Preferably, the glasses area judging step includes:
a binarization step: converting the frontal face image into a grayscale image, and binarizing the grayscale image to obtain a binarized image;
an edge detection step: performing edge detection on the grayscale image to obtain an edge image, and performing a region filling operation on the edge image to obtain an edge-filled image;
a projection step: projecting the binarized image onto the edge-filled image to obtain the overlap region of the binarized image and the edge-filled image; and
a glasses area determining step: determining the glasses area in the frontal face image according to preset glasses area judging rules.
Preferably, the glasses area determining step includes:
intercepting, from the overlap region, the portion located in the upper half of the frontal face image as a candidate glasses area; and
if the area of the candidate glasses area is greater than a preset threshold, performing a rectangle approximation operation on the candidate glasses area to obtain the smallest rectangle containing the candidate glasses area, as the glasses area of the frontal face image.
In addition, to achieve the above purpose, the present application also provides a computer-readable storage medium, which includes a face-image glasses removal program; when the program is executed by a processor, any step of the method for removing glasses from a face image described above is implemented.
Compared with the prior art, the method, electronic apparatus and computer-readable storage medium proposed in the present application first perform binarization and edge detection on the face image to obtain two images and determine their overlap region; then determine the glasses area according to the position and area of the overlap region; and finally search the face image for the pixel information of the pixels around the glasses area and replace the pixel information of the glasses area with it, thereby obtaining a face image with the glasses removed. In this way, model training time is saved, and the glasses are effectively removed from the face image without damaging the original facial features of the face image.
Brief Description of the Drawings
FIG. 1 is a schematic diagram of the application environment of a preferred embodiment of the face-image glasses removal method of the present application;
FIG. 2 is a block diagram of the face-image glasses removal program in FIG. 1;
FIG. 3 is a flowchart of a preferred embodiment of the face-image glasses removal method of the present application;
FIG. 4 is a refined flowchart of step S30 of the face-image glasses removal method of the present application;
FIG. 5 is a refined flowchart of step S40 of the face-image glasses removal method of the present application.
The realization of the objectives, functional features and advantages of the present application will be further described with reference to the accompanying drawings in conjunction with the embodiments.
Detailed Description
It should be understood that the specific embodiments described here are only used to explain the present application and are not intended to limit it. Based on the embodiments in the present application, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of the present application.
The present application provides a method for removing glasses from a face image, applied to an electronic apparatus 1. Referring to FIG. 1, which is a schematic diagram of the application environment of a preferred embodiment of the face-image glasses removal method of the present application.
In this embodiment, the electronic apparatus 1 may be a terminal device with computing capability on which the face-image glasses removal program is installed, such as a rack server, a blade server, a tower server, a cabinet server, a smartphone, a tablet computer, a portable computer or a desktop computer.
The electronic apparatus 1 includes a memory 11, a processor 12, a camera device 13, a network interface 14 and a communication bus 15.
The memory 11 includes at least one type of readable storage medium, which may be a non-volatile storage medium such as a flash memory, a hard disk, a multimedia card, a card-type memory (for example, SD or DX memory), a magnetic memory, a magnetic disk or an optical disc. In some embodiments, the memory 11 may be an internal storage unit of the electronic apparatus 1, such as the hard disk of the electronic apparatus 1. In other embodiments, the memory 11 may also be an external storage device of the electronic apparatus 1, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card or a Flash Card provided on the electronic apparatus 1.
In this embodiment, the readable storage medium of the memory 11 is generally used to store the face-image glasses removal program 10 installed on the electronic apparatus 1 and various kinds of data. The memory 11 can also be used to temporarily store data that has been output or is to be output.
The processor 12 may in some embodiments be a central processing unit (CPU), a microprocessor or another data processing chip, used to run the program code stored in the memory 11 or process data, for example to execute the face-image glasses removal program 10.
The camera device 13 may either be part of the electronic apparatus 1 or be independent of it. In some embodiments, the electronic apparatus 1 is a terminal device with a camera, such as a smartphone, a tablet computer or a portable computer, and the camera device 13 is the camera of the electronic apparatus 1. In other embodiments, the electronic apparatus 1 may be a server, and the camera device 13 is independent of the electronic apparatus 1 and connected to it via a network; for example, the camera device 13 is installed in a specific place, such as an office or a monitored area, captures real-time images of targets entering that place, and transmits the captured real-time images to the processor 12 via the network.
The network interface 14 may optionally include a standard wired interface or a wireless interface (such as a Wi-Fi interface), and is generally used to establish a communication connection between the electronic apparatus 1 and other electronic devices.
The communication bus 15 is used to realize connection and communication between these components.
FIG. 1 shows only the electronic apparatus 1 with components 11-15, but it should be understood that not all illustrated components are required; more or fewer components may be implemented instead.
Optionally, the electronic apparatus 1 may also include a user interface, which may include an input unit such as a keyboard, and optionally a standard wired interface or a wireless interface.
Optionally, the electronic apparatus 1 may also include a display, which may also be appropriately called a display screen or display unit. In some embodiments it may be an LED display, a liquid crystal display, a touch liquid crystal display, an organic light-emitting diode (OLED) touch display, and so on. The display is used to display the information processed in the electronic apparatus 1 and to display a visual user interface.
In the apparatus embodiment shown in FIG. 1, the memory 11 stores the face-image glasses removal program 10. When the processor 12 executes the face-image glasses removal program 10 stored in the memory 11, the following steps are implemented:
a real-time image acquisition step: acquiring a real-time image captured by the camera device 13, and extracting a face image from the real-time image using a face recognition algorithm;
an image preprocessing step: normalizing the face image, and performing face pose correction using an affine transformation to obtain a frontal face image;
a glasses area judging step: judging whether the frontal face image contains a glasses area by performing binarization and edge detection on the frontal face image; and
a glasses removal step: determining the glasses area in the frontal face image, and searching for pixels around the glasses area in the frontal face image to fill the glasses area, to obtain a face image with the glasses removed.
When the camera device 13 captures a real-time image, it sends the image to the processor; after receiving the real-time image, the processor extracts the real-time face image from it using a face recognition algorithm. Specifically, the face recognition algorithm used to extract the face image from the real-time image may be a geometric-feature-based method, a local feature analysis method, an eigenface method, an elastic-model-based method, a neural network method, and so on.
Because image acquisition environments differ, for example in lighting conditions and device performance, the captured images often suffer from noise and low contrast. In addition, shooting distance and focal length make the size and position of the face within the whole image uncertain. To ensure consistency in face size, face position and image quality, the face image must undergo face alignment, image enhancement and normalization. The main purpose of these operations is to eliminate irrelevant information from the image, filter out interference and noise, recover useful true information, enhance the detectability of relevant information and simplify the data as much as possible, thereby improving the reliability of feature analysis. Face alignment aims to obtain a frontal face image in which the face is upright; a common approach is to correct the pose of the face in the image using an affine transformation, which is a well-established computation and is not described further here. Image enhancement improves the quality of the face image, making it not only visually clearer but also easier for a computer to process and recognize. Normalization aims to obtain standardized frontal face images of uniform size and the same gray-value range.
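The affine pose correction described above is usually driven by detected landmark positions. As a minimal sketch (the eye coordinates and target positions below are invented for illustration, not taken from the application), a similarity transform — rotation plus uniform scale plus translation, a special case of the affine transform — can be built from two detected eye centers so that they map onto fixed target positions:

```python
import numpy as np

def eye_alignment_matrix(left_eye, right_eye,
                         target_left=(30, 45), target_right=(70, 45)):
    """Build a 2x3 similarity-transform matrix (rotation + uniform scale +
    translation) that maps the detected eye centres onto fixed target
    positions.  All points are (x, y); the target positions here are
    illustrative placeholders, not values from the application."""
    lx, ly = left_eye
    rx, ry = right_eye
    tlx, tly = target_left
    trx, try_ = target_right
    # Rotation needed: from the detected eye line to the target eye line.
    angle = np.arctan2(try_ - tly, trx - tlx) - np.arctan2(ry - ly, rx - lx)
    scale = np.hypot(trx - tlx, try_ - tly) / np.hypot(rx - lx, ry - ly)
    c, s = scale * np.cos(angle), scale * np.sin(angle)
    # Translate so the rotated/scaled left eye lands on its target.
    tx = tlx - (c * lx - s * ly)
    ty = tly - (s * lx + c * ly)
    return np.array([[c, -s, tx], [s, c, ty]])

# A face whose eyes are level and 40 px apart maps exactly onto the targets.
M = eye_alignment_matrix((10, 20), (50, 20))
mapped_left = M @ np.array([10, 20, 1.0])   # -> (30, 45)
```

In practice the matrix would be handed to an image-warping routine; the sketch only shows how the transform itself is obtained.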
Specifically, the glasses area judging step includes the following refined steps:
a binarization step: converting the frontal face image into a grayscale image, and binarizing the grayscale image to obtain a binarized image;
an edge detection step: performing edge detection on the grayscale image to obtain an edge image, and performing a region filling operation on the edge image to obtain an edge-filled image;
a projection step: projecting the binarized image onto the edge-filled image to obtain the overlap region of the binarized image and the edge-filled image; and
a glasses area determining step: determining the glasses area in the frontal face image according to preset glasses area judging rules.
Before image analysis, feature extraction and pattern recognition, image binarization is a necessary preprocessing step whose purpose is to retain, to the greatest extent possible, the parts of the image that are of interest. First, the standardized frontal face image A obtained by the preprocessing described above is converted to grayscale, yielding grayscale image B, which is then binarized. For example, with 128 as the preset gray-value threshold, all pixels with a gray value of 128 or above are set to 255 (pure white) and all pixels below 128 are set to 0 (pure black), yielding binarized image C, in which the whole image shows a clear black-and-white effect.
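The binarization rule just described — 128 as the preset gray-value threshold, pixels at or above it set to 255 and the rest to 0 — can be sketched in a few lines (the tiny 2x2 image is only a demonstration input):

```python
import numpy as np

def binarize(gray, threshold=128):
    """Binarize a grayscale image: pixels >= threshold become 255 (pure
    white), all others become 0 (pure black), matching the preset 128
    gray-value threshold used in the text."""
    gray = np.asarray(gray)
    return np.where(gray >= threshold, 255, 0).astype(np.uint8)

gray = np.array([[10, 127], [128, 250]], dtype=np.uint8)
binary = binarize(gray)   # [[0, 0], [255, 255]]
```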
Next, edge detection is performed on grayscale image B to obtain edge image D. An edge is the set of pixels whose surrounding pixel gray values change sharply; it is the most basic feature of an image. Edges exist between objects, background and regions, so they are the most important basis for image segmentation. Because an edge marks position and is insensitive to gray-value changes, it is also an important feature for image matching. Specifically, the edge detection may be implemented with the Sobel operator, the Laplace operator, the Canny operator, and so on. The edge image D obtained after edge detection is then region-filled to obtain edge-filled image E; the specific filling algorithm may be, for example, a hole-filling algorithm, which is not described further here.
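As a hedged illustration of one of the operators named above, a toy Sobel edge detector can be written directly from the kernel definitions; the magnitude threshold of 100 is an invented example value, and the production choice of operator and threshold is left open by the text:

```python
import numpy as np

def sobel_edges(gray, threshold=100):
    """Toy Sobel edge detector: convolve with the horizontal and vertical
    Sobel kernels and mark pixels whose gradient magnitude exceeds the
    threshold.  Border pixels are left unmarked for simplicity."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])
    ky = kx.T
    g = np.asarray(gray, dtype=float)
    h, w = g.shape
    edges = np.zeros((h, w), dtype=np.uint8)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            patch = g[y - 1:y + 2, x - 1:x + 2]
            gx = np.sum(kx * patch)   # horizontal gradient
            gy = np.sum(ky * patch)   # vertical gradient
            if np.hypot(gx, gy) >= threshold:
                edges[y, x] = 255
    return edges

# A vertical black/white boundary produces a column of edge pixels.
img = np.zeros((5, 6))
img[:, 3:] = 255
edges = sobel_edges(img)
```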
Projecting binarized image C onto edge-filled image E yields the overlap region where the two images coincide. The overlap region consists of multiple closed areas that may include the mouth, nose, eyes, eyebrows and so on of the face. Locating the overlap region in frontal face image A is not yet enough to determine whether image A contains a glasses area, so the overlap region must be judged according to preset rules in order to determine the glasses area in frontal face image A.
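The "projection" of one mask onto the other amounts to a pixel-wise intersection. A minimal sketch, assuming both masks use 255 as foreground (in practice the dark 0-valued pixels of the binarized image mark the frame, so that mask may need inverting first):

```python
import numpy as np

def overlap_region(binary_img, edge_filled_img):
    """Pixel-wise intersection of two masks: a pixel belongs to the overlap
    region only if it is foreground (255) in both images."""
    a = np.asarray(binary_img) == 255
    b = np.asarray(edge_filled_img) == 255
    return np.where(a & b, 255, 0).astype(np.uint8)

c = np.array([[255, 255], [0, 255]], dtype=np.uint8)
e = np.array([[255, 0], [0, 255]], dtype=np.uint8)
m = overlap_region(c, e)   # [[255, 0], [0, 255]]
```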
Specifically, the glasses area determining step includes the following refined steps:
intercepting, from the overlap region, the portion located in the upper half of the frontal face image as a candidate glasses area; and
if the area of the candidate glasses area is greater than a preset threshold, performing a rectangle approximation operation on the candidate glasses area to obtain the smallest rectangle containing the candidate glasses area, as the glasses area of the frontal face image.
Since the overlap region may include the mouth, nose, eyes, eyebrows and so on of the face, it is first necessary to make a preliminary judgment, based on the specific position of each overlap area within the image, as to whether frontal face image A contains a glasses area. Because frontal face image A has been normalized, each overlap area can be judged to lie in the upper or the lower half of image A according to its vertical position, and the overlap areas located in the upper half of image A are retained as candidate glasses areas.
Next, to eliminate overlap areas containing eyebrows, eyes, dark circles and the like, the area of each overlap area among the candidates is calculated and compared with the preset threshold S. Understandably, an overlap area containing glasses is necessarily larger than one containing only eyebrows or eyes, so the overlap areas larger than the preset threshold S are retained and a rectangle approximation operation is performed on them. A non-maximum suppression (NMS) algorithm is then applied to the approximated rectangles to remove the small rectangles and keep only the largest one; the largest rectangle finally retained is the glasses area of frontal face image A to be determined by this scheme.
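The three region rules above — upper half only, area above the preset threshold S, then the bounding rectangle of the largest survivor — can be sketched as follows. The 400-pixel area threshold is an illustrative stand-in for S, and regions are represented simply as lists of (y, x) pixel coordinates:

```python
def glasses_region(regions, img_height, area_threshold=400):
    """Apply the candidate-region rules from the text:
      1. keep only regions whose centroid lies in the upper half of the image;
      2. keep only regions whose pixel-count area exceeds the threshold;
      3. take each survivor's axis-aligned bounding rectangle and keep the
         largest one.
    Returns (top, left, bottom, right) or None when no region qualifies."""
    best = None
    for pts in regions:
        ys = [y for y, _ in pts]
        xs = [x for _, x in pts]
        # Rule 1: region must lie in the upper half of the face image.
        if sum(ys) / len(ys) >= img_height / 2:
            continue
        # Rule 2: area must exceed the preset threshold S.
        if len(pts) <= area_threshold:
            continue
        rect = (min(ys), min(xs), max(ys), max(xs))
        rect_area = (rect[2] - rect[0] + 1) * (rect[3] - rect[1] + 1)
        if best is None or rect_area > best[0]:
            best = (rect_area, rect)
    return None if best is None else best[1]

# A dense 30x30 block in the upper half wins; a tiny blob is rejected.
big = [(y, x) for y in range(10, 40) for x in range(20, 50)]
small = [(y, x) for y in range(10, 15) for x in range(60, 65)]
rect = glasses_region([big, small], img_height=100)   # (10, 20, 39, 49)
```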
It should be noted that if all the overlap areas are located in the lower half of frontal face image A, or if the areas of all the overlap areas among the candidates are smaller than the preset threshold, the overlap region is not considered a glasses area; that is, frontal face image A contains no glasses area, and the next real-time image is acquired.
Specifically, the glasses removal step includes the following refined steps:
determining the glasses area in the frontal face image, and determining the glasses frame within the glasses area; and
taking each pixel of the glasses frame as a center pixel, calculating new pixel information for the center pixel according to the pixel information of the pixels surrounding it, and replacing the original pixel information of the center pixel, to obtain a frontal face image with the glasses removed.
The glasses area obtained through the above steps is located in frontal face image A, the overlap area representing the glasses frame is selected from it, and the glasses frame in frontal face image A is filled by searching for the pixel information (i.e., skin color) of the pixels around the frame, yielding a frontal face image with the glasses removed.
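One way to realize the fill just described is to replace each frame pixel with the mean of the surrounding non-frame (skin) pixels inside a small window. The 2-pixel radius is an illustrative choice, not a value from the application; real frames may need a wider window or several passes:

```python
import numpy as np

def fill_frame_pixels(img, frame_mask, radius=2):
    """For every pixel flagged as glasses frame, recompute its value as the
    mean of the surrounding non-frame pixels within a (2*radius+1) window,
    writing the result into a copy of the image."""
    img = np.asarray(img, dtype=float)
    mask = np.asarray(frame_mask, dtype=bool)
    out = img.copy()
    h, w = img.shape
    for y, x in zip(*np.nonzero(mask)):
        y0, y1 = max(0, y - radius), min(h, y + radius + 1)
        x0, x1 = max(0, x - radius), min(w, x + radius + 1)
        window = img[y0:y1, x0:x1]
        keep = ~mask[y0:y1, x0:x1]        # surrounding skin pixels
        if keep.any():
            out[y, x] = window[keep].mean()
    return out

# A single dark frame pixel in a field of skin tone takes the skin value.
skin = np.full((5, 5), 200.0)
skin[2, 2] = 0.0
mask = np.zeros((5, 5), dtype=bool)
mask[2, 2] = True
filled = fill_frame_pixels(skin, mask)
```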
In other embodiments, a simple image inpainting algorithm may also be adopted, which can quickly and accurately remove the glasses from the face image while retaining the detailed feature information of the human eye, improving the accuracy of face recognition.
Understandably, the parameters that need to be preset in the above embodiments, such as the preset gray-value threshold and the preset area threshold, can be set by the user according to the actual situation.
The electronic apparatus proposed in this embodiment effectively removes the glasses from the face image while retaining most of the detail features of the eye region, so that subsequent face recognition achieves high accuracy.
In other embodiments, the face-image glasses removal program 10 may also be divided into one or more modules stored in the memory 11 and executed by the processor 12 to accomplish the present application. A module in this application refers to a series of computer program instruction segments capable of performing a specific function. For example, refer to FIG. 2, which is a block diagram of the face-image glasses removal program 10 in FIG. 1.
The face-image glasses removal program 10 may be divided into an acquisition module 110, an image processing module 120, a judging module 130 and a removal module 140. The functions or operation steps implemented by the modules 110-140 are similar to those described above and are not detailed here; illustratively:
the acquisition module 110 is used to acquire a real-time image captured by the camera device, and to extract a face image from the real-time image using a face recognition algorithm;
the image processing module 120 is used to normalize the face image, and to perform face pose correction using an affine transformation to obtain a frontal face image;
the judging module 130 is used to judge whether the frontal face image contains a glasses area by performing binarization and edge detection on the frontal face image; and
the removal module 140 is used to determine the glasses area in the frontal face image, and to search for pixels around the glasses area in the frontal face image to fill the glasses area, to obtain a face image with the glasses removed.
In addition, the present application also provides a method for removing glasses from a face image. Referring to FIG. 3, which is a flowchart of a preferred embodiment of the face-image glasses removal method of the present application. The method may be executed by an apparatus, and the apparatus may be implemented by software and/or hardware.
In this embodiment, the face-image glasses removal method includes:
step S10, acquiring a real-time image captured by a camera device, and extracting a face image from the real-time image using a face recognition algorithm;
step S20, normalizing the face image, and performing face pose correction using an affine transformation to obtain a frontal face image;
step S30, judging whether the frontal face image contains a glasses area by performing binarization and edge detection on the frontal face image; and
step S40, determining the glasses area in the frontal face image, and searching for pixels around the glasses area in the frontal face image to fill the glasses area, to obtain a face image with the glasses removed.
When the camera device captures a real-time image, it sends the image to the processor; after receiving the real-time image, the processor extracts the real-time face image from it using a face recognition algorithm. Specifically, the face recognition algorithm used to extract the face image from the real-time image may be a geometric-feature-based method, a local feature analysis method, an eigenface method, an elastic-model-based method, a neural network method, and so on.
Because image acquisition environments differ, for example in lighting conditions and device performance, the captured images often suffer from noise and low contrast. In addition, shooting distance and focal length make the size and position of the face within the whole image uncertain. To ensure consistency in face size, face position and image quality, the face image must undergo face alignment, image enhancement and normalization. The main purpose of these operations is to eliminate irrelevant information from the image, filter out interference and noise, recover useful true information, enhance the detectability of relevant information and simplify the data as much as possible, thereby improving the reliability of feature analysis. Face alignment aims to obtain a frontal face image in which the face is upright; a common approach is to correct the pose of the face in the image using an affine transformation, which is a well-established computation and is not described further here. Image enhancement improves the quality of the face image, making it not only visually clearer but also easier for a computer to process and recognize. Normalization aims to obtain standardized frontal face images of uniform size and the same gray-value range.
Specifically, referring to FIG. 4, which is a refined flowchart of step S30 of the face-image glasses removal method of the present application. Step S30 includes the following refined steps:
step S31, converting the frontal face image into a grayscale image, and binarizing the grayscale image to obtain a binarized image;
step S32, performing edge detection on the grayscale image to obtain an edge image, and performing a region filling operation on the edge image to obtain an edge-filled image;
step S33, projecting the binarized image onto the edge-filled image to obtain the overlap region of the binarized image and the edge-filled image; and
step S34, determining the glasses area in the frontal face image according to preset glasses area judging rules.
Before image analysis, feature extraction and pattern recognition, image binarization is a necessary preprocessing step whose purpose is to retain, to the greatest extent possible, the parts of the image that are of interest. First, the standardized frontal face image A obtained by the preprocessing described above is converted to grayscale, yielding grayscale image B, which is then binarized. For example, with 128 as the preset gray-value threshold, all pixels with a gray value of 128 or above are set to 255 (pure white) and all pixels below 128 are set to 0 (pure black), yielding binarized image C, in which the whole image shows a clear black-and-white effect.
Next, edge detection is performed on grayscale image B to obtain edge image D. An edge is the set of pixels whose surrounding pixel gray values change sharply; it is the most basic feature of an image. Edges exist between objects, background and regions, so they are the most important basis for image segmentation. Because an edge marks position and is insensitive to gray-value changes, it is also an important feature for image matching. Specifically, the edge detection may be implemented with the Sobel operator, the Laplace operator, the Canny operator, and so on. The edge image D obtained after edge detection is then region-filled to obtain edge-filled image E; the specific filling algorithm may be, for example, a hole-filling algorithm, which is not described further here.
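The hole-filling step mentioned above can be sketched with a border flood fill: any background pixel not reachable from the image border is an interior hole (for example, the inside of a lens outlined by edge pixels) and becomes foreground. This is a toy version on nested lists, with 1 = foreground and 0 = background:

```python
def fill_holes(mask):
    """Toy hole-filling: flood-fill the background from the image border;
    any 0 pixel that the flood fill does not reach is an interior hole and
    is turned into foreground."""
    h, w = len(mask), len(mask[0])
    outside = [[False] * w for _ in range(h)]
    # Seed the flood fill with every background pixel on the border.
    stack = [(y, x) for y in range(h) for x in (0, w - 1) if mask[y][x] == 0]
    stack += [(y, x) for y in (0, h - 1) for x in range(w) if mask[y][x] == 0]
    while stack:
        y, x = stack.pop()
        if 0 <= y < h and 0 <= x < w and mask[y][x] == 0 and not outside[y][x]:
            outside[y][x] = True
            stack.extend([(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)])
    return [[1 if mask[y][x] == 1 or not outside[y][x] else 0
             for x in range(w)] for y in range(h)]

# A ring of edge pixels becomes a solid block once its hole is filled.
ring = [[0, 0, 0, 0, 0],
        [0, 1, 1, 1, 0],
        [0, 1, 0, 1, 0],
        [0, 1, 1, 1, 0],
        [0, 0, 0, 0, 0]]
filled = fill_holes(ring)
```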
Projecting binarized image C onto edge-filled image E yields the overlap region where the two images coincide. The overlap region consists of multiple closed areas that may include the mouth, nose, eyes, eyebrows and so on of the face. Locating the overlap region in frontal face image A is not yet enough to determine whether image A contains a glasses area, so the overlap region must be judged according to preset rules in order to determine the glasses area in frontal face image A.
Specifically, the preset glasses area judging rules include:
intercepting, from the overlap region, the portion located in the upper half of the frontal face image as a candidate glasses area; and
if the area of the candidate glasses area is greater than a preset threshold, performing a rectangle approximation operation on the candidate glasses area to obtain the smallest rectangle containing the candidate glasses area, as the glasses area of the frontal face image.
Since the overlap region may include the mouth, nose, eyes, eyebrows and so on of the face, it is first necessary to make a preliminary judgment, based on the specific position of each overlap area within the image, as to whether frontal face image A contains a glasses area. Because frontal face image A has been normalized, each overlap area can be judged to lie in the upper or the lower half of image A according to its vertical position, and the overlap areas located in the upper half of image A are retained as candidate glasses areas.
Next, to eliminate overlap areas containing eyebrows, eyes, dark circles and the like, the area of each overlap area among the candidates is calculated and compared with the preset threshold S. Understandably, an overlap area containing glasses is necessarily larger than one containing only eyebrows or eyes, so the overlap areas larger than the preset threshold S are retained and a rectangle approximation operation is performed on them. A non-maximum suppression (NMS) algorithm is then applied to the approximated rectangles to remove the small rectangles and keep only the largest one; the largest rectangle finally retained is the glasses area of frontal face image A to be determined by this scheme.
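The NMS pass over the approximated rectangles can be sketched as greedy suppression by intersection-over-union (IoU). The 0.3 IoU threshold is an invented example value; in this scheme only the single largest survivor is ultimately used as the glasses area:

```python
def nms(rects, iou_threshold=0.3):
    """Greedy non-maximum suppression over (top, left, bottom, right)
    rectangles: repeatedly keep the largest remaining rectangle and discard
    any rectangle whose overlap with a kept one exceeds the IoU threshold."""
    def area(r):
        return (r[2] - r[0] + 1) * (r[3] - r[1] + 1)

    def iou(a, b):
        t, l = max(a[0], b[0]), max(a[1], b[1])
        btm, r = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0, btm - t + 1) * max(0, r - l + 1)
        return inter / (area(a) + area(b) - inter)

    kept = []
    for r in sorted(rects, key=area, reverse=True):
        if all(iou(r, k) <= iou_threshold for k in kept):
            kept.append(r)
    return kept

# Two heavily overlapping candidates collapse to the larger one.
boxes = [(10, 20, 40, 80), (12, 22, 38, 78), (50, 10, 60, 20)]
kept = nms(boxes)   # [(10, 20, 40, 80), (50, 10, 60, 20)]
```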
It should be noted that if all the overlap areas are located in the lower half of frontal face image A, or if the areas of all the overlap areas among the candidates are smaller than the preset threshold, the overlap region is not considered a glasses area; that is, frontal face image A contains no glasses area, and the next real-time image is acquired.
Specifically, referring to FIG. 5, which is a refined flowchart of step S40 of the face-image glasses removal method of the present application. Step S40 includes the following refined steps:
determining the glasses area in the frontal face image, and determining the glasses frame within the glasses area; and
taking each pixel of the glasses frame as a center pixel, calculating new pixel information for the center pixel according to the pixel information of the pixels surrounding it, and replacing the original pixel information of the center pixel, to obtain a frontal face image with the glasses removed.
The glasses area obtained through the above steps is located in frontal face image A, the overlap area representing the glasses frame is selected from it, and the glasses frame in frontal face image A is filled by searching for the pixel information (i.e., skin color) of the pixels around the frame, yielding a frontal face image with the glasses removed.
In other embodiments, a simple image inpainting algorithm may also be adopted, which can quickly and accurately remove the glasses from the face image while retaining the detailed feature information of the human eye, improving the accuracy of face recognition.
Understandably, the parameters that need to be preset in the above embodiments, such as the preset gray-value threshold and the preset area threshold, can be set by the user according to the actual situation.
The face-image glasses removal method proposed in this embodiment removes the glasses from the face image while retaining most of the detail features of the eye region, so that subsequent face recognition achieves high accuracy. In addition, for face images with heavy dark circles or eye bags, even if such an image is misidentified as wearing glasses, the method of this embodiment can remove the dark circles or eye bags, so that such face images can still be accurately recognized.
In addition, an embodiment of the present application also proposes a computer-readable storage medium, which includes a face-image glasses removal program; when the program is executed by a processor, the following operations are implemented:
a real-time image acquisition step: acquiring a real-time image captured by a camera device, and extracting a face image from the real-time image using a face recognition algorithm;
an image preprocessing step: normalizing the face image, and performing face pose correction using an affine transformation to obtain a frontal face image;
a glasses area judging step: judging whether the frontal face image contains a glasses area by performing binarization and edge detection on the frontal face image; and
a glasses removal step: determining the glasses area in the frontal face image, and searching for pixels around the glasses area in the frontal face image to fill the glasses area, to obtain a face image with the glasses removed.
The specific implementation of the computer-readable storage medium of the present application is substantially the same as that of the face-image glasses removal method described above and is not described further here.
It should be noted that, in this document, the terms "comprise", "include" or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, apparatus, article or method including a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, apparatus, article or method. Without further limitation, an element defined by the phrase "including a ..." does not exclude the presence of other identical elements in the process, apparatus, article or method that includes the element.
The above serial numbers of the embodiments of the present application are for description only and do not represent the superiority or inferiority of the embodiments. Through the description of the above embodiments, those skilled in the art can clearly understand that the methods of the above embodiments can be implemented by software plus a necessary general-purpose hardware platform, and of course also by hardware, but in many cases the former is the better implementation. Based on this understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, can be embodied in the form of a software product. The computer software product is stored in a storage medium as described above (such as a ROM/RAM, a magnetic disk or an optical disc) and includes a number of instructions for causing a terminal device (which may be a mobile phone, a computer, a server, or a network device, etc.) to perform the methods described in the various embodiments of the present application.
The above are only preferred embodiments of the present application and do not thereby limit its patent scope. Any equivalent structural or process transformation made using the contents of the specification and drawings of the present application, or any direct or indirect application in other related technical fields, is likewise included within the patent protection scope of the present application.

Claims (20)

  1. An electronic apparatus, characterized in that the apparatus includes a memory, a processor and a camera device, the memory including a face-image glasses removal program which, when executed by the processor, implements the following steps:
    a real-time image acquisition step: acquiring a real-time image captured by the camera device, and extracting a face image from the real-time image using a face recognition algorithm;
    an image preprocessing step: normalizing the face image, and performing face pose correction using an affine transformation to obtain a frontal face image;
    a glasses area judging step: judging whether the frontal face image contains a glasses area by performing binarization and edge detection on the frontal face image; and
    a glasses removal step: determining the glasses area in the frontal face image, and searching for pixels around the glasses area in the frontal face image to fill the glasses area, to obtain a face image with the glasses removed.
  2. The electronic apparatus according to claim 1, characterized in that the glasses removal step includes:
    determining the glasses area in the frontal face image, and determining the glasses frame within the glasses area; and
    taking each pixel of the glasses frame as a center pixel, calculating new pixel information for the center pixel according to the pixel information of the pixels surrounding it, and replacing the original pixel information of the center pixel, to obtain a frontal face image with the glasses removed.
  3. The electronic apparatus according to claim 2, characterized in that the glasses area judging step includes:
    a binarization step: converting the frontal face image into a grayscale image, and binarizing the grayscale image to obtain a binarized image;
    an edge detection step: performing edge detection on the grayscale image to obtain an edge image, and performing a region filling operation on the edge image to obtain an edge-filled image;
    a projection step: projecting the binarized image onto the edge-filled image to obtain the overlap region of the binarized image and the edge-filled image; and
    a glasses area determining step: determining the glasses area in the frontal face image according to preset glasses area judging rules.
  4. The electronic apparatus according to claim 3, characterized in that the glasses area determining step includes:
    intercepting, from the overlap region, the portion located in the upper half of the frontal face image as a candidate glasses area; and
    if the area of the candidate glasses area is greater than a preset threshold, performing a rectangle approximation operation on the candidate glasses area to obtain the smallest rectangle containing the candidate glasses area, as the glasses area of the frontal face image.
  5. The electronic apparatus according to claim 1, characterized in that the face recognition algorithm may be: a geometric-feature-based method, a local feature analysis method, an eigenface method, an elastic-model-based method, or a neural network method.
  6. A method for removing glasses from a face image, applied to an electronic apparatus, characterized in that the method includes:
    a real-time image acquisition step: acquiring a real-time image captured by a camera device, and extracting a face image from the real-time image using a face recognition algorithm;
    an image preprocessing step: normalizing the face image, and performing face pose correction using an affine transformation to obtain a frontal face image;
    a glasses area judging step: judging whether the frontal face image contains a glasses area by performing binarization and edge detection on the frontal face image; and
    a glasses removal step: determining the glasses area in the frontal face image, and searching for pixels around the glasses area in the frontal face image to fill the glasses area, to obtain a face image with the glasses removed.
  7. The face-image glasses removal method according to claim 6, characterized in that the glasses removal step includes:
    determining the glasses area in the frontal face image, and determining the glasses frame within the glasses area; and
    taking each pixel of the glasses frame as a center pixel, calculating new pixel information for the center pixel according to the pixel information of the pixels surrounding it, and replacing the original pixel information of the center pixel, to obtain a frontal face image with the glasses removed.
  8. The face-image glasses removal method according to claim 7, characterized in that the glasses area judging step includes:
    a binarization step: converting the frontal face image into a grayscale image, and binarizing the grayscale image to obtain a binarized image;
    an edge detection step: performing edge detection on the grayscale image to obtain an edge image, and performing a region filling operation on the edge image to obtain an edge-filled image;
    a projection step: projecting the binarized image onto the edge-filled image to obtain the overlap region of the binarized image and the edge-filled image; and
    a glasses area determining step: determining the glasses area in the frontal face image according to preset glasses area judging rules.
  9. The face-image glasses removal method according to claim 8, characterized in that the glasses area determining step includes:
    intercepting, from the overlap region, the portion located in the upper half of the frontal face image as a candidate glasses area; and
    if the area of the candidate glasses area is greater than a preset threshold, performing a rectangle approximation operation on the candidate glasses area to obtain the smallest rectangle containing the candidate glasses area, as the glasses area of the frontal face image.
  10. The face-image glasses removal method according to claim 6, characterized in that the face recognition algorithm may be: a geometric-feature-based method, a local feature analysis method, an eigenface method, an elastic-model-based method, or a neural network method.
  11. A computer-readable storage medium, characterized in that the computer-readable storage medium includes a face-image glasses removal program which, when executed by a processor, implements the following steps:
    a real-time image acquisition step: acquiring a real-time image captured by a camera device, and extracting a face image from the real-time image using a face recognition algorithm;
    an image preprocessing step: normalizing the face image, and performing face pose correction using an affine transformation to obtain a frontal face image;
    a glasses area judging step: judging whether the frontal face image contains a glasses area by performing binarization and edge detection on the frontal face image; and
    a glasses removal step: determining the glasses area in the frontal face image, and searching for pixels around the glasses area in the frontal face image to fill the glasses area, to obtain a face image with the glasses removed.
  12. The computer-readable storage medium according to claim 11, characterized in that the glasses removal step includes:
    determining the glasses area in the frontal face image, and determining the glasses frame within the glasses area; and
    taking each pixel of the glasses frame as a center pixel, calculating new pixel information for the center pixel according to the pixel information of the pixels surrounding it, and replacing the original pixel information of the center pixel, to obtain a frontal face image with the glasses removed.
  13. The computer-readable storage medium according to claim 12, characterized in that the glasses area judging step includes:
    a binarization step: converting the frontal face image into a grayscale image, and binarizing the grayscale image to obtain a binarized image;
    an edge detection step: performing edge detection on the grayscale image to obtain an edge image, and performing a region filling operation on the edge image to obtain an edge-filled image;
    a projection step: projecting the binarized image onto the edge-filled image to obtain the overlap region of the binarized image and the edge-filled image; and
    a glasses area determining step: determining the glasses area in the frontal face image according to preset glasses area judging rules.
  14. The computer-readable storage medium according to claim 13, characterized in that the glasses area determining step includes:
    intercepting, from the overlap region, the portion located in the upper half of the frontal face image as a candidate glasses area; and
    if the area of the candidate glasses area is greater than a preset threshold, performing a rectangle approximation operation on the candidate glasses area to obtain the smallest rectangle containing the candidate glasses area, as the glasses area of the frontal face image.
  15. The computer-readable storage medium according to claim 11, characterized in that the face recognition algorithm may be: a geometric-feature-based method, a local feature analysis method, an eigenface method, an elastic-model-based method, or a neural network method.
  16. A face-image glasses removal program, characterized in that the program includes:
    an acquisition module, used to acquire a real-time image captured by a camera device, and to extract a face image from the real-time image using a face recognition algorithm;
    an image processing module, used to normalize the face image, and to perform face pose correction using an affine transformation to obtain a frontal face image;
    a judging module, used to judge whether the frontal face image contains a glasses area by performing binarization and edge detection on the frontal face image; and
    a removal module, used to determine the glasses area in the frontal face image, and to search for pixels around the glasses area in the frontal face image to fill the glasses area, to obtain a face image with the glasses removed.
  17. The face-image glasses removal program according to claim 16, characterized in that the step of "determining the glasses area in the frontal face image, searching for pixels around the glasses area in the frontal face image to fill the glasses area, and obtaining a face image with the glasses removed" includes:
    determining the glasses area in the frontal face image, and determining the glasses frame within the glasses area; and
    taking each pixel of the glasses frame as a center pixel, calculating new pixel information for the center pixel according to the pixel information of the pixels surrounding it, and replacing the original pixel information of the center pixel, to obtain a frontal face image with the glasses removed.
  18. The face-image glasses removal program according to claim 17, characterized in that the step of "judging whether the frontal face image contains a glasses area by performing binarization and edge detection on the frontal face image" includes:
    converting the frontal face image into a grayscale image, and binarizing the grayscale image to obtain a binarized image;
    performing edge detection on the grayscale image to obtain an edge image, and performing a region filling operation on the edge image to obtain an edge-filled image;
    projecting the binarized image onto the edge-filled image to obtain the overlap region of the binarized image and the edge-filled image; and
    determining the glasses area in the frontal face image according to preset glasses area judging rules.
  19. The face-image glasses removal program according to claim 18, characterized in that the step of "determining the glasses area in the frontal face image according to preset glasses area judging rules" includes:
    intercepting, from the overlap region, the portion located in the upper half of the frontal face image as a candidate glasses area; and
    if the area of the candidate glasses area is greater than a preset threshold, performing a rectangle approximation operation on the candidate glasses area to obtain the smallest rectangle containing the candidate glasses area, as the glasses area of the frontal face image.
  20. The face-image glasses removal program according to claim 16, characterized in that the face recognition algorithm may be: a geometric-feature-based method, a local feature analysis method, an eigenface method, an elastic-model-based method, or a neural network method.
PCT/CN2017/108758 2017-09-26 2017-10-31 人脸图像眼镜去除方法、装置及存储介质 WO2019061659A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710885235.7A CN107844742B (zh) 2017-09-26 2017-09-26 人脸图像眼镜去除方法、装置及存储介质
CN2017108852357 2017-09-26

Publications (1)

Publication Number Publication Date
WO2019061659A1 true WO2019061659A1 (zh) 2019-04-04

Family

ID=61661758

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/108758 WO2019061659A1 (zh) 2017-09-26 2017-10-31 人脸图像眼镜去除方法、装置及存储介质

Country Status (2)

Country Link
CN (1) CN107844742B (zh)
WO (1) WO2019061659A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112001207A (zh) * 2019-05-27 2020-11-27 北京君正集成电路股份有限公司 一种人脸识别样本库的优化方法
CN113743195A (zh) * 2021-07-23 2021-12-03 北京眼神智能科技有限公司 人脸遮挡定量分析方法、装置、电子设备及存储介质

Families Citing this family (5)

Publication number Priority date Publication date Assignee Title
CN108875549B (zh) * 2018-04-20 2021-04-09 北京旷视科技有限公司 图像识别方法、装置、系统及计算机存储介质
CN110519515B (zh) * 2019-08-28 2021-03-19 联想(北京)有限公司 一种信息处理方法及电子设备
CN111145334B (zh) * 2019-11-14 2022-04-12 清华大学 戴眼镜人脸图像眼镜三维重建方法及装置
CN113627394B (zh) * 2021-09-17 2023-11-17 平安银行股份有限公司 人脸提取方法、装置、电子设备及可读存储介质
CN115810214B (zh) * 2023-02-06 2023-05-12 广州市森锐科技股份有限公司 基于ai人脸识别核验管理方法、系统、设备及存储介质

Citations (4)

Publication number Priority date Publication date Assignee Title
US20070177793A1 (en) * 2006-01-31 2007-08-02 Fuji Photo Film Co., Ltd. Method and apparatus for automatic eyeglasses detection and removal
CN104156700A (zh) * 2014-07-26 2014-11-19 佳都新太科技股份有限公司 基于活动形状模型和加权插值法的人脸图像眼镜去除方法
CN105046250A (zh) * 2015-09-06 2015-11-11 广州广电运通金融电子股份有限公司 人脸识别的眼镜消除方法
CN106909882A (zh) * 2017-01-16 2017-06-30 广东工业大学 一种应用于保安机器人的人脸识别系统及方法

Family Cites Families (4)

Publication number Priority date Publication date Assignee Title
JP2005242640A (ja) * 2004-02-26 2005-09-08 Fuji Photo Film Co Ltd 対象物検出方法および装置並びにプログラム
US7657086B2 (en) * 2006-01-31 2010-02-02 Fujifilm Corporation Method and apparatus for automatic eyeglasses detection using a nose ridge mask
CN104408426B (zh) * 2014-11-27 2018-07-24 小米科技有限责任公司 人脸图像眼镜去除方法及装置
CN106503644B (zh) * 2016-10-19 2019-05-28 西安理工大学 基于边缘投影及颜色特征的眼镜属性检测方法

Patent Citations (4)

Publication number Priority date Publication date Assignee Title
US20070177793A1 (en) * 2006-01-31 2007-08-02 Fuji Photo Film Co., Ltd. Method and apparatus for automatic eyeglasses detection and removal
CN104156700A (zh) * 2014-07-26 2014-11-19 佳都新太科技股份有限公司 基于活动形状模型和加权插值法的人脸图像眼镜去除方法
CN105046250A (zh) * 2015-09-06 2015-11-11 广州广电运通金融电子股份有限公司 人脸识别的眼镜消除方法
CN106909882A (zh) * 2017-01-16 2017-06-30 广东工业大学 一种应用于保安机器人的人脸识别系统及方法

Non-Patent Citations (1)

Title
GUO, PEI: "Eyeglass Removal and Region Recovery in Face Image", CHINESE MASTER'S THESES FULL-TEXT DATABASE (INFORMATION SCIENCE AND TECHNOLOGY), 15 August 2015 (2015-08-15), pages 8 - 10 *

Cited By (4)

Publication number Priority date Publication date Assignee Title
CN112001207A (zh) * 2019-05-27 2020-11-27 北京君正集成电路股份有限公司 一种人脸识别样本库的优化方法
CN112001207B (zh) * 2019-05-27 2024-05-28 北京君正集成电路股份有限公司 一种人脸识别样本库的优化方法
CN113743195A (zh) * 2021-07-23 2021-12-03 北京眼神智能科技有限公司 人脸遮挡定量分析方法、装置、电子设备及存储介质
CN113743195B (zh) * 2021-07-23 2024-05-17 北京眼神智能科技有限公司 人脸遮挡定量分析方法、装置、电子设备及存储介质

Also Published As

Publication number Publication date
CN107844742A (zh) 2018-03-27
CN107844742B (zh) 2019-01-04

Similar Documents

Publication Publication Date Title
WO2019061659A1 (zh) 人脸图像眼镜去除方法、装置及存储介质
US11775056B2 (en) System and method using machine learning for iris tracking, measurement, and simulation
CN106446873B (zh) 人脸检测方法及装置
WO2019061658A1 (zh) 眼镜定位方法、装置及存储介质
WO2018176938A1 (zh) 红外光斑中心点提取方法、装置和电子设备
CN108764071B (zh) 一种基于红外和可见光图像的真实人脸检测方法及装置
CN107368806B (zh) 图像矫正方法、装置、计算机可读存储介质和计算机设备
JP6688277B2 (ja) プログラム、学習処理方法、学習モデル、データ構造、学習装置、および物体認識装置
US20160162673A1 (en) Technologies for learning body part geometry for use in biometric authentication
KR20180109665A (ko) 객체 검출을 위한 영상 처리 방법 및 장치
EP3666177B1 (en) Electronic device for determining degree of conjunctival hyperemia
US20120275665A1 (en) Method of generating a normalized digital image of an iris of an eye
Lee et al. Vasir: an open-source research platform for advanced iris recognition technologies
JP6822482B2 (ja) 視線推定装置、視線推定方法及びプログラム記録媒体
US20140301608A1 (en) Chemical structure recognition tool
JP6071002B2 (ja) 信頼度取得装置、信頼度取得方法および信頼度取得プログラム
CN110705353A (zh) 基于注意力机制的遮挡人脸的识别方法和装置
US11315360B2 (en) Live facial recognition system and method
CN107944395B (zh) 一种基于神经网络验证人证合一的方法及系统
CN113435408A (zh) 人脸活体检测方法、装置、电子设备及存储介质
CN111259763A (zh) 目标检测方法、装置、电子设备及可读存储介质
US10395090B2 (en) Symbol detection for desired image reconstruction
CN112633221A (zh) 一种人脸方向的检测方法及相关装置
CN111935480B (zh) 一种用于图像获取装置的检测方法及相关装置
WO2017219562A1 (zh) 一种二维码生成方法及装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17927747

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS (EPO FORM 1205A DATED 28.09.2020)

122 Ep: pct application non-entry in european phase

Ref document number: 17927747

Country of ref document: EP

Kind code of ref document: A1