WO2017031901A1 - Face recognition method, apparatus and terminal - Google Patents

Face recognition method, apparatus and terminal

Info

Publication number
WO2017031901A1
WO2017031901A1 (PCT/CN2015/099696)
Authority
WO
WIPO (PCT)
Prior art keywords
image
face
sub
processed
classifier
Prior art date
Application number
PCT/CN2015/099696
Other languages
English (en)
French (fr)
Inventor
陈志军
汪平仄
秦秋平
Original Assignee
小米科技有限责任公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 小米科技有限责任公司 filed Critical 小米科技有限责任公司
Priority to RU2017102521A priority Critical patent/RU2664688C2/ru
Priority to KR1020167015669A priority patent/KR20170033805A/ko
Priority to MX2017008481A priority patent/MX2017008481A/es
Priority to JP2016567408A priority patent/JP6374986B2/ja
Publication of WO2017031901A1 publication Critical patent/WO2017031901A1/zh


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G06V40/165 Detection; Localisation; Normalisation using facial parts and geometric relationships
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/217 Validation; Performance evaluation; Active pattern learning techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2413 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/24 Aligning, centring, orientation detection or correction of the image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G06V40/162 Detection; Localisation; Normalisation using pixel segmentation or colour matching
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification

Definitions

  • the present disclosure relates to the field of image processing technologies, and in particular, to a face recognition method, apparatus, and terminal.
  • a ratio between various facial organs, such as a person's eyes and nose, can be used as a face feature.
  • when the face is located in an edge region of the image, the image may contain no region approximating the usual ratios between facial organs, so the face classifier may fail to recognize the face.
  • the present disclosure provides a face recognition method, apparatus, and terminal.
  • a face recognition method, comprising: acquiring an original image; adding a specified number of pixels to an edge region of the original image to obtain an image to be processed; performing face recognition on the image to be processed; and determining a face in the original image according to the result of the face recognition.
  • adding a specified number of pixels to an edge region of the original image includes: acquiring pixel values of the pixels in the edge regions of the original image; determining, according to those pixel values and a preset face skin color pixel value, the edge regions where face skin color pixels are present; and adding the specified number of pixels to the edge regions where face skin color pixels are present.
  • performing face recognition on the image to be processed includes: acquiring multiple sub-images of the image to be processed, and using a pre-trained adaptive boosting (Adaboost) face classifier to determine whether each sub-image is a face sub-image.
  • acquiring the multiple sub-images of the image to be processed includes: traversing the image to be processed multiple times with a sliding window and taking the image area covered by the sliding window at each position during each traversal as one sub-image, where any two traversals use sliding windows of different sizes; or scaling the image to be processed multiple times to obtain images of multiple sizes, cropping the image of each size into image regions of a specified size, and taking each region as one sub-image.
  • the pre-trained Adaboost face classifier is formed by cascading multiple levels of classifiers. For any sub-image, whether the sub-image is a face sub-image is judged level by level, starting from the first-level classifier, until the last-level classifier; when the output results of all levels of classifiers identify the sub-image as a face sub-image, the sub-image is determined to be a face sub-image.
  • determining the face in the original image according to the result of the face recognition includes: determining the position, in the original image, of each sub-image determined to be a face sub-image, and determining the face in the original image according to that position.
  • a face recognition device, comprising: an acquiring module configured to acquire an original image; an adding module configured to add a specified number of pixels to an edge region of the original image acquired by the acquiring module to obtain an image to be processed; an identification module configured to perform face recognition on the image to be processed obtained by the adding module; and a determining module configured to determine a face in the original image according to the face recognition result of the identification module.
  • the adding module includes: a first acquiring unit configured to acquire pixel values of the pixels in the edge regions of the original image; a first determining unit configured to determine, according to the pixel values acquired by the first acquiring unit and a preset face skin color pixel value, the edge regions where face skin color pixels are present; and an adding unit configured to add the specified number of pixels to the edge regions, determined by the first determining unit, where face skin color pixels are present.
  • the identification module includes: a second acquiring unit configured to acquire multiple sub-images of the image to be processed; and a judging unit configured to use the pre-trained Adaboost face classifier to determine whether each sub-image acquired by the second acquiring unit is a face sub-image.
  • the second acquiring unit is configured to traverse the image to be processed multiple times with a sliding window, taking the image area covered by the sliding window at each position during each traversal as one sub-image, where any two traversals use sliding windows of different sizes; or to scale the image to be processed multiple times to obtain images of multiple sizes, crop the image of each size into image regions of a specified size, and take each region as one sub-image.
  • the pre-trained Adaboost face classifier is formed by cascading multiple levels of classifiers. The judging unit is configured to judge, for any sub-image, level by level from the first-level classifier whether the sub-image is a face sub-image, until the last-level classifier; when the output results of all levels of classifiers identify the sub-image as a face sub-image, the sub-image is determined to be a face sub-image.
  • the determining module includes: a second determining unit configured to determine, when a face sub-image exists among the multiple sub-images of the image to be processed, the position of that sub-image in the original image; and a third determining unit configured to determine the face in the original image according to that position.
  • a terminal, where the terminal includes: a processor; and a memory for storing processor-executable instructions; wherein the processor is configured to: acquire an original image; add a specified number of pixels to an edge region of the original image to obtain an image to be processed; perform face recognition on the image to be processed; and determine a face in the original image according to the result of the face recognition.
  • after the image to be processed is obtained, face recognition is performed on it to determine the face in the original image. Because a certain number of pixels are added to the edge regions of the original image during face recognition, the original image is in effect expanded, which ensures that when a face lies in an edge region of the original image, a sub-image containing the face can still be found in the expanded image. A face located in an edge region of the original image can therefore be recognized, which improves the accuracy of face recognition.
  • FIG. 1 is a flowchart of a face recognition method according to an exemplary embodiment.
  • FIG. 2 is a flowchart of a face recognition method according to an exemplary embodiment.
  • FIG. 3 is a schematic diagram of an original image, according to an exemplary embodiment.
  • FIG. 4 is a schematic diagram of an image to be processed, according to an exemplary embodiment.
  • FIG. 5 is a schematic diagram of traversing a to-be-processed image using sliding windows of different sizes, according to an exemplary embodiment.
  • FIG. 6 is a schematic diagram of a plurality of sizes of images to be processed, according to an exemplary embodiment.
  • FIG. 7 is a schematic diagram of an Adaboost face classifier according to an exemplary embodiment.
  • FIG. 8 is a block diagram of a face recognition device, according to an exemplary embodiment.
  • FIG. 9 is a block diagram of an add module, according to an exemplary embodiment.
  • FIG. 10 is a block diagram of an identification module, according to an exemplary embodiment.
  • FIG. 11 is a block diagram of a determination module, according to an exemplary embodiment.
  • FIG. 12 is a block diagram of a terminal, according to an exemplary embodiment.
  • FIG. 1 is a flowchart of a face recognition method according to an exemplary embodiment; the method is applied to a terminal. As shown in FIG. 1, the face recognition method provided by this embodiment of the present disclosure includes the following steps.
  • in step S101, the original image is acquired.
  • in step S102, a specified number of pixels are added to an edge region of the original image to obtain an image to be processed.
  • in step S103, face recognition is performed on the image to be processed.
  • in step S104, the face in the original image is determined according to the result of the face recognition.
  • in the method provided by this embodiment of the present disclosure, a specified number of pixels are added to the edge region of the original image to obtain an image to be processed, and face recognition is then performed on the image to be processed to determine the face in the original image. Because a certain number of pixels are added to the edge region of the original image during face recognition, the original image is in effect expanded, which ensures that when a face lies in an edge region of the original image, a sub-image containing the face can still be found in the expanded image. A face located in an edge region of the original image can therefore be recognized, which improves the accuracy of face recognition.
  • adding a specified number of pixels to an edge region of the original image includes: determining the edge regions where face skin color pixels are present, and adding the specified number of pixels to those edge regions.
  • performing face recognition on the image to be processed includes: acquiring multiple sub-images of the image to be processed, and using a pre-trained adaptive boosting (Adaboost) face classifier to determine whether each sub-image is a face sub-image.
  • acquiring multiple sub-images of the image to be processed includes: traversing the image to be processed multiple times with a sliding window, taking the image area covered by the sliding window at each position during each traversal as one sub-image, where any two traversals use sliding windows of different sizes; or scaling the image to be processed multiple times to obtain images of multiple sizes, cropping the image of each size into image regions of a specified size, and taking each image region as one sub-image.
  • the pre-trained Adaboost face classifier is formed by cascading multiple levels of classifiers, and using it to determine whether each sub-image is a face sub-image includes: for any sub-image, judging level by level, starting from the first-level classifier, whether the sub-image is a face sub-image, until the last-level classifier of the Adaboost face classifier; and, when the output results of all levels of classifiers identify the sub-image as a face sub-image, determining that the sub-image is a face sub-image.
  • determining the face in the original image according to the result of the face recognition includes: determining the position, in the original image, of each sub-image determined to be a face sub-image, and determining the face in the original image according to that position.
  • FIG. 2 is a flowchart of a face recognition method according to an exemplary embodiment; the method is applied to a terminal. As shown in FIG. 2, the face recognition method provided by this embodiment of the present disclosure includes the following steps.
  • in step S201, the original image is acquired.
  • the original image is an image on which face recognition needs to be performed. This embodiment of the present disclosure needs to identify whether the original image contains a face and, if so, in which region of the original image the face is located.
  • the original image can be obtained in many ways: an image can be read from any storage device, downloaded from the Internet, scanned with a scanner, or taken by a camera and used as the original image.
  • in step S202, a specified number of pixels are added to the edge regions of the original image to obtain an image to be processed.
  • for example, the edge regions of the original image are the outermost layer of pixels along each of its four sides. FIG. 3 shows a schematic diagram of an original image in which the face is located in the upper edge region. When a face is located in an edge region of the original image, it may fail to be recognized during face recognition. To avoid this, a specified number of pixels are first added to the edge regions of the original image. FIG. 4 shows a schematic diagram of an image to be processed, obtained by adding pixels to the four edge regions of the original image shown in FIG. 3; the hatched areas in FIG. 4 represent the added pixels.
  • the specific number of added pixels is not limited in the embodiments of the present disclosure and can be set as needed; for example, two, five, or ten pixels may be added around each pixel of the outermost layer.
  • all of the added pixels may have the same pixel value, that is, the same color, which may be white, black, or any other color; this is not limited in the embodiments of the present disclosure. Because pixels of the same color share the same pixel value, using added pixels of a single color ensures that, during face recognition of the image to be processed, any region whose pixels all share that value can be identified as added pixels; no extra recognition processing is needed there, so a relatively high recognition speed can be maintained.
  • for example, a specified number of pixels may be added to all four edge regions of the original image, and the number of pixels added to each edge region may differ; for instance, the left and right edge regions may receive different numbers of pixels.
  • however, adding pixels to the edge regions increases the amount of computation during image recognition. To minimize this computation, the edge regions where a face may exist can be detected first, and pixels added only to those regions.
  • because the pixel value of a face skin color pixel is usually a specific value or falls within a certain range, whether a face may exist in an edge region can be determined by detecting whether the pixel values of the pixels in that region are face skin color pixel values.
  • in view of the above, adding a specified number of pixels to the edge regions of the original image includes, but is not limited to, the following steps S2021 to S2023:
  • in step S2021, the pixel values of the pixels in the edge regions of the original image are acquired. This can be done by determining the RGB value of each pixel, which can be obtained by, among other means, a color sensor.
  • in step S2022, the edge regions where face skin color pixels are present are determined according to the pixel values of the pixels in the edge regions and a preset face skin color pixel value.
  • in a possible implementation, the pixel value of each pixel in an edge region may be compared with the preset face skin color pixel value, and the edge regions containing face skin color pixels are determined from the comparison results. If the difference between a pixel's value and the preset face skin color pixel value is not greater than a first preset threshold, that pixel can be determined to be a face skin color pixel. The specific value of the first preset threshold is not limited in the embodiments of the present disclosure; however, to ensure that face skin color pixels are identified accurately, it may be set relatively small.
  • whether an edge region contains face skin color pixels may be decided by the ratio of the number of face skin color pixels to the total number of pixels in that region: when the ratio is greater than a second preset threshold, the edge region is determined to contain face skin color pixels; otherwise, it is determined not to. The specific value of the second preset threshold can be set as needed.
  • in step S2023, the specified number of pixels are added to the edge regions where face skin color pixels are present.
  • that is, when adding pixels to the edge regions of the original image, the specified number of pixels may be added only to the edge regions where face skin color pixels are present. For example, when face skin color pixels exist in the upper edge region of the original image, pixels may be added only to the upper edge region, reducing the amount of computation for image recognition.
  • in step S203, multiple sub-images of the image to be processed are acquired, and a pre-trained adaptive boosting (Adaboost) face classifier is used to determine whether each sub-image is a face sub-image.
  • this step is a specific implementation of performing face recognition on the image to be processed: multiple sub-images of the image to be processed are first acquired, and each is then judged for being a face sub-image. The sub-images can be acquired in, among others, the following two ways.
  • the first way: the image to be processed is traversed multiple times with a sliding window, and the image area covered by the sliding window at each position during each traversal is taken as one sub-image, where any two traversals use sliding windows of different sizes.
  • that is, sliding windows of different sizes are used to traverse the image to be processed separately. Many window sizes can be used; for example, one traversal may use a 3*3 window and the next a 5*5 window. FIG. 5 shows a schematic diagram of traversing an image to be processed with sliding windows of different sizes; each thick solid square in FIG. 5 is a sliding window.
  • when traversing with a window of any size, the sliding window moves across the image to be processed in the horizontal direction (X direction) and the vertical direction (Y direction) by a specified step. Each step in the X or Y direction moves the window to a new position, each position delimits an image range, and the image range delimited at each position is one sub-image of the image to be processed. The specified step may be one pixel, two pixels, and so on.
  • the second way: the image to be processed is scaled multiple times to obtain images of multiple sizes; the image of any size is cropped into multiple image regions of a specified size, and each image region is taken as one sub-image.
  • each scaling of the image to be processed yields an image of one size. The image of a given size may be cropped into multiple image regions, each of the specified size, for example 3*3 (pixels*pixels) or 5*5.
  • FIG. 6 shows a schematic diagram of images to be processed of multiple sizes; diagrams (a) to (c) of FIG. 6 each show an image of one size. Diagram (c) of FIG. 6 illustrates cropping the image of that size: each rectangle enclosed by a thick solid line is one sub-image of the image of that size.
  • further, whether each sub-image is a face sub-image can be determined with a pre-trained face classifier. Many types of face classifier can be used, for example a support vector machine face classifier, a neural network face classifier, or an adaptive boosting (Adaboost) face classifier. For ease of description, the following takes the Adaboost face classifier as the example.
  • in a possible implementation, to increase the precision of face recognition, the pre-trained Adaboost face classifier is formed by cascading multiple levels of classifiers. Each level of the Adaboost face classifier is used to judge whether a given sub-image is a face sub-image, and the output result of any level is either "1" or "0": an output of "1" indicates that the level determines the sub-image to be a face sub-image, and an output of "0" indicates that it does not.
  • each level of the Adaboost face classifier is a strong classifier, and each strong classifier in turn comprises multiple weak classifiers. Each level is trained by training the weak classifiers it comprises, and the output result of a level is decided according to the data processing of all of its weak classifiers. For the way the Adaboost face classifier is trained and the way each level's output result is determined, reference may be made to existing Adaboost face classifiers; these details are not elaborated in the embodiments of the present disclosure.
  • the number of cascaded levels in the pre-trained Adaboost face classifier is not limited in the embodiments of the present disclosure. To make the recognition result more accurate, the classifier may comprise a relatively large number of levels, for example 5 levels or 8 levels. FIG. 7 shows a schematic diagram of an Adaboost face classifier, in which each circular area represents one level of classifier.
  • when the pre-trained Adaboost face classifier is used to determine whether each sub-image is a face sub-image, for any sub-image, whether it is a face sub-image is judged level by level, starting from the first-level classifier, until the last-level classifier of the Adaboost face classifier. When the output results of all levels of classifiers identify the sub-image as a face sub-image, the sub-image is determined to be a face sub-image; when the output result of any level identifies it as a non-face sub-image, it is determined to be a non-face sub-image.
  • specifically, each sub-image is input to the Adaboost face classifier starting from the first-level classifier. When the first-level classifier determines that the sub-image is a face sub-image, the sub-image is input to the second-level classifier, which makes its own judgment, and so on, until the last-level classifier; when the first-level classifier determines that the sub-image is not a face sub-image, it fetches the next sub-image and recognition proceeds with that one.
  • it should be noted that, by the working principle of a pre-trained face classifier, a face located in an edge region of the original image usually cannot be recognized, whereas an occluded face can be recognized correctly. Adding a specified number of pixels to the edge regions of the original image makes a face located in an edge region equivalent to a face occluded by the added pixels; the pre-trained face classifier can therefore recognize it, which improves the accuracy of recognizing faces in edge regions.
  • in step S204, the face in the original image is determined according to the result of the face recognition.
  • after every sub-image has been checked, the sub-images belonging to a face can be determined from the recognition results. To determine which region of the original image the face occupies, the position, in the original image, of each sub-image determined to be a face sub-image is determined, and the face in the original image is determined from that position.
  • in a possible implementation, for any sub-image recognized as a face sub-image, its position in the original image can be obtained by extracting the pixel values of the sub-image's pixels and of the original image's pixels and comparing them; when the pixel values of some region of the original image all match those of the sub-image, the sub-image's position in the original image is located, and the face in the original image can then be determined.
  • in the method provided by this embodiment of the present disclosure, a specified number of pixels are added to the edge region of the original image to obtain an image to be processed, and face recognition is performed on the image to be processed to determine the face in the original image. Because a certain number of pixels are added to the edge region of the original image during face recognition, the original image is in effect expanded, which ensures that when a face lies in an edge region, a sub-image containing the face can still be found in the expanded image; a face located in an edge region of the original image can therefore be recognized, improving the accuracy of face recognition.
  • FIG. 8 is a block diagram of a face recognition device, according to an exemplary embodiment. The face recognition device includes an acquiring module 801, an adding module 802, an identification module 803, and a determining module 804, wherein:
  • the acquiring module 801 is configured to acquire an original image;
  • the adding module 802 is configured to add a specified number of pixels to an edge region of the original image acquired by the acquiring module 801 to obtain an image to be processed;
  • the identification module 803 is configured to perform face recognition on the image to be processed obtained by the adding module 802;
  • the determining module 804 is configured to determine a face in the original image according to the recognition result of the identification module 803.
  • in the device provided by this embodiment of the present disclosure, a specified number of pixels are added to the edge region of the original image to obtain an image to be processed, and face recognition is performed on the image to be processed to determine the face in the original image. Because the original image is in effect expanded by the added pixels, a sub-image containing the face can still be found when the face lies in an edge region, so a face located in an edge region of the original image can be recognized, improving the accuracy of face recognition.
  • the adding module 802 includes a first acquiring unit 8021, a first determining unit 8022, and an adding unit 8023, wherein:
  • the first acquiring unit 8021 is configured to acquire pixel values of the pixels in the edge regions of the original image;
  • the first determining unit 8022 is configured to determine, according to the pixel values acquired by the first acquiring unit and a preset face skin color pixel value, the edge regions where face skin color pixels are present;
  • the adding unit 8023 is configured to add a specified number of pixels to the edge regions, determined by the first determining unit 8022, where face skin color pixels are present.
  • the identification module 803 includes: a second acquiring unit 8031, configured to acquire multiple sub-images of the image to be processed; and a judging unit 8032, configured to use the pre-trained Adaboost face classifier to determine whether each sub-image acquired by the second acquiring unit is a face sub-image.
  • the second acquiring unit 8031 is configured to traverse the image to be processed multiple times with a sliding window, taking the image area covered by the sliding window at each position during each traversal as one sub-image of the image to be processed, where any two traversals use sliding windows of different sizes; or to scale the image to be processed multiple times to obtain images of multiple sizes, crop the image of each size into image regions of a specified size, and take each image region as one sub-image.
  • the pre-trained Adaboost face classifier is formed by cascading multiple levels of classifiers, and the judging unit 8032 is configured to judge, for any sub-image, level by level from the first-level classifier whether the sub-image is a face sub-image, until the last-level classifier, and to determine the sub-image to be a face sub-image when the output results of all levels identify it as one.
  • the determining module 804 includes a second determining unit 8041 and a third determining unit 8042, wherein: the second determining unit 8041 is configured to determine, when a face sub-image exists among the multiple sub-images of the image to be processed, the position of that sub-image in the original image; and the third determining unit 8042 is configured to determine the face in the original image according to that position.
  • FIG. 12 is a block diagram of a terminal 1200, which may be used to perform the face recognition method provided by the embodiment corresponding to FIG. 1 or FIG. 2, according to an exemplary embodiment.
  • the terminal 1200 can be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, and the like.
  • the terminal 1200 may include one or more of the following components: a processing component 1202, a memory 1204, a power component 1206, a multimedia component 1208, an audio component 1210, an I/O (Input/Output) interface 1212, a sensor component 1214, and a communication component 1216.
  • Processing component 1202 typically controls the overall operations of terminal 1200, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations.
  • Processing component 1202 can include one or more processors 1220 to execute instructions to perform all or part of the steps described above.
  • the processing component 1202 can include one or more modules to facilitate interaction between the processing component 1202 and other components. For example, it can include a multimedia module to facilitate interaction between the multimedia component 1208 and the processing component 1202.
  • the memory 1204 is configured to store various types of data to support operation at the terminal 1200. Examples of such data include instructions for any application or method operating on terminal 1200, contact data, phone book data, messages, pictures, videos, and the like.
  • Memory 1204 can be implemented by any type of volatile or non-volatile storage device, or a combination thereof, such as SRAM (Static Random Access Memory), EEPROM (Electrically-Erasable Programmable Read-Only Memory), EPROM (Erasable Programmable Read-Only Memory), PROM (Programmable Read-Only Memory), ROM (Read-Only Memory), magnetic memory, flash memory, magnetic disk, or optical disc.
  • Power component 1206 provides power to various components of terminal 1200.
  • Power component 1206 can include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for terminal 1200.
  • the multimedia component 1208 includes a screen that provides an output interface between the terminal 1200 and the user.
  • the screen may include an LCD (Liquid Crystal Display) and a TP (Touch Panel). If the screen includes a touch panel, the screen can be implemented as a touch screen to receive input signals from the user.
  • the touch panel includes one or more touch sensors to sense touches, slides, and gestures on the touch panel. The touch sensor can sense not only the boundaries of the touch or sliding action, but also the duration and pressure associated with the touch or slide operation.
  • the multimedia component 1208 includes a front camera and/or a rear camera. When the terminal 1200 is in an operation mode such as a shooting mode or a video mode, the front camera and/or the rear camera can receive external multimedia data. Each front and rear camera can be a fixed optical lens system or have focal length and optical zoom capabilities.
  • the audio component 1210 is configured to output and/or input an audio signal.
  • the audio component 1210 includes a MIC (Microphone) that is configured to receive an external audio signal when the terminal 1200 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode.
  • the received audio signal may be further stored in memory 1204 or transmitted via communication component 1216.
  • audio component 1210 also includes a speaker for outputting an audio signal.
  • the I/O interface 1212 provides an interface between the processing component 1202 and the peripheral interface module, which may be a keyboard, a click wheel, a button, or the like. These buttons may include, but are not limited to, a home button, a volume button, a start button, and a lock button.
  • Sensor assembly 1214 includes one or more sensors for providing terminal 1200 with a status assessment of various aspects.
  • the sensor component 1214 can detect the on/off state of the terminal 1200 and the relative positioning of components (for example, the display and keypad of the terminal 1200), and can also detect a change in position of the terminal 1200 or one of its components, the presence or absence of user contact with the terminal 1200, the orientation or acceleration/deceleration of the terminal 1200, and temperature changes of the terminal 1200.
  • Sensor assembly 1214 can include a proximity sensor configured to detect the presence of nearby objects without any physical contact.
  • Sensor component 1214 can also include a light sensor, such as a CMOS (Complementary Metal Oxide Semiconductor) or CCD (Charge-Coupled Device) image sensor, for use in imaging applications.
  • the sensor assembly 1214 can also include an acceleration sensor, a gyro sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
  • Communication component 1216 is configured to facilitate wired or wireless communication between terminal 1200 and other devices.
  • the terminal 1200 can access a wireless network based on a communication standard such as WiFi, 2G or 3G, or a combination thereof.
  • communication component 1216 receives broadcast signals or broadcast associated information from an external broadcast management system via a broadcast channel.
  • the communication component 1216 further includes an NFC (Near Field Communication) module to facilitate short-range communication.
  • the NFC module can be implemented based on RFID (Radio Frequency Identification) technology, IrDA (Infrared Data Association) technology, UWB (Ultra Wideband) technology, BT (Bluetooth) technology, and other technologies.
  • the terminal 1200 may be implemented by one or more ASICs (Application Specific Integrated Circuits), DSPs (Digital Signal Processors), DSPDs (Digital Signal Processing Devices), PLDs (Programmable Logic Devices), FPGAs (Field Programmable Gate Arrays), controllers, microcontrollers, microprocessors, or other electronic components, to perform the face recognition method provided by the embodiment corresponding to FIG. 1 or FIG. 2.
  • a non-transitory computer-readable storage medium comprising instructions is also provided, such as the memory 1204 comprising instructions, which are executable by the processor 1220 of the terminal 1200 to perform the above face recognition method.
  • the non-transitory computer-readable storage medium may be a ROM, a RAM (Random Access Memory), a CD-ROM (Compact Disc Read-Only Memory), a magnetic tape, a floppy disk, an optical data storage device, and the like.

Abstract

A face recognition method, apparatus and terminal, belonging to the field of image processing technologies. The method includes: acquiring an original image (S101); adding a specified number of pixels to an edge region of the original image to obtain an image to be processed (S102); performing face recognition on the image to be processed (S103); and determining a face in the original image according to a result of the face recognition (S104). After the image to be processed is obtained by adding a specified number of pixels to the edge region of the original image, face recognition is performed on the image to be processed to determine the face in the original image. The method can improve the accuracy of face recognition.

Description

Face recognition method, apparatus and terminal
CROSS-REFERENCE TO RELATED APPLICATION
This application is based on and claims priority to Chinese Patent Application No. 201510520457.X, filed on August 21, 2015, the entire contents of which are incorporated herein by reference.
TECHNICAL FIELD
The present disclosure relates to the field of image processing technologies, and in particular to a face recognition method, apparatus and terminal.
BACKGROUND
In recent years, face recognition technology has found increasing application value in fields such as security access control, visual surveillance, content-based image retrieval, and next-generation human-machine interfaces, so how to recognize faces in images has received wide attention from researchers.
In the related art, face recognition can be performed based on face features; for example, the ratios between a person's organs, such as the eyes and nose, can be used as a face feature. However, when a face is located in an edge region of an image, the image does not include the whole face, so no image region approximating the ratios between facial organs may be found, and the face classifier may fail to recognize the face.
SUMMARY
The present disclosure provides a face recognition method, apparatus and terminal.
According to a first aspect of the embodiments of the present disclosure, a face recognition method is provided, the method including:
acquiring an original image;
adding a specified number of pixels to an edge region of the original image to obtain an image to be processed;
performing face recognition on the image to be processed; and
determining a face in the original image according to a result of the face recognition.
With reference to the first aspect, in a first possible implementation of the first aspect, the adding a specified number of pixels to an edge region of the original image includes:
acquiring pixel values of pixels in the edge regions of the original image;
determining, according to the pixel values of the pixels in the edge regions and a preset face skin color pixel value, the edge regions where face skin color pixels are present; and
adding the specified number of pixels to the edge regions where face skin color pixels are present.
With reference to the first aspect or the first possible implementation of the first aspect, in a second possible implementation of the first aspect, the performing face recognition on the image to be processed includes:
acquiring multiple sub-images of the image to be processed; and
determining, using a pre-trained adaptive boosting face classifier, whether each sub-image is a face sub-image.
With reference to the second possible implementation of the first aspect, in a third possible implementation of the first aspect, the acquiring multiple sub-images of the image to be processed includes:
traversing the image to be processed multiple times using a sliding window, and taking the image area covered by the sliding window at each position during each traversal as one sub-image of the image to be processed, wherein any two traversals of the image to be processed use sliding windows of different sizes; or
scaling the image to be processed multiple times to obtain images to be processed of multiple sizes; and, for the image to be processed of any size, cropping it into multiple image regions of a specified size and taking each image region as one sub-image.
With reference to the second possible implementation of the first aspect, in a fourth possible implementation of the first aspect, the pre-trained adaptive boosting face classifier is formed by cascading multiple levels of classifiers, and the determining, using a pre-trained adaptive boosting face classifier, whether each sub-image is a face sub-image includes:
for any sub-image, judging level by level, starting from a first-level classifier of the pre-trained adaptive boosting face classifier, whether the sub-image is a face sub-image, until a last-level classifier of the adaptive boosting face classifier; and
when output results of all levels of classifiers identify the sub-image as a face sub-image, determining that the sub-image is a face sub-image.
With reference to the second possible implementation of the first aspect, in a fifth possible implementation of the first aspect, the determining a face in the original image according to a result of the face recognition includes:
when a sub-image that is a face sub-image exists among the multiple sub-images of the image to be processed, determining a position of the face sub-image in the original image; and
determining the face in the original image according to the position of the face sub-image in the original image.
According to a second aspect of the embodiments of the present disclosure, a face recognition device is provided, the device including:
an acquiring module, configured to acquire an original image;
an adding module, configured to add a specified number of pixels to an edge region of the original image acquired by the acquiring module to obtain an image to be processed;
an identification module, configured to perform face recognition on the image to be processed obtained by the adding module; and
a determining module, configured to determine a face in the original image according to a result of the face recognition by the identification module.
With reference to the second aspect, in a first possible implementation of the second aspect, the adding module includes:
a first acquiring unit, configured to acquire pixel values of pixels in the edge regions of the original image;
a first determining unit, configured to determine, according to the pixel values acquired by the first acquiring unit and a preset face skin color pixel value, the edge regions where face skin color pixels are present; and
an adding unit, configured to add the specified number of pixels to the edge regions, determined by the first determining unit, where face skin color pixels are present.
With reference to the second aspect or the first possible implementation of the second aspect, in a second possible implementation of the second aspect, the identification module includes:
a second acquiring unit, configured to acquire multiple sub-images of the image to be processed; and
a judging unit, configured to determine, using a pre-trained adaptive boosting face classifier, whether each sub-image acquired by the second acquiring unit is a face sub-image.
With reference to the second possible implementation of the second aspect, in a third possible implementation of the second aspect, the second acquiring unit is configured to traverse the image to be processed multiple times using a sliding window, taking the image area covered by the sliding window at each position during each traversal as one sub-image of the image to be processed, wherein any two traversals of the image to be processed use sliding windows of different sizes; or
to scale the image to be processed multiple times to obtain images to be processed of multiple sizes and, for the image to be processed of any size, crop it into multiple image regions of a specified size, taking each image region as one sub-image.
With reference to the second possible implementation of the second aspect, in a fourth possible implementation of the second aspect, the pre-trained adaptive boosting face classifier is formed by cascading multiple levels of classifiers, and the judging unit is configured to: for any sub-image, judge level by level, starting from a first-level classifier of the pre-trained adaptive boosting face classifier, whether the sub-image is a face sub-image, until a last-level classifier of the adaptive boosting face classifier; and, when output results of all levels of classifiers identify the sub-image as a face sub-image, determine that the sub-image is a face sub-image.
With reference to the second possible implementation of the second aspect, in a fifth possible implementation of the second aspect, the determining module includes:
a second determining unit, configured to determine, when a sub-image that is a face sub-image exists among the multiple sub-images of the image to be processed, a position of the face sub-image in the original image; and
a third determining unit, configured to determine the face in the original image according to the position of the face sub-image in the original image.
According to a third aspect of the embodiments of the present disclosure, a terminal is provided, the terminal including:
a processor; and
a memory for storing processor-executable instructions;
wherein the processor is configured to:
acquire an original image;
add a specified number of pixels to an edge region of the original image to obtain an image to be processed;
perform face recognition on the image to be processed; and
determine a face in the original image according to a result of the face recognition.
The technical solutions provided by the embodiments of the present disclosure may include the following beneficial effects:
A specified number of pixels are added to the edge region of the original image to obtain an image to be processed, and face recognition is then performed on the image to be processed to determine the face in the original image. Because a certain number of pixels are added to the edge region of the original image before face recognition, the original image is in effect expanded, which ensures that when a face lies in an edge region of the original image, a sub-image containing the face can still be found in the expanded image. A face located in an edge region of the original image can therefore be recognized, which improves the accuracy of face recognition.
It should be understood that the above general description and the following detailed description are merely exemplary and explanatory, and do not limit the present disclosure.
BRIEF DESCRIPTION OF THE DRAWINGS
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present invention and, together with the description, serve to explain the principles of the present invention.
FIG. 1 is a flowchart of a face recognition method according to an exemplary embodiment.
FIG. 2 is a flowchart of a face recognition method according to an exemplary embodiment.
FIG. 3 is a schematic diagram of an original image according to an exemplary embodiment.
FIG. 4 is a schematic diagram of an image to be processed according to an exemplary embodiment.
FIG. 5 is a schematic diagram of traversing an image to be processed using sliding windows of different sizes according to an exemplary embodiment.
FIG. 6 is a schematic diagram of images to be processed of multiple sizes according to an exemplary embodiment.
FIG. 7 is a schematic diagram of an Adaboost face classifier according to an exemplary embodiment.
FIG. 8 is a block diagram of a face recognition device according to an exemplary embodiment.
FIG. 9 is a block diagram of an adding module according to an exemplary embodiment.
FIG. 10 is a block diagram of an identification module according to an exemplary embodiment.
FIG. 11 is a block diagram of a determining module according to an exemplary embodiment.
FIG. 12 is a block diagram of a terminal according to an exemplary embodiment.
DETAILED DESCRIPTION
Exemplary embodiments will be described in detail here, examples of which are shown in the accompanying drawings. Where the following description refers to the drawings, the same numbers in different drawings denote the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present invention; rather, they are merely examples of devices and methods consistent with some aspects of the present invention, as detailed in the appended claims.
FIG. 1 is a flowchart of a face recognition method according to an exemplary embodiment; the face recognition method is applied to a terminal. As shown in FIG. 1, the face recognition method provided by this embodiment of the present disclosure includes the following steps.
In step S101, an original image is acquired.
In step S102, a specified number of pixels are added to an edge region of the original image to obtain an image to be processed.
In step S103, face recognition is performed on the image to be processed.
In step S104, a face in the original image is determined according to a result of the face recognition.
In the method provided by this embodiment of the present disclosure, a specified number of pixels are added to the edge region of the original image to obtain an image to be processed, and face recognition is then performed on the image to be processed to determine the face in the original image. Because a certain number of pixels are added to the edge region of the original image during face recognition, the original image is in effect expanded, which ensures that when a face lies in an edge region of the original image, a sub-image containing the face can still be found in the expanded image. A face located in an edge region of the original image can therefore be recognized, which improves the accuracy of face recognition.
In another embodiment, adding a specified number of pixels to an edge region of the original image includes:
acquiring pixel values of pixels in the edge regions of the original image;
determining, according to the pixel values of the pixels in the edge regions and a preset face skin color pixel value, the edge regions where face skin color pixels are present; and
adding the specified number of pixels to the edge regions where face skin color pixels are present.
In another embodiment, performing face recognition on the image to be processed includes:
acquiring multiple sub-images of the image to be processed; and
determining, using a pre-trained adaptive boosting face classifier, whether each sub-image is a face sub-image.
In another embodiment, acquiring multiple sub-images of the image to be processed includes:
traversing the image to be processed multiple times using a sliding window, taking the image area covered by the sliding window at each position during each traversal as one sub-image of the image to be processed, wherein any two traversals use sliding windows of different sizes; or
scaling the image to be processed multiple times to obtain images to be processed of multiple sizes, cropping the image to be processed of each size into multiple image regions of a specified size, and taking each image region as one sub-image.
In another embodiment, the pre-trained adaptive boosting face classifier is formed by cascading multiple levels of classifiers, and determining, using the pre-trained adaptive boosting face classifier, whether each sub-image is a face sub-image includes:
for any sub-image, judging level by level, starting from a first-level classifier of the pre-trained adaptive boosting face classifier, whether the sub-image is a face sub-image, until a last-level classifier of the adaptive boosting face classifier; and
when output results of all levels of classifiers identify the sub-image as a face sub-image, determining that the sub-image is a face sub-image.
In another embodiment, determining a face in the original image according to a result of the face recognition includes:
when a sub-image that is a face sub-image exists among the multiple sub-images of the image to be processed, determining a position of the face sub-image in the original image; and
determining the face in the original image according to the position of the face sub-image in the original image.
All of the above optional technical solutions may be combined in any manner to form optional embodiments of the present invention, which are not described here one by one.
FIG. 2 is a flowchart of a face recognition method according to an exemplary embodiment; the face recognition method is applied to a terminal. As shown in FIG. 2, the face recognition method provided by this embodiment of the present disclosure includes the following steps.
In step S201, an original image is acquired.
The original image is an image on which face recognition needs to be performed. This embodiment of the present disclosure needs to identify whether the original image contains a face and, if so, in which region of the original image the face is located.
There are many ways to acquire the original image. For example, an image can be read from any storage device as the original image; an image can be downloaded from the Internet as the original image; an image can be scanned with a scanner to obtain the original image; or an image taken by a camera can be used as the original image.
In step S202, a specified number of pixels are added to the edge regions of the original image to obtain an image to be processed.
For example, the edge regions of the original image are the outermost layer of pixels along each of the four sides of the original image. FIG. 3 shows a schematic diagram of an original image in which the face is located in the upper edge region. When a face is located in an edge region of the original image, it may fail to be recognized during face recognition. To avoid this, this embodiment of the present disclosure first adds a specified number of pixels to the edge regions of the original image to obtain an image to be processed. FIG. 4 shows a schematic diagram of an image to be processed, obtained by adding pixels to the four edge regions of the original image shown in FIG. 3; the hatched areas in FIG. 4 represent the added pixels.
The specific number of added pixels is not limited in the embodiments of the present disclosure and can be set as needed; for example, two, five, or ten more pixels may be added around each pixel of the outermost layer.
In addition, when pixels are added to the edge regions of the original image, all of the added pixels may have the same pixel value, that is, the same color. That color may be white, black, or another color, which is not limited in the embodiments of the present disclosure. Because pixels of the same color have the same pixel value, using added pixels of a single color ensures that, during subsequent face recognition of the image to be processed, any region whose pixels all share that pixel value can be identified as added pixels; no extra recognition processing is needed there, so a relatively high recognition speed can be maintained. A minimal sketch of this padding step is shown below.
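The sketch assumes OpenCV and NumPy are installed; the 10-pixel margin, the white fill color, and the file name "original.jpg" are illustrative assumptions rather than values fixed by the disclosure.

```python
import cv2

def pad_edges(original, margin=10, color=(255, 255, 255)):
    """Expand the original image by adding constant-color pixels around
    its border, producing the image to be processed (step S202)."""
    return cv2.copyMakeBorder(original, margin, margin, margin, margin,
                              cv2.BORDER_CONSTANT, value=color)

original = cv2.imread("original.jpg")  # the image requiring face recognition
to_process = pad_edges(original)       # all added pixels share one pixel value
```

Because every added pixel shares one pixel value, a later stage can cheaply recognize the added border and skip it, as the paragraph above notes.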
For example, when adding a specified number of pixels to the edge regions of the original image, pixels may be added to all four edge regions, and the number of pixels added to each edge region may differ; for instance, the numbers of pixels added to the left and right edge regions may be different. However, adding pixels to the edge regions of the original image increases the amount of computation during image recognition. To minimize this computation, the edge regions where a face may exist can be detected first, and pixels added only to those regions.
In a possible implementation, because the pixel value of a face skin color pixel is usually a specific value or falls within a certain range, whether a face may exist in an edge region can be determined by detecting whether the pixel values of the pixels in that region are face skin color pixel values. Accordingly, adding a specified number of pixels to the edge regions of the original image includes, but is not limited to, the following steps S2021 to S2023:
In step S2021, the pixel values of the pixels in the edge regions of the original image are acquired.
For example, the pixel values of the pixels in the edge regions can be acquired by determining the RGB values of the pixels, which can be implemented by, among other means, a color sensor.
In step S2022, the edge regions where face skin color pixels are present are determined according to the pixel values of the pixels in the edge regions and a preset face skin color pixel value.
In a possible implementation, the pixel value of each pixel in the edge regions may be compared with the preset face skin color pixel value, and the edge regions where face skin color pixels are present are determined according to the comparison results.
For example, when comparing the pixel value of any pixel in an edge region with the preset face skin color pixel value, if the difference between them is not greater than a first preset threshold, the pixel can be determined to be a face skin color pixel. The specific value of the first preset threshold is not limited in the embodiments of the present disclosure; however, to ensure that face skin color pixels are identified accurately, the first preset threshold may be set relatively small.
For example, when determining from the comparison results whether an edge region contains face skin color pixels, the decision for any edge region may depend on the ratio of the number of face skin color pixels to the total number of pixels in that region. When the ratio is greater than a second preset threshold, the edge region is determined to contain face skin color pixels; otherwise, it is determined not to. The specific value of the second preset threshold can be set as needed.
In step S2023, the specified number of pixels are added to the edge regions where face skin color pixels are present.
That is, when adding pixels to the edge regions of the original image, the embodiments of the present disclosure may add the specified number of pixels only to the edge regions where face skin color pixels are present. For example, when face skin color pixels exist in the upper edge region of the original image, pixels may be added only to the upper edge region, reducing the amount of computation for image recognition. A sketch of this selective padding is given below.
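The sketch below covers steps S2021 to S2023, assuming an RGB input array; the reference skin color, the per-channel distance test, and both thresholds are illustrative assumptions, since the disclosure deliberately leaves their concrete values open.

```python
import numpy as np

SKIN_REF = np.array([180, 130, 110])  # assumed reference face skin color (R, G, B)
FIRST_THRESHOLD = 60                  # max channel distance for a skin pixel (assumed)
SECOND_THRESHOLD = 0.10               # min ratio of skin pixels per edge region (assumed)

def edges_with_skin(image_rgb):
    """Return the edge regions whose outermost pixel rows/columns contain
    a sufficient ratio of face skin color pixels (steps S2021-S2022)."""
    strips = {
        "top": image_rgb[0, :, :], "bottom": image_rgb[-1, :, :],
        "left": image_rgb[:, 0, :], "right": image_rgb[:, -1, :],
    }
    detected = []
    for name, strip in strips.items():
        distance = np.abs(strip.astype(int) - SKIN_REF).max(axis=1)
        if np.mean(distance <= FIRST_THRESHOLD) > SECOND_THRESHOLD:
            detected.append(name)
    return detected

def pad_skin_edges(image_rgb, margin=10, value=255):
    """Add pixels only along the edge regions where face skin color
    pixels are present (step S2023)."""
    pad_width = [[0, 0], [0, 0], [0, 0]]  # (rows, cols, channels)
    for name in edges_with_skin(image_rgb):
        axis = 0 if name in ("top", "bottom") else 1
        side = 0 if name in ("top", "left") else 1
        pad_width[axis][side] = margin
    return np.pad(image_rgb, pad_width, mode="constant", constant_values=value)
```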
In step S203, multiple sub-images of the image to be processed are acquired, and a pre-trained adaptive boosting face classifier is used to determine whether each sub-image is a face sub-image.
This step is a specific implementation of performing face recognition on the image to be processed. During face recognition, this embodiment of the present disclosure first acquires multiple sub-images of the image to be processed and then determines whether each sub-image is a face sub-image.
The multiple sub-images of the image to be processed can be acquired in, among others, the following two ways:
First way: the image to be processed is traversed multiple times using a sliding window, and the image area covered by the sliding window at each position during each traversal is taken as one sub-image of the image to be processed, where any two traversals of the image to be processed use sliding windows of different sizes.
That is, this embodiment of the present disclosure traverses the image to be processed separately with sliding windows of different sizes. Many window sizes can be used across traversals; for example, one traversal may use a 3*3 sliding window and the next a 5*5 window, and so on. FIG. 5 shows a schematic diagram of traversing an image to be processed with sliding windows of different sizes; each thick solid square in FIG. 5 is a sliding window.
When traversing the image to be processed with a sliding window of any size, the window moves across the image in the horizontal direction (X direction) and the vertical direction (Y direction) by a specified step. Each step in the X or Y direction moves the window to a new position on the image to be processed, each position delimits an image range, and the image range delimited at each position is one sub-image of the image to be processed. The specified step may be one pixel, two pixels, and so on. A sketch of this traversal follows.
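A minimal sketch of the first way, assuming a NumPy image array; the window sizes and the one-pixel step are illustrative choices.

```python
import numpy as np

def sliding_window_subimages(to_process, window_sizes=(24, 32, 48), step=1):
    """Yield the image area covered by square sliding windows of several
    sizes, moving by `step` in the X and Y directions; each traversal
    uses a different window size."""
    height, width = to_process.shape[:2]
    for size in window_sizes:
        for y in range(0, height - size + 1, step):
            for x in range(0, width - size + 1, step):
                yield to_process[y:y + size, x:x + size]
```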
Second way: the image to be processed is scaled multiple times to obtain images to be processed of multiple sizes; for the image to be processed of any size, that image is cropped into multiple image regions of a specified size, and each image region is taken as one sub-image.
Each scaling of the image to be processed yields an image to be processed of one size. The image to be processed of a given size may be cropped into multiple image regions, each of the specified size, for example 3*3 (pixels*pixels) or 5*5.
FIG. 6 shows a schematic diagram of images to be processed of multiple sizes; diagrams (a) to (c) of FIG. 6 each show an image to be processed of one size. Diagram (c) of FIG. 6 illustrates cropping the image to be processed of that size: each rectangle enclosed by a thick solid line is one sub-image of the image to be processed of that size. A sketch of this pyramid-style cropping follows.
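A minimal sketch of the second way, assuming OpenCV for the scaling; the scale factors and the 24-pixel crop size are assumptions.

```python
import cv2

def pyramid_subimages(to_process, scales=(1.0, 0.75, 0.5), crop=24):
    """Scale the image to be processed several times, then crop the image
    of each size into regions of a specified size, each one a sub-image."""
    height, width = to_process.shape[:2]
    for scale in scales:
        scaled = cv2.resize(to_process, (int(width * scale), int(height * scale)))
        rows, cols = scaled.shape[:2]
        for y in range(0, rows - crop + 1, crop):
            for x in range(0, cols - crop + 1, crop):
                yield scaled[y:y + crop, x:x + crop]
```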
Further, whether each sub-image is a face sub-image can be determined using a pre-trained face classifier. Many types of pre-trained face classifier can be used; for example, the pre-trained face classifier may be a support vector machine face classifier, a neural network face classifier, or an adaptive boosting (Adaboost) face classifier. For ease of description, the subsequent face recognition in this embodiment of the present disclosure takes the Adaboost face classifier as the example of the pre-trained face classifier.
In a possible implementation, to increase the precision of face recognition, the pre-trained Adaboost face classifier in the embodiments of the present disclosure is formed by cascading multiple levels of classifiers. Each level of the Adaboost face classifier is used to judge whether a sub-image is a face sub-image. The output result of any level of classifier is either "1" or "0": an output of "1" indicates that the level determines the sub-image to be a face sub-image, and an output of "0" indicates that it determines the sub-image not to be one.
Each level of the Adaboost face classifier is a strong classifier, and each strong classifier comprises multiple weak classifiers. Each level of the Adaboost face classifier is trained by training the weak classifiers it comprises, and the output result of a level is decided according to the data processing of all of its weak classifiers. For the way the Adaboost face classifier is trained and the way each level's output result is determined, reference may be made to existing Adaboost face classifiers; these details are not explained in the embodiments of the present disclosure.
The number of classifier levels in the pre-trained Adaboost face classifier is not limited in the embodiments of the present disclosure. To make the recognition result more accurate, the Adaboost face classifier may comprise a relatively large number of levels, for example 5 levels or 8 levels. FIG. 7 shows a schematic diagram of an Adaboost face classifier, in which each circular area represents one level of classifier.
On this basis, when the pre-trained adaptive boosting face classifier is used to determine whether each sub-image is a face sub-image, for any sub-image, whether the sub-image is a face sub-image is judged level by level, starting from the first-level classifier of the pre-trained Adaboost face classifier, until the last-level classifier of the Adaboost face classifier. When the output results of all levels of classifiers identify the sub-image as a face sub-image, the sub-image is determined to be a face sub-image; when the output result of any level of classifier identifies the sub-image as a non-face sub-image, the sub-image is determined to be a non-face sub-image.
Specifically, each sub-image is input to the Adaboost face classifier starting from the first-level classifier. When the first-level classifier determines that the sub-image is a face sub-image, the sub-image is input to the second-level classifier, which in turn judges whether it is a face sub-image, and so on, until the last-level classifier; when the first-level classifier determines that the sub-image is not a face sub-image, the first-level classifier fetches the next sub-image and recognition proceeds with that sub-image. The cascade logic is sketched below.
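In the sketch below, each level stands in for a trained strong classifier and is assumed to be a callable returning 1 (face) or 0 (not a face); the stage functions are hypothetical placeholders rather than the disclosure's trained classifiers.

```python
def is_face_subimage(sub_image, cascade):
    """Judge a sub-image level by level through a cascaded Adaboost face
    classifier; every level must output 1 for a face verdict."""
    for level in cascade:          # start from the first-level classifier
        if level(sub_image) == 0:  # any rejection ends the judgment early
            return False
    return True                    # all levels identified a face sub-image

def recognize_faces(sub_images, cascade):
    """Collect the sub-images that the cascade identifies as face sub-images."""
    return [s for s in sub_images if is_face_subimage(s, cascade)]
```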
It should be noted that, based on the working principle of a pre-trained face classifier, a face located in an edge region of the original image usually cannot be recognized; however, an occluded face can be recognized correctly. In the embodiments of the present disclosure, adding a specified number of pixels to the edge regions of the original image makes a face located in an edge region equivalent to a face occluded by the added pixels. The pre-trained face classifier can therefore recognize the face located in the edge region, which improves the accuracy of recognizing faces in edge regions.
In step S204, the face in the original image is determined according to the result of the face recognition.
After all sub-images have been checked for being face sub-images, the sub-images belonging to a face can be determined according to the result of the face recognition. However, to determine which region of the original image the face occupies, the face in the original image must further be determined from the sub-images belonging to the face.
For example, determining the face in the original image according to the face recognition result includes, but is not limited to: when a sub-image that is a face sub-image exists among the multiple sub-images of the image to be processed, determining the position of the face sub-image in the original image, and determining the face in the original image according to the position of the face sub-image in the original image.
In a possible implementation, for any sub-image whose recognition result is a face sub-image, its position in the original image can be obtained by extracting the pixel values of the pixels of the sub-image and the pixel values of the pixels of the original image, and comparing the former with the latter. When the pixel values of the pixels of some region of the original image are all identical to those of the sub-image, the position of the sub-image in the original image is located. Once the position of the sub-image in the original image is located, the face in the original image can be determined. A sketch of this position matching is given after this paragraph.
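A minimal sketch of locating a face sub-image in the original image by exhaustive pixel-value comparison, as the disclosure describes; a production system would usually just track window coordinates instead, so this is illustrative only.

```python
import numpy as np

def locate_subimage(original, sub_image):
    """Return (top, left) of the region of the original image whose pixel
    values all equal those of the face sub-image, or None if no region
    matches (e.g., the sub-image lies partly in the added border)."""
    oh, ow = original.shape[:2]
    sh, sw = sub_image.shape[:2]
    for y in range(oh - sh + 1):
        for x in range(ow - sw + 1):
            if np.array_equal(original[y:y + sh, x:x + sw], sub_image):
                return (y, x)
    return None
```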
Further, when no sub-image that is a face sub-image exists among the multiple sub-images of the image to be processed, it is determined that the original image does not include a face.
In the method provided by this embodiment of the present disclosure, a specified number of pixels are added to the edge region of the original image to obtain an image to be processed, and face recognition is then performed on the image to be processed to determine the face in the original image. Because a certain number of pixels are added to the edge region of the original image during face recognition, the original image is in effect expanded, which ensures that when a face lies in an edge region of the original image, a sub-image containing the face can still be found in the expanded image. A face located in an edge region of the original image can therefore be recognized, which improves the accuracy of face recognition.
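Tying the sketches above together, a hypothetical end-to-end pass over one image could look as follows; every helper name is one of the illustrative functions defined earlier, and `cascade` stands for a pre-trained sequence of classifier levels that is assumed to exist.

```python
import cv2

original = cv2.cvtColor(cv2.imread("photo.jpg"), cv2.COLOR_BGR2RGB)
to_process = pad_skin_edges(original)          # pad only skin-bearing edges

face_positions = []
for sub in sliding_window_subimages(to_process):
    if is_face_subimage(sub, cascade):         # cascaded level-by-level judgment
        position = locate_subimage(original, sub)
        if position is not None:               # None: window fell in the added border
            face_positions.append(position)
```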
FIG. 8 is a block diagram of a face recognition device according to an exemplary embodiment. Referring to FIG. 8, the face recognition device includes an acquiring module 801, an adding module 802, an identification module 803, and a determining module 804, wherein:
the acquiring module 801 is configured to acquire an original image;
the adding module 802 is configured to add a specified number of pixels to an edge region of the original image acquired by the acquiring module 801 to obtain an image to be processed;
the identification module 803 is configured to perform face recognition on the image to be processed obtained by the adding module 802; and
the determining module 804 is configured to determine a face in the original image according to the recognition result of the identification module 803.
In the device provided by this embodiment of the present disclosure, a specified number of pixels are added to the edge region of the original image to obtain an image to be processed, and face recognition is then performed on the image to be processed to determine the face in the original image. Because a certain number of pixels are added to the edge region of the original image during face recognition, the original image is in effect expanded, which ensures that when a face lies in an edge region of the original image, a sub-image containing the face can still be found in the expanded image. A face located in an edge region of the original image can therefore be recognized, which improves the accuracy of face recognition.
In another embodiment, referring to FIG. 9, the adding module 802 includes a first acquiring unit 8021, a first determining unit 8022, and an adding unit 8023, wherein:
the first acquiring unit 8021 is configured to acquire pixel values of pixels in the edge regions of the original image;
the first determining unit 8022 is configured to determine, according to the pixel values acquired by the first acquiring unit and a preset face skin color pixel value, the edge regions where face skin color pixels are present; and
the adding unit 8023 is configured to add the specified number of pixels to the edge regions, determined by the first determining unit 8022, where face skin color pixels are present.
In another embodiment, referring to FIG. 10, the identification module 803 includes:
a second acquiring unit 8031, configured to acquire multiple sub-images of the image to be processed; and
a judging unit 8032, configured to determine, using a pre-trained adaptive boosting face classifier, whether each sub-image acquired by the second acquiring unit is a face sub-image.
In another embodiment, the second acquiring unit 8031 is configured to traverse the image to be processed multiple times using a sliding window, taking the image area covered by the sliding window at each position during each traversal as one sub-image of the image to be processed, wherein any two traversals of the image to be processed use sliding windows of different sizes; or
to scale the image to be processed multiple times to obtain images to be processed of multiple sizes and, for the image to be processed of any size, crop it into multiple image regions of a specified size, taking each image region as one sub-image.
In another embodiment, the pre-trained adaptive boosting face classifier is formed by cascading multiple levels of classifiers, and the judging unit 8032 is configured to: for any sub-image, judge level by level, starting from the first-level classifier of the pre-trained adaptive boosting face classifier, whether the sub-image is a face sub-image, until the last-level classifier of the adaptive boosting face classifier; and, when the output results of all levels of classifiers identify the sub-image as a face sub-image, determine that the sub-image is a face sub-image.
In another embodiment, referring to FIG. 11, the determining module 804 includes a second determining unit 8041 and a third determining unit 8042, wherein:
the second determining unit 8041 is configured to determine, when a sub-image that is a face sub-image exists among the multiple sub-images of the image to be processed, the position of the face sub-image in the original image; and
the third determining unit 8042 is configured to determine the face in the original image according to the position of the face sub-image in the original image.
With respect to the devices in the above embodiments, the specific manners in which the modules perform operations have been described in detail in the embodiments concerning the method, and are not elaborated here. All of the above optional technical solutions may be combined in any manner to form optional embodiments of the present invention, which are not described here one by one.
FIG. 12 is a block diagram of a terminal 1200 according to an exemplary embodiment; the terminal may be used to perform the face recognition method provided by the embodiment corresponding to FIG. 1 or FIG. 2. For example, the terminal 1200 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, and the like.
Referring to FIG. 12, the terminal 1200 may include one or more of the following components: a processing component 1202, a memory 1204, a power component 1206, a multimedia component 1208, an audio component 1210, an I/O (Input/Output) interface 1212, a sensor component 1214, and a communication component 1216.
The processing component 1202 typically controls the overall operations of the terminal 1200, such as operations associated with display, telephone calls, data communication, camera operation, and recording. The processing component 1202 may include one or more processors 1220 to execute instructions to complete all or part of the steps of the above method. In addition, the processing component 1202 may include one or more modules to facilitate interaction between the processing component 1202 and other components; for example, it may include a multimedia module to facilitate interaction between the multimedia component 1208 and the processing component 1202.
The memory 1204 is configured to store various types of data to support operation at the terminal 1200. Examples of such data include instructions for any application or method operated on the terminal 1200, contact data, phone book data, messages, pictures, videos, and the like. The memory 1204 may be implemented by any type of volatile or non-volatile storage device, or a combination thereof, such as SRAM (Static Random Access Memory), EEPROM (Electrically-Erasable Programmable Read-Only Memory), EPROM (Erasable Programmable Read-Only Memory), PROM (Programmable Read-Only Memory), ROM (Read-Only Memory), magnetic memory, flash memory, magnetic disk, or optical disc.
The power component 1206 provides power to the various components of the terminal 1200. The power component 1206 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for the terminal 1200.
The multimedia component 1208 includes a screen that provides an output interface between the terminal 1200 and the user. In some embodiments, the screen may include an LCD (Liquid Crystal Display) and a TP (Touch Panel). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, slides, and gestures on the touch panel. The touch sensors may sense not only the boundary of a touch or slide action but also the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 1208 includes a front camera and/or a rear camera. When the terminal 1200 is in an operation mode such as a shooting mode or a video mode, the front camera and/or rear camera may receive external multimedia data. Each front or rear camera may be a fixed optical lens system or have focal length and optical zoom capability.
The audio component 1210 is configured to output and/or input audio signals. For example, the audio component 1210 includes a MIC (Microphone), which is configured to receive external audio signals when the terminal 1200 is in an operation mode such as a call mode, a recording mode, or a speech recognition mode. The received audio signals may be further stored in the memory 1204 or sent via the communication component 1216. In some embodiments, the audio component 1210 further includes a speaker for outputting audio signals.
The I/O interface 1212 provides an interface between the processing component 1202 and peripheral interface modules, which may be a keyboard, a click wheel, buttons, and the like. These buttons may include, but are not limited to, a home button, volume buttons, a start button, and a lock button.
The sensor component 1214 includes one or more sensors for providing status assessments of various aspects of the terminal 1200. For example, the sensor component 1214 may detect the on/off state of the terminal 1200 and the relative positioning of components (for example, the display and keypad of the terminal 1200), and may also detect a change in position of the terminal 1200 or one of its components, the presence or absence of user contact with the terminal 1200, the orientation or acceleration/deceleration of the terminal 1200, and temperature changes of the terminal 1200. The sensor component 1214 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor component 1214 may also include a light sensor, such as a CMOS (Complementary Metal Oxide Semiconductor) or CCD (Charge-Coupled Device) image sensor, for use in imaging applications. In some embodiments, the sensor component 1214 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 1216 is configured to facilitate wired or wireless communication between the terminal 1200 and other devices. The terminal 1200 may access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof. In an exemplary embodiment, the communication component 1216 receives broadcast signals or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 1216 further includes an NFC (Near Field Communication) module to facilitate short-range communication. For example, the NFC module may be implemented based on RFID (Radio Frequency Identification) technology, IrDA (Infrared Data Association) technology, UWB (Ultra Wideband) technology, BT (Bluetooth) technology, and other technologies.
In an exemplary embodiment, the terminal 1200 may be implemented by one or more ASICs (Application Specific Integrated Circuits), DSPs (Digital Signal Processors), DSPDs (Digital Signal Processing Devices), PLDs (Programmable Logic Devices), FPGAs (Field Programmable Gate Arrays), controllers, microcontrollers, microprocessors, or other electronic components, for performing the face recognition method provided by the embodiment corresponding to FIG. 1 or FIG. 2.
In an exemplary embodiment, a non-transitory computer-readable storage medium including instructions is also provided, for example the memory 1204 including instructions, which are executable by the processor 1220 of the terminal 1200 to complete the above face recognition method. For example, the non-transitory computer-readable storage medium may be a ROM, a RAM (Random Access Memory), a CD-ROM (Compact Disc Read-Only Memory), a magnetic tape, a floppy disk, an optical data storage device, and the like.
Other embodiments of the present invention will readily occur to those skilled in the art upon consideration of the specification and practice of the invention disclosed here. This application is intended to cover any variations, uses, or adaptations of the present invention that follow its general principles and include common knowledge or customary technical means in the art not disclosed by the present disclosure. The specification and embodiments are to be regarded as exemplary only, with the true scope and spirit of the present invention being indicated by the following claims.
It should be understood that the present invention is not limited to the precise structures described above and shown in the accompanying drawings, and that various modifications and changes may be made without departing from its scope. The scope of the present invention is limited only by the appended claims.

Claims (13)

  1. A face recognition method, characterized in that the method comprises:
    acquiring an original image;
    adding a specified number of pixels to an edge region of the original image to obtain an image to be processed;
    performing face recognition on the image to be processed; and
    determining a face in the original image according to a result of the face recognition.
  2. The method according to claim 1, characterized in that the adding a specified number of pixels to an edge region of the original image comprises:
    acquiring pixel values of pixels in the edge regions of the original image;
    determining, according to the pixel values of the pixels in the edge regions and a preset face skin color pixel value, the edge regions where face skin color pixels are present; and
    adding the specified number of pixels to the edge regions where face skin color pixels are present.
  3. The method according to claim 1 or 2, characterized in that the performing face recognition on the image to be processed comprises:
    acquiring multiple sub-images of the image to be processed; and
    determining, using a pre-trained adaptive boosting face classifier, whether each sub-image is a face sub-image.
  4. The method according to claim 3, characterized in that the acquiring multiple sub-images of the image to be processed comprises:
    traversing the image to be processed multiple times using a sliding window, and taking the image area covered by the sliding window at each position during each traversal as one sub-image of the image to be processed, wherein any two traversals of the image to be processed use sliding windows of different sizes; or
    scaling the image to be processed multiple times to obtain images to be processed of multiple sizes; and, for the image to be processed of any size, cropping the image to be processed of that size into multiple image regions of a specified size and taking each image region as one sub-image.
  5. The method according to claim 3, characterized in that the pre-trained adaptive boosting face classifier is formed by cascading multiple levels of classifiers, and the determining, using a pre-trained adaptive boosting face classifier, whether each sub-image is a face sub-image comprises:
    for any sub-image, judging level by level, starting from a first-level classifier of the pre-trained adaptive boosting face classifier, whether the sub-image is a face sub-image, until a last-level classifier of the adaptive boosting face classifier; and
    when output results of all levels of classifiers identify the sub-image as a face sub-image, determining that the sub-image is a face sub-image.
  6. The method according to claim 3, characterized in that the determining a face in the original image according to a result of the face recognition comprises:
    when a sub-image that is a face sub-image exists among the multiple sub-images of the image to be processed, determining a position of the face sub-image in the original image; and
    determining the face in the original image according to the position of the face sub-image in the original image.
  7. A face recognition device, characterized in that the device comprises:
    an acquiring module, configured to acquire an original image;
    an adding module, configured to add a specified number of pixels to an edge region of the original image acquired by the acquiring module to obtain an image to be processed;
    an identification module, configured to perform face recognition on the image to be processed obtained by the adding module; and
    a determining module, configured to determine a face in the original image according to a result of the face recognition by the identification module.
  8. The device according to claim 7, characterized in that the adding module comprises:
    a first acquiring unit, configured to acquire pixel values of pixels in the edge regions of the original image;
    a first determining unit, configured to determine, according to the pixel values of the pixels in the edge regions acquired by the first acquiring unit and a preset face skin color pixel value, the edge regions where face skin color pixels are present; and
    an adding unit, configured to add the specified number of pixels to the edge regions, determined by the first determining unit, where face skin color pixels are present.
  9. The device according to claim 7 or 8, characterized in that the identification module comprises:
    a second acquiring unit, configured to acquire multiple sub-images of the image to be processed; and
    a judging unit, configured to determine, using a pre-trained adaptive boosting face classifier, whether each sub-image acquired by the second acquiring unit is a face sub-image.
  10. The device according to claim 9, characterized in that the second acquiring unit is configured to traverse the image to be processed multiple times using a sliding window, taking the image area covered by the sliding window at each position during each traversal as one sub-image of the image to be processed, wherein any two traversals of the image to be processed use sliding windows of different sizes; or
    to scale the image to be processed multiple times to obtain images to be processed of multiple sizes and, for the image to be processed of any size, crop the image to be processed of that size into multiple image regions of a specified size, taking each image region as one sub-image.
  11. 根据权利要求9所述的装置,其特征在于,所述预先训练的自适应增强人脸分类器由多级分类器级联而成,所述判断单元,用于对于任一子图像,从所述预先训练的自适应增强人脸分类器的第一级分类器开始,逐级判断所述子图像是否为人脸子图像,直至所述自适应增强人脸分类器的最后一级分类器;当所有级分类器的输出结果均标识所述子图像为人脸子图像时,确定所述子图像为人脸子图像。
  12. 根据权利要求9所述的装置,其特征在于,所述确定模块包括:
    第二确定单元,用于在所述待处理图像的多个子图像中存在为人脸子图像的子图像时,确定所述为人脸子图像的子图像在所述原始图像中的位置;
    第三确定单元,用于根据所述为人脸子图像的子图像在所述原始图像中的位置,确定所述原始图像中的人脸。
  13. 一种终端,其特征在于,所述终端包括:
    处理器;
    用于存储处理器可执行指令的存储器;
    其中,所述处理器被配置为:
    获取原始图像;
    在所述原始图像的边缘区域增加指定数值的像素,得到待处理图像;
    对所述待处理图像进行人脸识别;
    根据所述人脸识别的结果确定所述原始图像中的人脸。
PCT/CN2015/099696 2015-08-21 2015-12-30 人脸识别方法、装置及终端 WO2017031901A1 (zh)

Priority Applications (4)

Application Number Priority Date Filing Date Title
RU2017102521A RU2664688C2 (ru) 2015-08-21 2015-12-30 Способ распознавания человеческих лиц, устройство и терминал
KR1020167015669A KR20170033805A (ko) 2015-08-21 2015-12-30 사람 얼굴 인식 방법, 장치 및 단말
MX2017008481A MX2017008481A (es) 2015-08-21 2015-12-30 Metodo, aparato y terminal para reconocimiento de la cara humana.
JP2016567408A JP6374986B2 (ja) 2015-08-21 2015-12-30 顔認識方法、装置及び端末

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201510520457.XA CN105095881B (zh) 2015-08-21 2015-08-21 人脸识别方法、装置及终端
CN201510520457.X 2015-08-21

Publications (1)

Publication Number Publication Date
WO2017031901A1 true WO2017031901A1 (zh) 2017-03-02

Family

ID=54576269

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2015/099696 WO2017031901A1 (zh) 2015-08-21 2015-12-30 人脸识别方法、装置及终端

Country Status (8)

Country Link
US (1) US10007841B2 (zh)
EP (1) EP3133527A1 (zh)
JP (1) JP6374986B2 (zh)
KR (1) KR20170033805A (zh)
CN (1) CN105095881B (zh)
MX (1) MX2017008481A (zh)
RU (1) RU2664688C2 (zh)
WO (1) WO2017031901A1 (zh)

Families Citing this family (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9286509B1 (en) * 2012-10-19 2016-03-15 Google Inc. Image optimization during facial recognition
CN105095881B (zh) * 2015-08-21 2023-04-07 小米科技有限责任公司 人脸识别方法、装置及终端
EP3136289A1 (en) * 2015-08-28 2017-03-01 Thomson Licensing Method and device for classifying an object of an image and corresponding computer program product and computer-readable medium
CN106485567B (zh) * 2016-09-14 2021-11-30 北京小米移动软件有限公司 物品推荐方法及装置
CN106372616B (zh) * 2016-09-18 2019-08-30 Oppo广东移动通信有限公司 人脸识别方法、装置及终端设备
CN106446884A (zh) * 2016-09-19 2017-02-22 广东小天才科技有限公司 一种图像的快速截取的方法和装置
US10474882B2 (en) * 2017-03-15 2019-11-12 Nec Corporation Video surveillance system based on larger pose face frontalization
CN108280420A (zh) * 2018-01-19 2018-07-13 百度在线网络技术(北京)有限公司 用于处理图像的系统、方法和装置
CN109492550B (zh) * 2018-10-25 2023-06-06 腾讯科技(深圳)有限公司 活体检测方法、装置及应用活体检测方法的相关系统
US10650564B1 (en) * 2019-04-21 2020-05-12 XRSpace CO., LTD. Method of generating 3D facial model for an avatar and related device
CN110401835B (zh) * 2019-06-05 2021-07-02 西安万像电子科技有限公司 图像处理方法及装置
CN110248107A (zh) * 2019-06-13 2019-09-17 Oppo广东移动通信有限公司 图像处理方法和装置
CN110533002B (zh) * 2019-09-06 2022-04-12 厦门久凌创新科技有限公司 基于人脸识别的大数据处理方法
JP2022522551A (ja) * 2020-02-03 2022-04-20 ベイジン センスタイム テクノロジー ディベロップメント カンパニー リミテッド 画像処理方法及び装置、電子機器並びに記憶媒体
CN112132030A (zh) * 2020-09-23 2020-12-25 湖南快乐阳光互动娱乐传媒有限公司 视频处理方法及装置、存储介质及电子设备
CN112966136B (zh) * 2021-05-18 2021-09-07 武汉中科通达高新技术股份有限公司 一种人脸分类方法及装置

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070274573A1 (en) * 2006-05-26 2007-11-29 Canon Kabushiki Kaisha Image processing method and image processing apparatus
CN101488181A (zh) * 2008-01-15 2009-07-22 华晶科技股份有限公司 多方向的人脸检测方法
CN102096802A (zh) * 2009-12-11 2011-06-15 华为技术有限公司 人脸检测方法及装置
CN102270308A (zh) * 2011-07-21 2011-12-07 武汉大学 一种基于五官相关aam模型的面部特征定位方法
CN105095881A (zh) * 2015-08-21 2015-11-25 小米科技有限责任公司 人脸识别方法、装置及终端

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1628465A1 (en) 2004-08-16 2006-02-22 Canon Kabushiki Kaisha Image capture apparatus and control method therefor
JP4551839B2 (ja) * 2004-08-16 2010-09-29 キヤノン株式会社 撮像装置及び撮像装置の制御方法
RU2295152C1 (ru) * 2005-09-15 2007-03-10 Роман Павлович Худеев Способ распознавания лица человека по видеоизображению
JP2008181439A (ja) * 2007-01-26 2008-08-07 Sanyo Electric Co Ltd 顔検出装置及び方法並びに撮像装置
KR100973588B1 (ko) 2008-02-04 2010-08-02 한국과학기술원 얼굴검출기의 부윈도우 설정방법
KR101105435B1 (ko) 2009-04-14 2012-01-17 경북대학교 산학협력단 얼굴 검출과 얼굴 인지 방법
CN103069431B (zh) * 2010-07-02 2016-11-16 英特尔公司 面部检测方法和设备
AU2013205535B2 (en) * 2012-05-02 2018-03-15 Samsung Electronics Co., Ltd. Apparatus and method of controlling mobile terminal based on analysis of user's face

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070274573A1 (en) * 2006-05-26 2007-11-29 Canon Kabushiki Kaisha Image processing method and image processing apparatus
CN101488181A (zh) * 2008-01-15 2009-07-22 华晶科技股份有限公司 多方向的人脸检测方法
CN102096802A (zh) * 2009-12-11 2011-06-15 华为技术有限公司 人脸检测方法及装置
CN102270308A (zh) * 2011-07-21 2011-12-07 武汉大学 一种基于五官相关aam模型的面部特征定位方法
CN105095881A (zh) * 2015-08-21 2015-11-25 小米科技有限责任公司 人脸识别方法、装置及终端

Also Published As

Publication number Publication date
CN105095881A (zh) 2015-11-25
CN105095881B (zh) 2023-04-07
EP3133527A1 (en) 2017-02-22
MX2017008481A (es) 2017-10-31
KR20170033805A (ko) 2017-03-27
RU2017102521A3 (zh) 2018-07-26
RU2017102521A (ru) 2018-07-26
JP6374986B2 (ja) 2018-08-15
JP2017534090A (ja) 2017-11-16
US20170053156A1 (en) 2017-02-23
US10007841B2 (en) 2018-06-26
RU2664688C2 (ru) 2018-08-21

Similar Documents

Publication Publication Date Title
WO2017031901A1 (zh) 人脸识别方法、装置及终端
US9674395B2 (en) Methods and apparatuses for generating photograph
WO2021031609A1 (zh) 活体检测方法及装置、电子设备和存储介质
RU2577188C1 (ru) Способ, аппарат и устройство для сегментации изображения
US9959484B2 (en) Method and apparatus for generating image filter
CN105631797B (zh) 水印添加方法及装置
WO2017088470A1 (zh) 图像分类方法及装置
WO2016011747A1 (zh) 肤色调整方法和装置
CN107944447B (zh) 图像分类方法及装置
CN104918107B (zh) 视频文件的标识处理方法及装置
WO2017128767A1 (zh) 指纹模板录入方法及装置
RU2664003C2 (ru) Способ и устройство для определения ассоциированного пользователя
WO2020181728A1 (zh) 图像处理方法及装置、电子设备和存储介质
CN107944367B (zh) 人脸关键点检测方法及装置
EP2998960A1 (en) Method and device for video browsing
CN107730448B (zh) 基于图像处理的美颜方法及装置
CN109034150B (zh) 图像处理方法及装置
WO2017143776A1 (zh) 图片类型的识别方法及装置
WO2022077970A1 (zh) 特效添加方法及装置
CN113409342A (zh) 图像风格迁移模型的训练方法、装置及电子设备
CN110111281A (zh) 图像处理方法及装置、电子设备和存储介质
KR20190111034A (ko) 특징 이미지 획득 방법 및 디바이스, 및 사용자 인증 방법
WO2021057359A1 (zh) 图像处理方法、电子设备及可读存储介质
WO2020233201A1 (zh) 图标位置确定方法和装置
CN107507128B (zh) 图像处理方法及设备

Legal Events

Date Code Title Description
ENP Entry into the national phase

Ref document number: 20167015669

Country of ref document: KR

Kind code of ref document: A

ENP Entry into the national phase

Ref document number: 2016567408

Country of ref document: JP

Kind code of ref document: A

ENP Entry into the national phase

Ref document number: 2017102521

Country of ref document: RU

Kind code of ref document: A

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15902169

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: MX/A/2017/008481

Country of ref document: MX

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 15902169

Country of ref document: EP

Kind code of ref document: A1