WO2017088266A1 - Picture processing method and device - Google Patents

Picture processing method and device

Info

Publication number
WO2017088266A1
Authority
WO
WIPO (PCT)
Prior art keywords: face, target, picture, determining, preset
Prior art date
Application number
PCT/CN2015/099701
Other languages
English (en)
French (fr)
Inventor
陈志军
汪平仄
王百超
Original Assignee
小米科技有限责任公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 小米科技有限责任公司
Priority to JP2016566784A (patent JP2018506755A)
Priority to MX2017012839A (patent MX2017012839A)
Priority to RU2017102520A (patent RU2665217C2)
Publication of WO2017088266A1

Classifications

    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/23 Clustering techniques
    • G06T 1/00 General purpose image data processing
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/11 Region-based segmentation
    • G06V 20/30 Scenes; Scene-specific elements in albums, collections or shared content, e.g. social network photos or video
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/161 Detection; Localisation; Normalisation
    • G06V 40/168 Feature extraction; Face representation
    • G06V 40/172 Classification, e.g. identification
    • G06V 40/173 Face re-identification, e.g. recognising unknown faces across different face tracks

Definitions

  • the present disclosure relates to the field of image processing technologies, and in particular, to a picture processing method and apparatus.
  • the photo album program is one of the most commonly used programs on mobile terminals such as smartphones and tablets.
  • the photo album program is used to manage and display pictures in the mobile terminal.
  • the album program in the terminal can cluster the faces in the pictures and group the same or similar faces into one album, thereby forming a face album.
  • An embodiment of the present disclosure provides a picture processing method and apparatus, including the following technical solutions:
  • a picture processing method including:
  • the facial feature information includes at least one of the following:
  • the position of the face in the picture, the tilt angle of the face in the picture, the depth information of the face in the picture, the proportion of the area occupied by the face in the picture, and the number of times the face appears in all current pictures.
  • when the face feature information includes the position of the face in the picture,
  • Determining each face as a target face or a non-target face according to the face feature information includes:
  • a face in the target shooting area is determined as the target face, and a face outside the target shooting area is determined as a non-target face.
  • when the face feature information includes the position of the face in the picture or the depth information of the face in the picture, and there are at least two faces,
  • Determining each face as a target face or a non-target face according to the face feature information includes:
  • when the distance is less than the preset distance or the difference is less than the preset difference, the other face is determined as a target face;
  • when the distance is greater than or equal to the preset distance or the difference is greater than or equal to the preset difference, the other face is determined as a non-target face.
  • when the face feature information includes the tilt angle of the face in the picture,
  • Determining each face as a target face or a non-target face according to the face feature information includes:
  • the face whose tilt angle is less than the preset angle is determined as the target face, and the face whose tilt angle is greater than or equal to the preset angle is determined as a non-target face.
  • when the face feature information includes the proportion of the area occupied by the face in the picture,
  • Determining each face as a target face or a non-target face according to the face feature information includes:
  • the face whose ratio is greater than the preset ratio is determined as the target face, and the face whose ratio is less than or equal to the preset ratio is determined as a non-target face.
  • when the face feature information includes the number of times the face appears in all current pictures,
  • Determining each face as a target face or a non-target face according to the face feature information includes:
  • the face whose number of occurrences is greater than the preset number is determined as the target face, and the face whose number of occurrences is less than or equal to the preset number is determined as a non-target face.
  • the method further includes:
  • a picture processing apparatus including:
  • a detecting module configured to detect a picture and detect at least two faces included in the picture
  • An acquiring module configured to acquire facial feature information of each face detected by the detecting module in the picture
  • a determining module configured to determine each face as a target face or a non-target face according to the face feature information acquired by the acquiring module
  • a removing module configured to perform preset removal processing on the non-target face determined by the determining module.
  • the facial feature information includes at least one of the following:
  • the position of the face in the picture, the tilt angle of the face in the picture, the depth information of the face in the picture, the proportion of the area occupied by the face in the picture, and the number of times the face appears in all current pictures.
  • the determining module comprises:
  • a first area determining sub-module configured to determine a target shooting area according to a location of each face in the picture and a distribution of the face
  • a first determining submodule configured to determine a face in the target shooting area determined by the first area determining submodule as the target face, and determine a face outside the target shooting area as a non-target human face.
  • the determination module includes:
  • a second area determining sub-module configured to determine a target shooting area according to a location of each face in the picture and a distribution of the face
  • a calculation submodule configured to determine a face in the target shooting area as the target face, and to calculate the distance between each other face in the picture and the target face, or the difference between the depth information of each other face in the picture and the depth information of the target face;
  • a second determining submodule configured to determine the other face as a target face when the distance is less than the preset distance or the difference is less than the preset difference;
  • a third determining submodule configured to determine the other face as a non-target face when the distance is greater than or equal to the preset distance or the difference is greater than or equal to the preset difference.
  • when the facial feature information includes the tilt angle of the face in the picture, the determining module includes:
  • a fourth determining submodule configured to determine the face whose tilt angle is less than the preset angle as the target face
  • a fifth determining submodule configured to determine the face whose tilt angle is greater than or equal to the preset angle as a non-target face.
  • when the facial feature information includes the proportion of the area occupied by the face in the picture, the determining module includes:
  • a sixth determining submodule configured to determine the face whose ratio is greater than a preset ratio as the target face
  • the seventh determining submodule is configured to determine the face whose ratio is less than or equal to the preset ratio as a non-target face.
  • when the facial feature information includes the number of times the face appears in all current pictures, the determining module includes:
  • An eighth determining submodule configured to determine the face whose number of times is greater than the preset number of times as the target face
  • the ninth determining submodule is configured to determine the face whose number of times is less than or equal to the preset number of times as a non-target face.
  • the apparatus further includes:
  • the clustering processing module is configured to perform face clustering on the target face to obtain a face album corresponding to the target face, wherein each face album corresponds to a person's face.
  • the plurality of faces in the picture are identified, each face is determined as a target face or a non-target face, and preset removal processing is performed on the non-target faces, so that when the target faces are clustered to obtain face albums, non-target faces do not appear in the albums; this prevents people unrelated to the user from being placed in the face albums and improves the user experience.
  • FIG. 1 is a flowchart of a picture processing method according to an exemplary embodiment.
  • FIG. 2 is a flowchart of step S103 in a picture processing method according to an exemplary embodiment.
  • FIG. 3 is a flowchart of step S103 in another picture processing method according to an exemplary embodiment.
  • FIG. 4 is a flowchart of step S103 in still another picture processing method according to an exemplary embodiment.
  • FIG. 5 is a flowchart of step S103 in still another image processing method according to an exemplary embodiment.
  • FIG. 6 is a flowchart of step S103 in still another image processing method according to an exemplary embodiment.
  • FIG. 7 is a flowchart of another image processing method according to an exemplary embodiment.
  • FIG. 8 is a block diagram of a picture processing apparatus according to an exemplary embodiment.
  • FIG. 9 is a block diagram of a determination module in a picture processing apparatus according to an exemplary embodiment.
  • FIG. 10 is a block diagram of a determination module in another picture processing apparatus according to an exemplary embodiment.
  • FIG. 11 is a block diagram of a determination module in still another picture processing apparatus according to an exemplary embodiment.
  • FIG. 12 is a block diagram of a determination module in still another picture processing apparatus according to an exemplary embodiment.
  • FIG. 13 is a block diagram of a determination module in still another picture processing apparatus according to an exemplary embodiment.
  • FIG. 14 is a block diagram of another image processing apparatus according to an exemplary embodiment.
  • FIG. 15 is a block diagram of a device suitable for picture processing, according to an exemplary embodiment.
  • the embodiment of the present disclosure provides a picture processing method, which can be used in a terminal device. As shown in FIG. 1, the method includes steps S101-S104:
  • in step S101, the picture is detected, and at least one face included in the picture is detected;
  • in step S102, face feature information of each face in the picture is acquired;
  • in step S103, each face is determined as a target face or a non-target face according to the face feature information;
  • in step S104, preset removal processing is performed on the non-target face.
  • the plurality of faces in the picture are identified, each face is determined as a target face or a non-target face, and preset removal processing is performed on the non-target faces, so that when the target faces are clustered to obtain face albums, non-target faces do not appear in the albums; this prevents people unrelated to the user from being placed in the face albums and improves the user experience.
  • the photographed photo may include a face of a passerby that the user does not want to photograph, that is, a non-target face, in addition to the target face that the user wants to photograph.
  • the target face and the non-target face in the picture are determined, and the non-target face is subjected to preset removal processing, so that the face album obtained by clustering does not include passersby whom the user did not want to photograph, thereby improving the user experience.
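The overall flow of steps S101-S104 can be sketched as follows. This is a minimal illustration, not the patent's concrete implementation: the `Face` record, the `is_target` rule, and all thresholds here are assumptions chosen only for the example.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Face:
    x: float           # face centre x, normalised to [0, 1]
    y: float           # face centre y, normalised to [0, 1]
    area_ratio: float  # fraction of the picture the face occupies

def is_target(face: Face) -> bool:
    # Placeholder S103 rule: keep faces near the centre that are reasonably large.
    in_centre = 0.25 <= face.x <= 0.75 and 0.25 <= face.y <= 0.75
    return in_centre and face.area_ratio > 0.02

def process_picture(faces: List[Face]) -> List[Face]:
    # S103/S104: keep only target faces; non-target faces are "removed",
    # i.e. excluded from any later clustering.
    return [f for f in faces if is_target(f)]

faces = [Face(0.5, 0.5, 0.10),   # subject near the centre
         Face(0.9, 0.4, 0.01)]   # small passer-by near the edge
kept = process_picture(faces)
print(len(kept))  # 1
```

Only the kept faces would then be passed to the clustering step.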
  • the facial feature information includes at least one of the following:
  • the face feature information may be the position of the face in the picture, the tilt angle of the face in the picture, the depth information of the face in the picture, the proportion of the area occupied by the face in the picture, the number of times the face appears in all current pictures, and so on.
  • the target face and the non-target face are determined according to one piece of information or a plurality of pieces of information, so that the determination result has high accuracy.
  • the foregoing step S103 may include steps S201-S202:
  • in step S201, the target shooting area is determined according to the position of each face in the picture and the distribution of the faces;
  • in step S202, the face in the target shooting area is determined as the target face, and the face outside the target shooting area is determined as a non-target face.
  • the target shooting area may be determined according to the position of the faces in the picture and their distribution; for example, a certain area around the center of the picture may be determined as the target shooting area. The faces in the target shooting area can then be determined as target faces, and the faces outside the target shooting area as non-target faces.
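One simple reading of steps S201-S202 is a fixed central-region test. The region bounds used here (the middle half of the picture) are an assumption, since the patent leaves unspecified exactly how the target shooting area is derived from the face distribution.

```python
def split_by_region(centres, region=(0.25, 0.25, 0.75, 0.75)):
    """centres: list of (x, y) face centres, normalised to [0, 1].
    region: (x_min, y_min, x_max, y_max) of the target shooting area."""
    x0, y0, x1, y1 = region
    target, non_target = [], []
    for (x, y) in centres:
        # S202: inside the target shooting area -> target face, else non-target.
        (target if x0 <= x <= x1 and y0 <= y <= y1 else non_target).append((x, y))
    return target, non_target

t, n = split_by_region([(0.5, 0.5), (0.05, 0.9)])
print(t, n)  # [(0.5, 0.5)] [(0.05, 0.9)]
```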
  • when the face feature information includes the position of the face in the picture or the depth information of the face in the picture, and there are at least two faces, the above step S103 includes steps S301-S304:
  • in step S301, the target shooting area is determined according to the position of each face in the picture and the distribution of the faces;
  • in step S302, the face in the target shooting area is determined as the target face, and the distance between each other face in the picture and the target face is calculated, or the difference between the depth information of each other face in the picture and the depth information of the target face is calculated;
  • in step S303, when the distance is less than the preset distance or the difference is less than the preset difference, the other face is determined as a target face;
  • in step S304, when the distance is greater than or equal to the preset distance or the difference is greater than or equal to the preset difference, the other face is determined as a non-target face.
  • for example, the target shooting area may be determined according to the position of each face in the picture and the distribution of the faces; suppose the target shooting area is the central area of the picture. Face A in the central area is determined as the target face, and the distance between another face B in the picture and the target face is calculated. If the distance is less than the preset distance, face B is also determined as a target face, and the target face set becomes [A, B]. If there is still a face C in the picture, the distance between face C and the target face set [A, B] is then calculated; if the distance between face C and any face in [A, B] is less than the preset distance, face C is also determined as a target face, and so on, until all faces in the picture have been determined as target faces or non-target faces.
  • alternatively, with the target shooting area again taken as the central area of the picture, face A in the central area may be determined as the target face, and the difference between the depth information of another face B in the picture and that of the target face may be calculated. If the difference is less than the preset difference, face B is determined as a target face; otherwise, it is determined as a non-target face. This improves the accuracy of the face determination.
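Steps S301-S304 can be sketched as a transitive expansion of the target set: seed faces come from the target shooting area, and any remaining face close enough to an already-accepted target face is promoted (the same loop applies if depth-information differences replace 2-D distances). The `max_dist` threshold and the normalised coordinates are assumptions.

```python
import math

def expand_targets(faces, seed_idx, max_dist=0.3):
    """faces: list of (x, y) centres; seed_idx: indices of faces inside
    the target shooting area. Faces within max_dist of any accepted
    target face become target faces too; the rest are non-target."""
    targets = set(seed_idx)
    changed = True
    while changed:            # repeat until no face is promoted (S303)
        changed = False
        for i, (x, y) in enumerate(faces):
            if i in targets:
                continue
            for j in list(targets):
                xj, yj = faces[j]
                if math.hypot(x - xj, y - yj) < max_dist:
                    targets.add(i)
                    changed = True
                    break
    return targets

faces = [(0.5, 0.5), (0.6, 0.5), (0.95, 0.95)]  # A, B, C
result = sorted(expand_targets(faces, {0}))
print(result)  # [0, 1]  -- A seeds, B is promoted, C stays non-target
```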
  • when the face feature information includes the tilt angle of the face in the picture, the foregoing step S103 may include steps S401-S402:
  • in step S401, a face whose tilt angle is less than the preset angle is determined as a target face;
  • in step S402, a face whose tilt angle is greater than or equal to the preset angle is determined as a non-target face.
  • a face in the picture may be determined according to its tilt angle: when the tilt angle is less than the preset angle, it is determined as a target face; when the tilt angle is greater than or equal to the preset angle, it is determined as a non-target face. In effect, the orientation of the face in the picture is determined.
  • a face feature point localization algorithm may be used to locate the feature points of each face and thereby determine its orientation. A face oriented toward the lens, that is, a face facing forward, may be determined as a target face; a face whose deviation from the forward orientation exceeds a certain angle may be determined as a non-target face.
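A minimal version of the tilt-angle rule of steps S401-S402 (the 30° preset angle is an arbitrary assumed value, and the angles are assumed to come from an upstream feature-point localization step):

```python
def classify_by_tilt(tilt_angles_deg, preset_angle=30.0):
    """tilt_angles_deg: one angle per face, in degrees, relative to
    facing the lens directly (0 means a fully frontal face)."""
    return ['target' if abs(a) < preset_angle else 'non-target'
            for a in tilt_angles_deg]

print(classify_by_tilt([5.0, -12.0, 60.0]))
# ['target', 'target', 'non-target']
```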
  • when the face feature information includes the proportion of the area occupied by the face in the picture, the foregoing step S103 may include steps S501-S502:
  • in step S501, a face whose ratio is greater than the preset ratio is determined as a target face;
  • in step S502, a face whose ratio is less than or equal to the preset ratio is determined as a non-target face.
  • a face may also be determined according to the proportion of the area it occupies in the picture. If a face occupies a large proportion of the picture, it is likely the main subject and may be determined as a target face; if a face occupies only a small proportion, it is likely not the main subject but a passerby captured accidentally, and may be determined as a non-target face.
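The area-proportion rule of steps S501-S502 might look like the following; the 2% preset ratio is an assumed value, not one given by the patent.

```python
def classify_by_area(face_boxes, image_w, image_h, preset_ratio=0.02):
    """face_boxes: list of (w, h) face bounding boxes in pixels."""
    labels = []
    for (w, h) in face_boxes:
        ratio = (w * h) / (image_w * image_h)  # fraction of picture occupied
        labels.append('target' if ratio > preset_ratio else 'non-target')
    return labels

# A 200x250 face in a 1920x1080 picture (~2.4%) vs a 40x50 face (~0.1%).
print(classify_by_area([(200, 250), (40, 50)], 1920, 1080))
# ['target', 'non-target']
```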
  • when the face feature information includes the number of times the face appears in all current pictures, step S103 may include steps S601-S602:
  • in step S601, a face whose number of occurrences is greater than the preset number is determined as a target face;
  • in step S602, a face whose number of occurrences is less than or equal to the preset number is determined as a non-target face.
  • if a face appears many times in all current pictures, it is likely a target face; if it appears only rarely, for example only once, it is likely a passerby captured accidentally. In this way, all faces in the picture can be accurately determined as target faces or non-target faces.
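The occurrence-count rule of steps S601-S602 presupposes that faces have already been matched to identities across pictures (e.g. by a recognition model). Assuming such identity labels exist, the counting itself is straightforward; the identity names and `preset_times=1` here are illustrative assumptions.

```python
from collections import Counter

def classify_by_frequency(picture_identities, preset_times=1):
    """picture_identities: one list of recognised identity labels per
    picture. An identity seen more than preset_times across all pictures
    is a target face; otherwise it is treated as a passer-by."""
    counts = Counter(ident for pic in picture_identities for ident in pic)
    return {ident: ('target' if n > preset_times else 'non-target')
            for ident, n in counts.items()}

albums = [['alice', 'bob'], ['alice'], ['alice', 'stranger']]
result = classify_by_frequency(albums)
print(result)
```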
  • a face may also be determined as a target face or a non-target face using any two or more of the above kinds of face feature information. For example, when the face feature information includes both the position of the face in the picture and the tilt angle of the face in the picture, the two corresponding determination methods may be superimposed: the target shooting area is determined according to the position of each face in the picture and the distribution of the faces, the faces in the target shooting area are determined as target faces, and the tilt angle is then examined for the faces outside the target shooting area; among those, a face whose tilt angle is less than the preset angle is determined as a target face, and a face whose tilt angle is greater than or equal to the preset angle is determined as a non-target face.
  • another superposition may also be adopted: the target shooting area is determined according to the position of each face in the picture and the distribution of the faces; within the target area, a face whose tilt angle is less than the preset angle is determined as a target face and a face whose tilt angle is greater than or equal to the preset angle is determined as a non-target face; and the faces outside the target area are determined as non-target faces, thereby increasing the accuracy of the face division.
  • other combinations of two or more kinds of face feature information may be superimposed in a similar way.
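The first superposition described above (region membership decides first; the tilt-angle rule is applied only to faces outside the target shooting area) can be sketched as follows. The region bounds and the 30° preset angle are assumed values.

```python
def classify_combined(faces, region=(0.25, 0.25, 0.75, 0.75), preset_angle=30.0):
    """faces: list of dicts with 'centre' (x, y in [0, 1]) and 'tilt'
    in degrees. Inside the region -> target; outside, the tilt rule applies."""
    x0, y0, x1, y1 = region
    labels = []
    for f in faces:
        x, y = f['centre']
        if x0 <= x <= x1 and y0 <= y <= y1:
            labels.append('target')
        else:
            labels.append('target' if abs(f['tilt']) < preset_angle
                          else 'non-target')
    return labels

faces = [{'centre': (0.5, 0.5), 'tilt': 70.0},   # inside the area
         {'centre': (0.9, 0.1), 'tilt': 10.0},   # outside, facing the lens
         {'centre': (0.9, 0.9), 'tilt': 80.0}]   # outside, turned away
labels = classify_combined(faces)
print(labels)  # ['target', 'target', 'non-target']
```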
  • the foregoing method further includes step S701:
  • step S701 face clustering is performed on the target face to obtain a face album corresponding to the target face, wherein each face album corresponds to a person's face.
  • after the non-target faces are removed, the target faces may be clustered to obtain a face album corresponding to each target face. Each face album corresponds to one person's face, which is convenient for the user to view.
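The patent does not specify the clustering algorithm used in step S701. As a stand-in, a greedy threshold clustering over hypothetical face embeddings shows the album-forming idea; real systems would use a stronger method and learned embeddings.

```python
import math

def cluster_faces(embeddings, threshold=0.6):
    """Greedy threshold clustering: each face joins the first existing
    album whose representative embedding lies within `threshold`
    (Euclidean distance); otherwise it starts a new album."""
    albums = []  # list of (representative embedding, [member indices])
    for i, e in enumerate(embeddings):
        for rep, members in albums:
            if math.dist(rep, e) < threshold:
                members.append(i)
                break
        else:
            albums.append((e, [i]))
    return [members for _, members in albums]

embs = [(0.0, 0.0), (0.1, 0.0), (2.0, 2.0)]
result = cluster_faces(embs)
print(result)  # [[0, 1], [2]] -- two nearby faces share one album
```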
  • FIG. 8 is a block diagram of a picture processing apparatus, which may be implemented as part or all of a terminal device by software, hardware, or a combination of both, according to an exemplary embodiment. As shown in FIG. 8, the image processing apparatus includes:
  • the detecting module 81 is configured to detect the picture and detect at least two faces included in the picture;
  • the obtaining module 82 is configured to acquire facial feature information of each face detected by the detecting module 81 in the picture;
  • the determining module 83 is configured to determine the each face as a target face or a non-target face according to the face feature information acquired by the obtaining module 82;
  • the removal module 84 is configured to perform a preset removal process on the non-target face determined by the determination module 83.
  • the plurality of faces in the picture are identified, each face is determined as a target face or a non-target face, and preset removal processing is performed on the non-target faces, so that when the target faces are clustered to obtain face albums, non-target faces do not appear in the albums; this prevents people unrelated to the user from being placed in the face albums and improves the user experience.
  • the photographed photo may include a face of a passerby that the user does not want to photograph, that is, a non-target face, in addition to the target face that the user wants to photograph.
  • the target face and the non-target face in the picture are determined, and the non-target face is subjected to preset removal processing, so that the face album obtained by clustering does not include passersby whom the user did not want to photograph, thereby improving the user experience.
  • the facial feature information includes at least one of the following:
  • the position of the face in the picture, the tilt angle of the face in the picture, the depth information of the face in the picture, the proportion of the area occupied by the face in the picture, and the number of times the face appears in all current pictures.
  • the face feature information may be the position of the face in the picture, the tilt angle of the face in the picture, the depth information of the face in the picture, the proportion of the area occupied by the face in the picture, the number of times the face appears in all current pictures, and so on.
  • the target face and the non-target face are determined according to one piece of information or a plurality of pieces of information, so that the determination result has high accuracy.
  • the determining module 83 includes:
  • the first area determining sub-module 91 is configured to determine the target photographing area according to the position of each face in the picture and the distribution of the face;
  • the first determining sub-module 92 is configured to determine a face in the target shooting area determined by the first area determining sub-module 91 as the target face, and to determine a face outside the target shooting area as a non-target face.
  • the target shooting area may be determined according to the position of the faces in the picture and their distribution; for example, a certain area around the center of the picture may be determined as the target shooting area. The faces in the target shooting area can then be determined as target faces, and the faces outside the target shooting area as non-target faces.
  • the determining module 83 includes:
  • the second area determining sub-module 101 is configured to determine a target photographing area according to a position of each face in the picture and a face distribution situation;
  • the calculation sub-module 102 is configured to determine a face in the target shooting area as the target face, and to calculate the distance between each other face in the picture and the target face, or the difference between the depth information of each other face in the picture and the depth information of the target face;
  • the second determining sub-module 103 is configured to determine the other face as a target face when the distance is less than the preset distance or the difference is less than the preset difference;
  • the third determining sub-module 104 is configured to determine the other face as a non-target face when the distance is greater than or equal to the preset distance or the difference is greater than or equal to the preset difference.
  • for example, the target shooting area may be determined according to the position of each face in the picture and the distribution of the faces; suppose the target shooting area is the central area of the picture. Face A in the central area is determined as the target face, and the distance between another face B in the picture and the target face is calculated. If the distance is less than the preset distance, face B is also determined as a target face, and the target face set becomes [A, B]. If there is still a face C in the picture, the distance between face C and the target face set [A, B] is then calculated; if the distance between face C and any face in [A, B] is less than the preset distance, face C is also determined as a target face, and so on, until all faces in the picture have been determined as target faces or non-target faces.
  • alternatively, with the target shooting area again taken as the central area of the picture, face A in the central area may be determined as the target face, and the difference between the depth information of another face B in the picture and that of the target face may be calculated. If the difference is less than the preset difference, face B is determined as a target face; otherwise, it is determined as a non-target face. This improves the accuracy of the face determination.
  • when the facial feature information includes the tilt angle of the face in the picture, the determining module 83 includes:
  • the fourth determining sub-module 111 is configured to determine the face whose tilt angle is less than the preset angle as the target human face;
  • the fifth determining sub-module 112 is configured to determine the face whose tilt angle is greater than or equal to the preset angle as a non-target face.
  • a face in the picture may be determined according to its tilt angle: when the tilt angle is less than the preset angle, it is determined as a target face; when the tilt angle is greater than or equal to the preset angle, it is determined as a non-target face. In effect, the orientation of the face in the picture is determined.
  • a face feature point localization algorithm may be used to locate the feature points of each face and thereby determine its orientation. A face oriented toward the lens, that is, a face facing forward, may be determined as a target face; a face whose deviation from the forward orientation exceeds a certain angle may be determined as a non-target face.
  • when the facial feature information includes the proportion of the area occupied by the face in the picture, the determining module 83 includes:
  • a sixth determining sub-module 121 configured to determine, as the target human face, the face whose ratio is greater than a preset ratio
  • the seventh determining sub-module 122 is configured to determine the face whose ratio is less than or equal to the preset ratio as a non-target face.
  • a face may also be determined according to the proportion of the area it occupies in the picture. If a face occupies a large proportion of the picture, it is likely the main subject and may be determined as a target face; if a face occupies only a small proportion, it is likely not the main subject but a passerby captured accidentally, and may be determined as a non-target face.
  • when the facial feature information includes the number of times the face appears in all current pictures, the determining module 83 includes:
  • the eighth determining sub-module 131 is configured to determine the face whose number of times is greater than the preset number of times as the target face;
  • the ninth determining sub-module 132 is configured to determine the face whose number of times is less than or equal to the preset number of times as a non-target face.
  • if a face appears many times in all current pictures, it is likely a target face; if it appears only rarely, for example only once, it is likely a passerby captured accidentally. In this way, all faces in the picture can be accurately determined as target faces or non-target faces.
  • The foregoing apparatus further includes:
  • a clustering processing module 141 configured to perform face clustering on the target faces to obtain face albums corresponding to the target faces, where each face album corresponds to one person's face.
  • After the preset removal processing is performed on the non-target faces, the target faces may be clustered to obtain a face album corresponding to each target face. After clustering, each face album corresponds to one person's face, which is convenient for the user to browse.
  • A picture processing apparatus is provided, including:
  • a processor; and
  • a memory for storing processor-executable instructions;
  • wherein the processor is configured to: detect a picture to detect at least two faces contained in the picture; acquire face feature information of each face in the picture; determine each face as a target face or a non-target face according to the face feature information; and perform preset removal processing on the non-target faces.
  • The above processor may also be configured such that the face feature information includes at least one of the following:
  • the position of the face in the picture, the tilt angle of the face in the picture, the depth information of the face in the picture, the proportion of the area occupied by the face in the picture, and the number of times the face appears in all current pictures.
  • The above processor may also be configured such that determining each face as a target face or a non-target face according to the face feature information includes:
  • determining a target shooting area according to the position of each face in the picture and the face distribution; and
  • determining a face within the target shooting area as the target face, and determining a face outside the target shooting area as a non-target face.
  • The above processor may also be configured such that, when the face feature information includes the position of the face in the picture or the depth information of the face in the picture, and there are at least two faces,
  • determining each face as a target face or a non-target face according to the face feature information includes:
  • determining a target shooting area according to the position of each face in the picture and the face distribution;
  • determining a face within the target shooting area as the target face, and calculating the distance between another face in the picture and the target face, or calculating the difference between the depth information of the other face and that of the target face;
  • determining the other face as a target face when the distance is less than a preset distance or the difference is less than a preset difference; and
  • determining the other face as a non-target face when the distance is greater than or equal to the preset distance or the difference is greater than or equal to the preset difference.
  • The above processor may also be configured such that, when the face feature information includes the tilt angle of the face in the picture,
  • determining each face as a target face or a non-target face according to the face feature information includes:
  • determining a face whose tilt angle is less than a preset angle as a target face; and
  • determining a face whose tilt angle is greater than or equal to the preset angle as a non-target face.
  • The above processor may also be configured such that, when the face feature information includes the proportion of the area occupied by the face in the picture,
  • determining each face as a target face or a non-target face according to the face feature information includes:
  • determining a face whose proportion is greater than a preset proportion as a target face; and
  • determining a face whose proportion is less than or equal to the preset proportion as a non-target face.
  • The above processor may also be configured such that, when the face feature information includes the number of times the face appears in all current pictures,
  • determining each face as a target face or a non-target face according to the face feature information includes:
  • determining a face whose number of appearances is greater than a preset number as a target face; and
  • determining a face whose number of appearances is less than or equal to the preset number as a non-target face.
  • The above processor may also be configured such that the method further includes:
  • performing face clustering on the target faces to obtain face albums corresponding to the target faces, where each face album corresponds to one person's face.
  • FIG. 15 is a block diagram of a picture processing apparatus, which is applicable to a terminal device, according to an exemplary embodiment.
  • device 1500 can be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a gaming console, a tablet device, a medical device, a fitness device, a personal digital assistant, and the like.
  • Apparatus 1500 can include one or more of the following components: processing component 1502, memory 1504, power component 1506, multimedia component 1508, audio component 1510, input/output (I/O) interface 1515, sensor component 1514, and communication component 1516.
  • Processing component 1502 typically controls the overall operation of device 1500, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations.
  • Processing component 1502 can include one or more processors 1520 to execute instructions to perform all or part of the steps of the above described methods.
  • Processing component 1502 can include one or more modules to facilitate interaction between processing component 1502 and other components.
  • processing component 1502 can include a multimedia module to facilitate interaction between multimedia component 1508 and processing component 1502.
  • Memory 1504 is configured to store various types of data to support operation at device 1500. Examples of such data include instructions for any application or method operating on device 1500, contact data, phone book data, messages, pictures, videos, and the like.
  • The memory 1504 can be implemented by any type of volatile or non-volatile storage device, or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk, or optical disk.
  • Power component 1506 provides power to various components of device 1500.
  • Power component 1506 can include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for device 1500.
  • Multimedia component 1508 includes a screen that provides an output interface between the device 1500 and the user.
  • the screen can include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen can be implemented as a touch screen to receive input signals from the user.
  • the touch panel includes one or more touch sensors to sense touches, slides, and gestures on the touch panel. The touch sensor may sense not only the boundary of the touch or sliding action, but also the duration and pressure associated with the touch or slide operation.
  • the multimedia component 1508 includes a front camera and/or a rear camera. When the device 1500 is in an operation mode, such as a shooting mode or a video mode, the front camera and/or the rear camera can receive external multimedia data. Each front and rear camera can be a fixed optical lens system or have focal length and optical zoom capabilities.
  • the audio component 1510 is configured to output and/or input an audio signal.
  • the audio component 1510 includes a microphone (MIC) that is configured to receive an external audio signal when the device 1500 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode.
  • the received audio signal may be further stored in memory 1504 or transmitted via communication component 1516.
  • audio component 1510 also includes a speaker for outputting an audio signal.
  • the I/O interface 1515 provides an interface between the processing component 1502 and the peripheral interface module, which may be a keyboard, a click wheel, a button, or the like. These buttons may include, but are not limited to, a home button, a volume button, a start button, and a lock button.
  • Sensor assembly 1514 includes one or more sensors for providing device 1500 with a status assessment of various aspects.
  • Sensor assembly 1514 can detect the open/closed state of device 1500 and the relative positioning of components, such as the display and keypad of device 1500. Sensor assembly 1514 can also detect a change in position of device 1500 or of one of its components, the presence or absence of user contact with device 1500, the orientation or acceleration/deceleration of device 1500, and temperature changes of device 1500.
  • Sensor assembly 1514 can include a proximity sensor configured to detect the presence of nearby objects without any physical contact.
  • Sensor assembly 1514 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications.
  • In some embodiments, the sensor component 1514 can also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
  • Communication component 1516 is configured to facilitate wired or wireless communication between device 1500 and other devices.
  • the device 1500 can access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof.
  • communication component 1516 receives broadcast signals or broadcast associated information from an external broadcast management system via a broadcast channel.
  • the communication component 1516 also includes a near field communication (NFC) module to facilitate short range communication.
  • the NFC module can be implemented based on radio frequency identification (RFID) technology, infrared data association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
  • Device 1500 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, for performing the above methods.
  • In an exemplary embodiment, there is also provided a non-transitory computer-readable storage medium comprising instructions, such as the memory 1504 comprising instructions executable by the processor 1520 of the apparatus 1500 to perform the above method.
  • For example, the non-transitory computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
  • A non-transitory computer-readable storage medium is provided: when instructions in the storage medium are executed by a processor of the apparatus 1500, the apparatus 1500 is enabled to perform the picture processing method described above, the method comprising: detecting a picture to detect at least two faces contained in the picture; acquiring face feature information of each face in the picture; determining each face as a target face or a non-target face according to the face feature information; and performing preset removal processing on the non-target faces.
  • In one embodiment, the face feature information includes at least one of the following:
  • the position of the face in the picture, the tilt angle of the face in the picture, the depth information of the face in the picture, the proportion of the area occupied by the face in the picture, and the number of times the face appears in all current pictures.
  • In one embodiment, when the face feature information includes the position of the face in the picture,
  • determining each face as a target face or a non-target face according to the face feature information includes:
  • determining a target shooting area according to the position of each face in the picture and the face distribution; and
  • determining a face within the target shooting area as the target face, and determining a face outside the target shooting area as a non-target face.
  • In one embodiment, when the face feature information includes the position of the face in the picture or the depth information of the face in the picture, and there are at least two faces,
  • determining each face as a target face or a non-target face according to the face feature information includes:
  • determining a target shooting area according to the position of each face in the picture and the face distribution;
  • determining a face within the target shooting area as the target face, and calculating the distance between another face in the picture and the target face, or calculating the difference between the depth information of the other face and that of the target face;
  • determining the other face as a target face when the distance is less than a preset distance or the difference is less than a preset difference; and
  • determining the other face as a non-target face when the distance is greater than or equal to the preset distance or the difference is greater than or equal to the preset difference.
  • In one embodiment, when the face feature information includes the tilt angle of the face in the picture,
  • determining each face as a target face or a non-target face according to the face feature information includes:
  • determining a face whose tilt angle is less than a preset angle as a target face; and
  • determining a face whose tilt angle is greater than or equal to the preset angle as a non-target face.
  • In one embodiment, when the face feature information includes the proportion of the area occupied by the face in the picture,
  • determining each face as a target face or a non-target face according to the face feature information includes:
  • determining a face whose proportion is greater than a preset proportion as a target face; and
  • determining a face whose proportion is less than or equal to the preset proportion as a non-target face.
  • In one embodiment, when the face feature information includes the number of times the face appears in all current pictures,
  • determining each face as a target face or a non-target face according to the face feature information includes:
  • determining a face whose number of appearances is greater than a preset number as a target face; and
  • determining a face whose number of appearances is less than or equal to the preset number as a non-target face.
  • In one embodiment, the method further includes:
  • performing face clustering on the target faces to obtain face albums corresponding to the target faces, where each face album corresponds to one person's face.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)
  • Processing Or Creating Images (AREA)
  • Image Processing (AREA)

Abstract

A picture processing method and apparatus. The method includes: detecting a picture to detect at least one face contained in the picture (S101); acquiring face feature information of each face in the picture (S102); determining each face as a target face or a non-target face according to the face feature information (S103); and performing preset removal processing on the non-target face (S104). With this technical solution, multiple faces in a picture are recognized, all faces are determined as target faces or non-target faces, and preset removal processing is performed on the non-target faces. In this way, when the faces are clustered to obtain face albums, no non-target face appears in the face albums, preventing people irrelevant to the user from being placed into the face albums and improving the user experience.

Description

PICTURE PROCESSING METHOD AND APPARATUS
CROSS-REFERENCE TO RELATED APPLICATIONS
This application is based on and claims priority to Chinese Patent Application No. 201510847294.6, filed on November 26, 2015, the entire contents of which are incorporated herein by reference.
TECHNICAL FIELD
The present disclosure relates to the field of picture processing technologies, and in particular, to a picture processing method and apparatus.
BACKGROUND
An album application is one of the most commonly used applications on mobile terminals such as smartphones and tablet computers. The album application is used to manage and display the pictures on the mobile terminal.
At present, the album application on a terminal can cluster the faces in pictures and group identical or similar faces into one album, thereby forming face albums.
SUMMARY
Embodiments of the present disclosure provide a picture processing method and apparatus, including the following technical solutions.
According to a first aspect of the embodiments of the present disclosure, a picture processing method is provided, including:
detecting a picture to detect at least one face contained in the picture;
acquiring face feature information of each face in the picture;
determining each face as a target face or a non-target face according to the face feature information; and
performing preset removal processing on the non-target face.
In one embodiment, the face feature information includes at least one of the following:
the position of the face in the picture, the tilt angle of the face in the picture, the depth information of the face in the picture, the proportion of the area occupied by the face in the picture, and the number of times the face appears in all current pictures.
In one embodiment, when the face feature information includes the position of the face in the picture,
determining each face as a target face or a non-target face according to the face feature information includes:
determining a target shooting area according to the position of each face in the picture and the face distribution; and
determining a face within the target shooting area as the target face, and determining a face outside the target shooting area as a non-target face.
In one embodiment, when the face feature information includes the position of the face in the picture or the depth information of the face in the picture, and there are at least two faces,
determining each face as a target face or a non-target face according to the face feature information includes:
determining a target shooting area according to the position of each face in the picture and the face distribution;
determining a face within the target shooting area as the target face, and calculating the distance between another face in the picture and the target face, or calculating the difference between the depth information of the other face in the picture and the depth information of the target face;
determining the other face as a target face when the distance is less than a preset distance or the difference is less than a preset difference; and
determining the other face as a non-target face when the distance is greater than or equal to the preset distance or the difference is greater than or equal to the preset difference.
In one embodiment, when the face feature information includes the tilt angle of the face in the picture,
determining each face as a target face or a non-target face according to the face feature information includes:
determining a face whose tilt angle is less than a preset angle as a target face; and
determining a face whose tilt angle is greater than or equal to the preset angle as a non-target face.
In one embodiment, when the face feature information includes the proportion of the area occupied by the face in the picture,
determining each face as a target face or a non-target face according to the face feature information includes:
determining a face whose proportion is greater than a preset proportion as a target face; and
determining a face whose proportion is less than or equal to the preset proportion as a non-target face.
In one embodiment, when the face feature information includes the number of times the face appears in all current pictures,
determining each face as a target face or a non-target face according to the face feature information includes:
determining a face whose number of appearances is greater than a preset number as a target face; and
determining a face whose number of appearances is less than or equal to the preset number as a non-target face.
In one embodiment, the method further includes:
performing face clustering on the target faces to obtain face albums corresponding to the target faces, where each face album corresponds to one person's face.
According to a second aspect of the embodiments of the present disclosure, a picture processing apparatus is provided, including:
a detection module configured to detect a picture to detect at least two faces contained in the picture;
an acquiring module configured to acquire face feature information of each face, detected by the detection module, in the picture;
a determining module configured to determine each face as a target face or a non-target face according to the face feature information acquired by the acquiring module; and
a removal module configured to perform preset removal processing on the non-target face determined by the determining module.
In one embodiment, the face feature information includes at least one of the following:
the position of the face in the picture, the tilt angle of the face in the picture, the depth information of the face in the picture, the proportion of the area occupied by the face in the picture, and the number of times the face appears in all current pictures.
In one embodiment, the determining module includes:
a first area determining sub-module configured to determine a target shooting area according to the position of each face in the picture and the face distribution; and
a first determining sub-module configured to determine a face within the target shooting area determined by the first area determining sub-module as the target face, and determine a face outside the target shooting area as a non-target face.
In one embodiment, when the face feature information includes the position of the face in the picture or the depth information of the face in the picture, and there are at least two faces, the determining module includes:
a second area determining sub-module configured to determine a target shooting area according to the position of each face in the picture and the face distribution;
a calculating sub-module configured to determine a face within the target shooting area as the target face, and calculate the distance between another face in the picture and the target face, or calculate the difference between the depth information of the other face in the picture and the depth information of the target face;
a second determining sub-module configured to determine the other face as a target face when the distance is less than a preset distance or the difference is less than a preset difference; and
a third determining sub-module configured to determine the other face as a non-target face when the distance is greater than or equal to the preset distance or the difference is greater than or equal to the preset difference.
In one embodiment, when the face feature information includes the tilt angle of the face in the picture, the determining module includes:
a fourth determining sub-module configured to determine a face whose tilt angle is less than a preset angle as a target face; and
a fifth determining sub-module configured to determine a face whose tilt angle is greater than or equal to the preset angle as a non-target face.
In one embodiment, when the face feature information includes the proportion of the area occupied by the face in the picture, the determining module includes:
a sixth determining sub-module configured to determine a face whose proportion is greater than a preset proportion as a target face; and
a seventh determining sub-module configured to determine a face whose proportion is less than or equal to the preset proportion as a non-target face.
In one embodiment, when the face feature information includes the number of times the face appears in all current pictures, the determining module includes:
an eighth determining sub-module configured to determine a face whose number of appearances is greater than a preset number as a target face; and
a ninth determining sub-module configured to determine a face whose number of appearances is less than or equal to the preset number as a non-target face.
In one embodiment, the apparatus further includes:
a clustering processing module configured to perform face clustering on the target faces to obtain face albums corresponding to the target faces, where each face album corresponds to one person's face.
The technical solutions provided by the embodiments of the present disclosure may have the following beneficial effects:
In the above technical solutions, multiple faces in a picture are recognized, each face is determined as a target face or a non-target face, and preset removal processing is performed on the non-target faces. In this way, when the faces are clustered to obtain face albums, no non-target face appears in the face albums, preventing people irrelevant to the user from being placed into the face albums and improving the user experience.
It is to be understood that the above general description and the following detailed description are exemplary and explanatory only and are not restrictive of the present disclosure.
BRIEF DESCRIPTION OF THE DRAWINGS
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the present disclosure.
Fig. 1 is a flowchart of a picture processing method according to an exemplary embodiment.
Fig. 2 is a flowchart of step S103 in a picture processing method according to an exemplary embodiment.
Fig. 3 is a flowchart of step S103 in another picture processing method according to an exemplary embodiment.
Fig. 4 is a flowchart of step S103 in still another picture processing method according to an exemplary embodiment.
Fig. 5 is a flowchart of step S103 in yet another picture processing method according to an exemplary embodiment.
Fig. 6 is a flowchart of step S103 in yet another picture processing method according to an exemplary embodiment.
Fig. 7 is a flowchart of another picture processing method according to an exemplary embodiment.
Fig. 8 is a block diagram of a picture processing apparatus according to an exemplary embodiment.
Fig. 9 is a block diagram of a determining module in a picture processing apparatus according to an exemplary embodiment.
Fig. 10 is a block diagram of a determining module in another picture processing apparatus according to an exemplary embodiment.
Fig. 11 is a block diagram of a determining module in still another picture processing apparatus according to an exemplary embodiment.
Fig. 12 is a block diagram of a determining module in yet another picture processing apparatus according to an exemplary embodiment.
Fig. 13 is a block diagram of a determining module in yet another picture processing apparatus according to an exemplary embodiment.
Fig. 14 is a block diagram of another picture processing apparatus according to an exemplary embodiment.
Fig. 15 is a block diagram of an apparatus applicable to picture processing according to an exemplary embodiment.
DETAILED DESCRIPTION
Exemplary embodiments will be described in detail herein, examples of which are illustrated in the accompanying drawings. When the following description refers to the drawings, the same numerals in different drawings denote the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatuses and methods consistent with some aspects of the present disclosure as recited in the appended claims.
An embodiment of the present disclosure provides a picture processing method, which may be used in a terminal device. As shown in Fig. 1, the method includes steps S101-S104:
In step S101, a picture is detected to detect at least one face contained in the picture;
In step S102, face feature information of each face in the picture is acquired;
In step S103, each face is determined as a target face or a non-target face according to the face feature information;
In step S104, preset removal processing is performed on the non-target face.
In this embodiment, multiple faces in a picture are recognized, each face is determined as a target face or a non-target face, and preset removal processing is performed on the non-target faces. In this way, when the faces are clustered to obtain face albums, no non-target face appears in the face albums, preventing people irrelevant to the user from being placed into the face albums and improving the user experience.
For example, when a user takes a photo in a crowded scene, the photo may contain, in addition to the target faces the user intended to shoot, the faces of passers-by the user did not intend to shoot, that is, non-target faces. In the present disclosure, the target faces and non-target faces in a picture are determined, and preset removal processing is performed on the non-target faces, so that the face albums obtained by clustering do not contain passers-by the user did not intend to shoot, thereby improving the user experience.
In one embodiment, the face feature information includes at least one of the following:
the position of the face in the picture, the tilt angle of the face in the picture, the depth information of the face in the picture, the proportion of the area occupied by the face in the picture, and the number of times the face appears in all current pictures.
In this embodiment, the face feature information may be the position of the face in the picture, the tilt angle of the face in the picture, the depth information of the face in the picture, the proportion of the area occupied by the face in the picture, the number of times the face appears in all current pictures, and so on. The target faces and non-target faces are determined according to one or more of the above items of information, so that the determination result has high accuracy.
The different determination methods corresponding to different items of face feature information are described in detail below.
As shown in Fig. 2, in one embodiment, when the face feature information includes the position of the face in the picture, step S103 may include steps S201-S202:
In step S201, a target shooting area is determined according to the position of each face in the picture and the face distribution;
In step S202, a face within the target shooting area is determined as the target face, and a face outside the target shooting area is determined as a non-target face.
In this embodiment, regardless of whether the picture contains multiple faces or a single face, the target shooting area can be determined according to the positions of the faces in the picture and the face distribution. For example, a certain region at the center of the picture may be determined as the target area; all faces within the target shooting area may be determined as target faces, and all faces outside the target area may be determined as non-target faces.
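As an illustrative sketch only (the embodiment does not fix the shape of the target shooting area), the position-based determination of steps S201-S202 could be implemented as follows, assuming faces are given as (x, y, w, h) boxes and the central half of the frame is taken as the target area:

```python
def classify_by_position(faces, img_w, img_h, region_ratio=0.5):
    """Split detected faces into target / non-target faces by position.

    A face whose center falls inside a central region covering
    `region_ratio` of each image dimension is a target face; all other
    faces are non-target. `region_ratio=0.5` is an assumed default.
    """
    x0 = img_w * (1 - region_ratio) / 2
    y0 = img_h * (1 - region_ratio) / 2
    x1, y1 = img_w - x0, img_h - y0
    targets, non_targets = [], []
    for face in faces:
        x, y, w, h = face
        cx, cy = x + w / 2, y + h / 2  # face center
        if x0 <= cx <= x1 and y0 <= cy <= y1:
            targets.append(face)
        else:
            non_targets.append(face)
    return targets, non_targets
```

In practice the target area would be derived from the face distribution rather than fixed, but the thresholding structure stays the same.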
As shown in Fig. 3, in one embodiment, when the face feature information includes the position of the face in the picture or the depth information of the face in the picture, and there are at least two faces, step S103 includes steps S301-S304:
In step S301, a target shooting area is determined according to the position of each face in the picture and the face distribution;
In step S302, a face within the target shooting area is determined as the target face, and the distance between another face in the picture and the target face is calculated, or the difference between the depth information of the other face in the picture and the depth information of the target face is calculated;
In step S303, the other face is determined as a target face when the distance is less than a preset distance or the difference is less than a preset difference;
In step S304, the other face is determined as a non-target face when the distance is greater than or equal to the preset distance or the difference is greater than or equal to the preset difference.
In this embodiment, when the picture contains at least two faces, the target shooting area can be determined according to the position of each face in the picture and the face distribution. For example, if the target shooting area is the central region of the picture, face A in the central region may be determined as the target face, and the distance between another face B in the picture and the target face is calculated. If the distance is less than the preset distance, face B is also determined as a target face; at this point, the target face set is [A, B]. If the picture also contains a face C, the distance between face C and the target face set [A, B] is then calculated; if the distance between face C and one of the faces in the target face set [A, B] is less than the preset distance, face C is determined as a target face, and so on, until all faces in the picture have been determined as target faces or non-target faces, thereby improving the accuracy of face determination.
In this embodiment, when the picture contains at least two faces, the target shooting area can also be determined according to the position of each face in the picture and the face distribution. For example, if the target shooting area is the central region of the picture, face A in the central region may be determined as the target face, and the difference between the depth information of another face B in the picture and that of the target face is calculated. If the difference is less than the preset difference, face B is also determined as a target face; otherwise, it is determined as a non-target face, thereby improving the accuracy of face determination.
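The A → [A, B] → [A, B, C] propagation described for steps S301-S304 can be sketched as an iterative target-set expansion. This is a hedged illustration, not the patent's implementation; faces are assumed to be (x, y, w, h) boxes and only the distance variant is shown:

```python
import math

def expand_targets(seed_faces, other_faces, preset_distance):
    """Grow the target-face set: a face is promoted to target if its
    center lies within `preset_distance` of any current target face;
    repeat until no more faces can be promoted. Faces left over are
    the non-target faces."""
    def center(f):
        x, y, w, h = f
        return (x + w / 2, y + h / 2)

    targets = list(seed_faces)       # faces in the target shooting area
    remaining = list(other_faces)    # candidate faces outside it
    changed = True
    while changed:
        changed = False
        for face in remaining[:]:    # iterate over a copy while removing
            cx, cy = center(face)
            for t in targets:
                tx, ty = center(t)
                if math.hypot(cx - tx, cy - ty) < preset_distance:
                    targets.append(face)
                    remaining.remove(face)
                    changed = True
                    break
    return targets, remaining
```

The depth-information variant would replace the center-distance test with a comparison of depth values against the preset difference.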
As shown in Fig. 4, in one embodiment, when the face feature information includes the tilt angle of the face in the picture, step S103 may include steps S401-S402:
In step S401, a face whose tilt angle is less than a preset angle is determined as a target face;
In step S402, a face whose tilt angle is greater than or equal to the preset angle is determined as a non-target face.
In this embodiment, the faces in the picture can be determined according to their tilt angles: a face whose tilt angle is less than the preset angle is determined as a target face, and a face whose tilt angle is greater than or equal to the preset angle is determined as a non-target face. That is, the orientation of each face in the picture is determined. A face feature point localization algorithm may be used to locate the feature points of each face and thereby determine its orientation. A face oriented toward the lens, that is, a face facing forward, may be determined as a target face; if a face is turned sideways beyond a certain angle, it is determined as a non-target face.
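Once a landmark localization step has produced a tilt angle per face, the thresholding of steps S401-S402 is straightforward. The sketch below assumes (face_id, angle_in_degrees) pairs and an illustrative 30-degree preset angle, neither of which is fixed by the embodiment:

```python
def classify_by_tilt(face_angles, preset_angle=30.0):
    """Keep faces turned less than `preset_angle` away from the camera
    as target faces; the rest are non-target faces.

    `face_angles` is a list of (face_id, tilt_angle_degrees) pairs,
    the angle coming from a face feature point localization step.
    """
    targets = [fid for fid, a in face_angles if abs(a) < preset_angle]
    non_targets = [fid for fid, a in face_angles if abs(a) >= preset_angle]
    return targets, non_targets
```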
As shown in Fig. 5, in one embodiment, when the face feature information includes the proportion of the area occupied by the face in the picture, step S103 may include steps S501-S502:
In step S501, a face whose proportion is greater than a preset proportion is determined as a target face;
In step S502, a face whose proportion is less than or equal to the preset proportion is determined as a non-target face.
In this embodiment, a face can also be determined according to the proportion of the area it occupies in the picture. For example, if a face occupies a large proportion of the picture, it is likely the main subject and may be determined as a target face; if a face occupies a small proportion of the picture, it is likely not the main subject but a passer-by captured by accident, and may be determined as a non-target face.
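The area-ratio rule of steps S501-S502 can be sketched as follows; the 5% preset proportion is an assumption for illustration, and faces are again assumed to be (x, y, w, h) boxes:

```python
def classify_by_area_ratio(faces, img_w, img_h, preset_ratio=0.05):
    """Faces occupying more than `preset_ratio` of the picture area are
    treated as main subjects (target faces); smaller faces are treated
    as passers-by (non-target faces)."""
    img_area = img_w * img_h
    targets, non_targets = [], []
    for face in faces:
        x, y, w, h = face
        if (w * h) / img_area > preset_ratio:
            targets.append(face)
        else:
            non_targets.append(face)
    return targets, non_targets
```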
As shown in Fig. 6, in one embodiment, when the face feature information includes the number of times the face appears in all current pictures, step S103 may include steps S601-S602:
In step S601, a face whose number of appearances is greater than a preset number is determined as a target face;
In step S602, a face whose number of appearances is less than or equal to the preset number is determined as a non-target face.
In this embodiment, the determination can also be made according to the number of times a face appears in all current pictures: if a face appears many times, it is the target face; if it appears only rarely, for example just once, it is a passer-by captured by accident. In this way, all faces in the picture are accurately determined as target faces or non-target faces.
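The appearance-count rule of steps S601-S602 presupposes that faces across pictures have already been matched to identities (e.g. by a recognition step). Under that assumption, a minimal sketch, with a preset count of 1 chosen for illustration:

```python
from collections import Counter

def classify_by_occurrences(picture_face_ids, preset_count=1):
    """`picture_face_ids` maps picture name -> list of face identity
    labels found in that picture. Identities seen more than
    `preset_count` times across all current pictures become target
    faces; the rest (e.g. a passer-by seen once) are non-target."""
    counts = Counter(fid for ids in picture_face_ids.values() for fid in ids)
    targets = {fid for fid, c in counts.items() if c > preset_count}
    non_targets = set(counts) - targets
    return targets, non_targets
```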
Of course, a face can also be determined as a target face or a non-target face through any two or more of the above items of face feature information. For example, when the face feature information includes both the position of the face in the picture and the tilt angle of the face in the picture, the determination methods corresponding to these two items can be combined: the target shooting area is determined according to the position of each face in the picture and the face distribution, a face within the target shooting area is determined as a target face, and for a face outside the target shooting area, its tilt angle is judged, a face whose tilt angle is greater than or equal to the preset angle being determined as a non-target face and a face whose tilt angle is less than the preset angle being determined as a target face. Another combination may also be adopted: the target shooting area is determined according to the position of each face in the picture and the face distribution, a face within the target area whose tilt angle is less than the preset angle is determined as a target face, a face whose tilt angle is greater than or equal to the preset angle is determined as a non-target face, and a face outside the target area is determined as a non-target face, thereby increasing the accuracy of face classification. Combinations of any other two or more items of face feature information can be implemented with reference to the above combination methods.
As shown in Fig. 7, in one embodiment, the above method further includes step S701:
In step S701, face clustering is performed on the target faces to obtain face albums corresponding to the target faces, where each face album corresponds to one person's face.
In this embodiment, after the preset removal processing is performed on the non-target faces, face clustering may be performed on the target faces to obtain the face album corresponding to each target face. After clustering, each face album corresponds to one person's face, which is convenient for the user to browse.
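The clustering step S701 is not pinned to a particular algorithm; as a hedged sketch, a greedy single-pass clustering over target-face feature vectors could look like this, where the Euclidean `threshold` and the feature vectors themselves are assumptions (a real album feature would use a stronger face embedding and clustering method):

```python
def cluster_faces(embeddings, threshold):
    """Greedy single-pass clustering of target-face feature vectors:
    a face joins the first album whose representative (first member)
    is within `threshold` Euclidean distance; otherwise it starts a
    new album. Each resulting album corresponds to one person."""
    albums = []  # each album is a list of embedding vectors
    for emb in embeddings:
        for album in albums:
            rep = album[0]
            dist = sum((a - b) ** 2 for a, b in zip(emb, rep)) ** 0.5
            if dist < threshold:
                album.append(emb)
                break
        else:
            albums.append([emb])
    return albums
```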
The following are apparatus embodiments of the present disclosure, which may be used to perform the method embodiments of the present disclosure.
Fig. 8 is a block diagram of a picture processing apparatus according to an exemplary embodiment. The apparatus may be implemented as part or all of a terminal device through software, hardware, or a combination of both. As shown in Fig. 8, the picture processing apparatus includes:
a detection module 81 configured to detect a picture to detect at least two faces contained in the picture;
an acquiring module 82 configured to acquire face feature information of each face, detected by the detection module 81, in the picture;
a determining module 83 configured to determine each face as a target face or a non-target face according to the face feature information acquired by the acquiring module 82; and
a removal module 84 configured to perform preset removal processing on the non-target face determined by the determining module 83.
In this embodiment, multiple faces in a picture are recognized, each face is determined as a target face or a non-target face, and preset removal processing is performed on the non-target faces. In this way, when the faces are clustered to obtain face albums, no non-target face appears in the face albums, preventing people irrelevant to the user from being placed into the face albums and improving the user experience.
For example, when a user takes a photo in a crowded scene, the photo may contain, in addition to the target faces the user intended to shoot, the faces of passers-by the user did not intend to shoot, that is, non-target faces. In the present disclosure, the target faces and non-target faces in a picture are determined, and preset removal processing is performed on the non-target faces, so that the face albums obtained by clustering do not contain passers-by the user did not intend to shoot, thereby improving the user experience.
In one embodiment, the face feature information includes at least one of the following:
the position of the face in the picture, the tilt angle of the face in the picture, the depth information of the face in the picture, the proportion of the area occupied by the face in the picture, and the number of times the face appears in all current pictures.
In this embodiment, the face feature information may be the position of the face in the picture, the tilt angle of the face in the picture, the depth information of the face in the picture, the proportion of the area occupied by the face in the picture, the number of times the face appears in all current pictures, and so on. The target faces and non-target faces are determined according to one or more of the above items of information, so that the determination result has high accuracy.
As shown in Fig. 9, in one embodiment, the determining module 83 includes:
a first area determining sub-module 91 configured to determine a target shooting area according to the position of each face in the picture and the face distribution; and
a first determining sub-module 92 configured to determine a face within the target shooting area determined by the first area determining sub-module as the target face, and determine a face outside the target shooting area as a non-target face.
In this embodiment, regardless of whether the picture contains multiple faces or a single face, the target shooting area can be determined according to the positions of the faces in the picture and the face distribution. For example, a certain region at the center of the picture may be determined as the target area; all faces within the target shooting area may be determined as target faces, and all faces outside the target area may be determined as non-target faces.
As shown in Fig. 10, in one embodiment, when the face feature information includes the position of the face in the picture or the depth information of the face in the picture, and there are at least two faces, the determining module 83 includes:
a second area determining sub-module 101 configured to determine a target shooting area according to the position of each face in the picture and the face distribution;
a calculating sub-module 102 configured to determine a face within the target shooting area as the target face, and calculate the distance between another face in the picture and the target face, or calculate the difference between the depth information of the other face in the picture and the depth information of the target face;
a second determining sub-module 103 configured to determine the other face as a target face when the distance is less than a preset distance or the difference is less than a preset difference; and
a third determining sub-module 104 configured to determine the other face as a non-target face when the distance is greater than or equal to the preset distance or the difference is greater than or equal to the preset difference.
In this embodiment, when the picture contains at least two faces, the target shooting area can be determined according to the position of each face in the picture and the face distribution. For example, if the target shooting area is the central region of the picture, face A in the central region may be determined as the target face, and the distance between another face B in the picture and the target face is calculated. If the distance is less than the preset distance, face B is also determined as a target face; at this point, the target face set is [A, B]. If the picture also contains a face C, the distance between face C and the target face set [A, B] is then calculated; if the distance between face C and one of the faces in the target face set [A, B] is less than the preset distance, face C is determined as a target face, and so on, until all faces in the picture have been determined as target faces or non-target faces, thereby improving the accuracy of face determination.
In this embodiment, when the picture contains at least two faces, the target shooting area can also be determined according to the position of each face in the picture and the face distribution. For example, if the target shooting area is the central region of the picture, face A in the central region may be determined as the target face, and the difference between the depth information of another face B in the picture and that of the target face is calculated. If the difference is less than the preset difference, face B is also determined as a target face; otherwise, it is determined as a non-target face, thereby improving the accuracy of face determination.
As shown in Fig. 11, in one embodiment, when the face feature information includes the tilt angle of the face in the picture, the determining module 83 includes:
a fourth determining sub-module 111 configured to determine a face whose tilt angle is less than a preset angle as a target face; and
a fifth determining sub-module 112 configured to determine a face whose tilt angle is greater than or equal to the preset angle as a non-target face.
In this embodiment, the faces in the picture can be determined according to their tilt angles: a face whose tilt angle is less than the preset angle is determined as a target face, and a face whose tilt angle is greater than or equal to the preset angle is determined as a non-target face. That is, the orientation of each face in the picture is determined. A face feature point localization algorithm may be used to locate the feature points of each face and thereby determine its orientation. A face oriented toward the lens, that is, a face facing forward, may be determined as a target face; if a face is turned sideways beyond a certain angle, it is determined as a non-target face.
As shown in Fig. 12, in one embodiment, when the face feature information includes the proportion of the area occupied by the face in the picture, the determining module 83 includes:
a sixth determining sub-module 121 configured to determine a face whose proportion is greater than a preset proportion as a target face; and
a seventh determining sub-module 122 configured to determine a face whose proportion is less than or equal to the preset proportion as a non-target face.
In this embodiment, a face can also be determined according to the proportion of the area it occupies in the picture. For example, if a face occupies a large proportion of the picture, it is likely the main subject and may be determined as a target face; if a face occupies a small proportion of the picture, it is likely not the main subject but a passer-by captured by accident, and may be determined as a non-target face.
As shown in Fig. 13, in one embodiment, when the face feature information includes the number of times the face appears in all current pictures, the determining module 83 includes:
an eighth determining sub-module 131 configured to determine a face whose number of appearances is greater than a preset number as a target face; and
a ninth determining sub-module 132 configured to determine a face whose number of appearances is less than or equal to the preset number as a non-target face.
In this embodiment, the determination can also be made according to the number of times a face appears in all current pictures: if a face appears many times, it is the target face; if it appears only rarely, for example just once, it is a passer-by captured by accident. In this way, all faces in the picture are accurately determined as target faces or non-target faces.
As shown in Fig. 14, in one embodiment, the above apparatus further includes:
a clustering processing module 141 configured to perform face clustering on the target faces to obtain face albums corresponding to the target faces, where each face album corresponds to one person's face.
In this embodiment, after the preset removal processing is performed on the non-target faces, face clustering may be performed on the target faces to obtain the face album corresponding to each target face. After clustering, each face album corresponds to one person's face, which is convenient for the user to browse.
According to a third aspect of the embodiments of the present disclosure, a picture processing apparatus is provided, including:
a processor; and
a memory for storing processor-executable instructions;
wherein the processor is configured to:
detect a picture to detect at least two faces contained in the picture;
acquire face feature information of each face in the picture;
determine each face as a target face or a non-target face according to the face feature information; and
perform preset removal processing on the non-target face.
The above processor may also be configured such that the face feature information includes at least one of the following:
the position of the face in the picture, the tilt angle of the face in the picture, the depth information of the face in the picture, the proportion of the area occupied by the face in the picture, and the number of times the face appears in all current pictures.
The above processor may also be configured such that determining each face as a target face or a non-target face according to the face feature information includes:
determining a target shooting area according to the position of each face in the picture and the face distribution; and
determining a face within the target shooting area as the target face, and determining a face outside the target shooting area as a non-target face.
The above processor may also be configured such that, when the face feature information includes the position of the face in the picture or the depth information of the face in the picture, and there are at least two faces,
determining each face as a target face or a non-target face according to the face feature information includes:
determining a target shooting area according to the position of each face in the picture and the face distribution;
determining a face within the target shooting area as the target face, and calculating the distance between another face in the picture and the target face, or calculating the difference between the depth information of the other face in the picture and the depth information of the target face;
determining the other face as a target face when the distance is less than a preset distance or the difference is less than a preset difference; and
determining the other face as a non-target face when the distance is greater than or equal to the preset distance or the difference is greater than or equal to the preset difference.
The above processor may also be configured such that, when the face feature information includes the tilt angle of the face in the picture,
determining each face as a target face or a non-target face according to the face feature information includes:
determining a face whose tilt angle is less than a preset angle as a target face; and
determining a face whose tilt angle is greater than or equal to the preset angle as a non-target face.
The above processor may also be configured such that, when the face feature information includes the proportion of the area occupied by the face in the picture,
determining each face as a target face or a non-target face according to the face feature information includes:
determining a face whose proportion is greater than a preset proportion as a target face; and
determining a face whose proportion is less than or equal to the preset proportion as a non-target face.
The above processor may also be configured such that, when the face feature information includes the number of times the face appears in all current pictures,
determining each face as a target face or a non-target face according to the face feature information includes:
determining a face whose number of appearances is greater than a preset number as a target face; and
determining a face whose number of appearances is less than or equal to the preset number as a non-target face.
The above processor may also be configured such that the method further includes:
performing face clustering on the target faces to obtain face albums corresponding to the target faces, where each face album corresponds to one person's face.
With respect to the apparatuses in the above embodiments, the specific manner in which each module performs operations has been described in detail in the embodiments relating to the method, and will not be elaborated here.
Fig. 15 is a block diagram of an apparatus for picture processing according to an exemplary embodiment; the apparatus is applicable to a terminal device. For example, the apparatus 1500 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a gaming console, a tablet device, a medical device, fitness equipment, a personal digital assistant, and the like.
The apparatus 1500 may include one or more of the following components: a processing component 1502, a memory 1504, a power component 1506, a multimedia component 1508, an audio component 1510, an input/output (I/O) interface 1515, a sensor component 1514, and a communication component 1516.
The processing component 1502 typically controls the overall operations of the apparatus 1500, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 1502 may include one or more processors 1520 to execute instructions to complete all or part of the steps of the above methods. In addition, the processing component 1502 may include one or more modules to facilitate interaction between the processing component 1502 and other components. For example, the processing component 1502 may include a multimedia module to facilitate interaction between the multimedia component 1508 and the processing component 1502.
The memory 1504 is configured to store various types of data to support operation at the apparatus 1500. Examples of such data include instructions for any application or method operated on the apparatus 1500, contact data, phone book data, messages, pictures, videos, and the like. The memory 1504 may be implemented by any type of volatile or non-volatile storage device, or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk, or optical disk.
The power component 1506 provides power to the various components of the apparatus 1500. The power component 1506 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for the apparatus 1500.
The multimedia component 1508 includes a screen that provides an output interface between the apparatus 1500 and the user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensors may not only sense the boundary of a touch or swipe action, but also detect the duration and pressure associated with the touch or swipe operation. In some embodiments, the multimedia component 1508 includes a front camera and/or a rear camera. When the apparatus 1500 is in an operation mode, such as a shooting mode or a video mode, the front camera and/or the rear camera can receive external multimedia data. Each of the front camera and the rear camera may be a fixed optical lens system or have focal length and optical zoom capability.
The audio component 1510 is configured to output and/or input audio signals. For example, the audio component 1510 includes a microphone (MIC) configured to receive external audio signals when the apparatus 1500 is in an operation mode, such as a call mode, a recording mode, or a voice recognition mode. The received audio signals may be further stored in the memory 1504 or transmitted via the communication component 1516. In some embodiments, the audio component 1510 also includes a speaker for outputting audio signals.
The I/O interface 1515 provides an interface between the processing component 1502 and peripheral interface modules, which may be a keyboard, a click wheel, buttons, and the like. These buttons may include, but are not limited to, a home button, a volume button, a start button, and a lock button.
The sensor component 1514 includes one or more sensors for providing the apparatus 1500 with status assessments of various aspects. For example, the sensor component 1514 can detect the open/closed state of the apparatus 1500 and the relative positioning of components, such as the display and keypad of the apparatus 1500. The sensor component 1514 can also detect a change in position of the apparatus 1500 or of one of its components, the presence or absence of user contact with the apparatus 1500, the orientation or acceleration/deceleration of the apparatus 1500, and temperature changes of the apparatus 1500. The sensor component 1514 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor component 1514 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 1514 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 1516 is configured to facilitate wired or wireless communication between the apparatus 1500 and other devices. The apparatus 1500 can access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof. In an exemplary embodiment, the communication component 1516 receives broadcast signals or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 1516 also includes a near field communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on radio frequency identification (RFID) technology, infrared data association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 1500 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, for performing the above methods.
In an exemplary embodiment, there is also provided a non-transitory computer-readable storage medium comprising instructions, such as the memory 1504 comprising instructions executable by the processor 1520 of the apparatus 1500 to perform the above methods. For example, the non-transitory computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
A non-transitory computer-readable storage medium is provided: when instructions in the storage medium are executed by a processor of the apparatus 1500, the apparatus 1500 is enabled to perform the above picture processing method, the method including:
detecting a picture to detect at least two faces contained in the picture;
acquiring face feature information of each face in the picture;
determining each face as a target face or a non-target face according to the face feature information; and
performing preset removal processing on the non-target face.
In one embodiment, the face feature information includes at least one of the following:
the position of the face in the picture, the tilt angle of the face in the picture, the depth information of the face in the picture, the proportion of the area occupied by the face in the picture, and the number of times the face appears in all current pictures.
In one embodiment, when the face feature information includes the position of the face in the picture,
determining each face as a target face or a non-target face according to the face feature information includes:
determining a target shooting area according to the position of each face in the picture and the face distribution; and
determining a face within the target shooting area as the target face, and determining a face outside the target shooting area as a non-target face.
In one embodiment, when the face feature information includes the position of the face in the picture or the depth information of the face in the picture, and there are at least two faces,
determining each face as a target face or a non-target face according to the face feature information includes:
determining a target shooting area according to the position of each face in the picture and the face distribution;
determining a face within the target shooting area as the target face, and calculating the distance between another face in the picture and the target face, or calculating the difference between the depth information of the other face in the picture and the depth information of the target face;
determining the other face as a target face when the distance is less than a preset distance or the difference is less than a preset difference; and
determining the other face as a non-target face when the distance is greater than or equal to the preset distance or the difference is greater than or equal to the preset difference.
In one embodiment, when the face feature information includes the tilt angle of the face in the picture,
determining each face as a target face or a non-target face according to the face feature information includes:
determining a face whose tilt angle is less than a preset angle as a target face; and
determining a face whose tilt angle is greater than or equal to the preset angle as a non-target face.
In one embodiment, when the face feature information includes the proportion of the area occupied by the face in the picture,
determining each face as a target face or a non-target face according to the face feature information includes:
determining a face whose proportion is greater than a preset proportion as a target face; and
determining a face whose proportion is less than or equal to the preset proportion as a non-target face.
In one embodiment, when the face feature information includes the number of times the face appears in all current pictures,
determining each face as a target face or a non-target face according to the face feature information includes:
determining a face whose number of appearances is greater than a preset number as a target face; and
determining a face whose number of appearances is less than or equal to the preset number as a non-target face.
In one embodiment, the method further includes:
performing face clustering on the target faces to obtain face albums corresponding to the target faces, where each face album corresponds to one person's face.
Other embodiments of the present disclosure will be readily apparent to those skilled in the art from consideration of the specification and practice of the disclosure herein. This application is intended to cover any variations, uses, or adaptations of the present disclosure that follow its general principles and include common knowledge or customary technical means in the art not disclosed herein. The specification and embodiments are to be considered exemplary only, with the true scope and spirit of the present disclosure being indicated by the following claims.
It should be understood that the present disclosure is not limited to the precise structures described above and illustrated in the accompanying drawings, and that various modifications and changes may be made without departing from its scope. The scope of the present disclosure is limited only by the appended claims.

Claims (17)

  1. A picture processing method, comprising:
    detecting a picture to detect at least one face contained in the picture;
    acquiring face feature information of each face in the picture;
    determining each face as a target face or a non-target face according to the face feature information; and
    performing preset removal processing on the non-target face.
  2. The method according to claim 1, wherein the face feature information comprises at least one of the following:
    a position of the face in the picture, a tilt angle of the face in the picture, depth information of the face in the picture, a proportion of an area occupied by the face in the picture, and the number of times the face appears in all current pictures.
  3. The method according to claim 2, wherein determining each face as a target face or a non-target face according to the face feature information comprises:
    determining a target shooting area according to the position of each face in the picture and the face distribution; and
    determining a face within the target shooting area as the target face, and determining a face outside the target shooting area as a non-target face.
  4. The method according to claim 2, wherein, when the face feature information comprises the position of the face in the picture or the depth information of the face in the picture, and there are at least two faces,
    determining each face as a target face or a non-target face according to the face feature information comprises:
    determining a target shooting area according to the position of each face in the picture and the face distribution;
    determining a face within the target shooting area as the target face, and calculating a distance between another face in the picture and the target face, or calculating a difference between depth information of the other face in the picture and depth information of the target face;
    determining the other face as a target face when the distance is less than a preset distance or the difference is less than a preset difference; and
    determining the other face as a non-target face when the distance is greater than or equal to the preset distance or the difference is greater than or equal to the preset difference.
  5. The method according to claim 2, wherein, when the face feature information comprises the tilt angle of the face in the picture,
    determining each face as a target face or a non-target face according to the face feature information comprises:
    determining a face whose tilt angle is less than a preset angle as a target face; and
    determining a face whose tilt angle is greater than or equal to the preset angle as a non-target face.
  6. The method according to claim 2, wherein, when the face feature information comprises the proportion of the area occupied by the face in the picture,
    determining each face as a target face or a non-target face according to the face feature information comprises:
    determining a face whose proportion is greater than a preset proportion as a target face; and
    determining a face whose proportion is less than or equal to the preset proportion as a non-target face.
  7. The method according to claim 2, wherein, when the face feature information comprises the number of times the face appears in all current pictures,
    determining each face as a target face or a non-target face according to the face feature information comprises:
    determining a face whose number of appearances is greater than a preset number as a target face; and
    determining a face whose number of appearances is less than or equal to the preset number as a non-target face.
  8. The method according to any one of claims 1 to 7, further comprising:
    performing face clustering on the target faces to obtain face albums corresponding to the target faces, wherein each face album corresponds to one person's face.
  9. A picture processing apparatus, comprising:
    a detection module configured to detect a picture to detect at least one face contained in the picture;
    an acquiring module configured to acquire face feature information of each face, detected by the detection module, in the picture;
    a determining module configured to determine each face as a target face or a non-target face according to the face feature information acquired by the acquiring module; and
    a removal module configured to perform preset removal processing on the non-target face determined by the determining module.
  10. The apparatus according to claim 9, wherein the face feature information comprises at least one of the following:
    a position of the face in the picture, a tilt angle of the face in the picture, depth information of the face in the picture, a proportion of an area occupied by the face in the picture, and the number of times the face appears in all current pictures.
  11. The apparatus according to claim 10, wherein the determining module comprises:
    a first area determining sub-module configured to determine a target shooting area according to the position of each face in the picture and the face distribution; and
    a first determining sub-module configured to determine a face within the target shooting area determined by the first area determining sub-module as the target face, and determine a face outside the target shooting area as a non-target face.
  12. The apparatus according to claim 10, wherein, when the face feature information comprises the position of the face in the picture or the depth information of the face in the picture, and there are at least two faces, the determining module comprises:
    a second area determining sub-module configured to determine a target shooting area according to the position of each face in the picture and the face distribution;
    a calculating sub-module configured to determine a face within the target shooting area as the target face, and calculate a distance between another face in the picture and the target face, or calculate a difference between depth information of the other face in the picture and depth information of the target face;
    a second determining sub-module configured to determine the other face as a target face when the distance is less than a preset distance or the difference is less than a preset difference; and
    a third determining sub-module configured to determine the other face as a non-target face when the distance is greater than or equal to the preset distance or the difference is greater than or equal to the preset difference.
  13. The apparatus according to claim 10, wherein, when the face feature information comprises the tilt angle of the face in the picture, the determining module comprises:
    a fourth determining sub-module configured to determine a face whose tilt angle is less than a preset angle as a target face; and
    a fifth determining sub-module configured to determine a face whose tilt angle is greater than or equal to the preset angle as a non-target face.
  14. The apparatus according to claim 10, wherein, when the face feature information comprises the proportion of the area occupied by the face in the picture, the determining module comprises:
    a sixth determining sub-module configured to determine a face whose proportion is greater than a preset proportion as a target face; and
    a seventh determining sub-module configured to determine a face whose proportion is less than or equal to the preset proportion as a non-target face.
  15. The apparatus according to claim 10, wherein, when the face feature information comprises the number of times the face appears in all current pictures, the determining module comprises:
    an eighth determining sub-module configured to determine a face whose number of appearances is greater than a preset number as a target face; and
    a ninth determining sub-module configured to determine a face whose number of appearances is less than or equal to the preset number as a non-target face.
  16. The apparatus according to any one of claims 9 to 15, further comprising:
    a clustering processing module configured to perform face clustering on the target faces to obtain face albums corresponding to the target faces, wherein each face album corresponds to one person's face.
  17. A picture processing apparatus, comprising:
    a processor; and
    a memory for storing processor-executable instructions;
    wherein the processor is configured to:
    detect a picture to detect at least two faces contained in the picture;
    acquire face feature information of each face in the picture;
    determine each face as a target face or a non-target face according to the face feature information; and
    perform preset removal processing on the non-target face.
PCT/CN2015/099701 2015-11-26 2015-12-30 图片处理方法及装置 WO2017088266A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
JP2016566784A JP2018506755A (ja) 2015-11-26 2015-12-30 画像処理方法、画像処理方法、コンピュータプログラム、及びコンピュータ読み取り可能な記憶媒体
MX2017012839A MX2017012839A (es) 2015-11-26 2015-12-30 Metodo y aparato de procesamiento de imagenes.
RU2017102520A RU2665217C2 (ru) 2015-11-26 2015-12-30 Способ и устройство обработки изображений

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201510847294.6 2015-11-26
CN201510847294.6A CN105260732A (zh) 2015-11-26 2015-11-26 图片处理方法及装置

Publications (1)

Publication Number Publication Date
WO2017088266A1 true WO2017088266A1 (zh) 2017-06-01

Family

ID=55100413

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2015/099701 WO2017088266A1 (zh) 2015-11-26 2015-12-30 图片处理方法及装置

Country Status (7)

Country Link
US (1) US20170154206A1 (zh)
EP (1) EP3173970A1 (zh)
JP (1) JP2018506755A (zh)
CN (1) CN105260732A (zh)
MX (1) MX2017012839A (zh)
RU (1) RU2665217C2 (zh)
WO (1) WO2017088266A1 (zh)

Families Citing this family (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105827952B (zh) * 2016-02-01 2019-05-17 Vivo Mobile Communication Co., Ltd. Photographing method for removing a specified object, and mobile terminal
CN107122356B (zh) * 2016-02-24 2020-10-09 Beijing Xiaomi Mobile Software Co., Ltd. Method and apparatus for displaying a facial attractiveness score, and electronic device
CN105744165A (zh) * 2016-02-25 2016-07-06 Shenzhen Tinno Wireless Technology Co., Ltd. Photographing method, apparatus and terminal
CN106453853A (zh) * 2016-09-22 2017-02-22 Shenzhen Gionee Communication Equipment Co., Ltd. Photographing method and terminal
CN106791449B (zh) * 2017-02-27 2020-02-11 Nubia Technology Co., Ltd. Photo shooting method and apparatus
CN107578006B (zh) * 2017-08-31 2020-06-23 Vivo Mobile Communication Co., Ltd. Photo processing method and mobile terminal
CN108875522B (zh) * 2017-12-21 2022-06-10 Beijing Megvii Technology Co., Ltd. Face clustering method, apparatus, system and storage medium
CN108182714B (zh) * 2018-01-02 2023-09-15 Tencent Technology (Shenzhen) Co., Ltd. Image processing method and apparatus, storage medium
CN110348272B (zh) * 2018-04-03 2024-08-20 Beijing Jingdong Shangke Information Technology Co., Ltd. Dynamic face recognition method, apparatus, system and medium
CN109034106B (zh) * 2018-08-15 2022-06-10 Beijing Xiaomi Mobile Software Co., Ltd. Face data cleaning method and apparatus
CN109040588A (zh) * 2018-08-16 2018-12-18 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Face image photographing method, apparatus, storage medium and terminal
CN109190539B (zh) * 2018-08-24 2020-07-07 Alibaba Group Holding Limited Face recognition method and apparatus
CN109784157B (zh) * 2018-12-11 2021-10-29 Koubei (Shanghai) Information Technology Co., Ltd. Image processing method, apparatus and system
CN110533773A (zh) * 2019-09-02 2019-12-03 Beijing HJIMI Technology Co., Ltd. Three-dimensional face reconstruction method, apparatus and related device
CN111401315B (zh) * 2020-04-10 2023-08-22 Zhejiang Dahua Technology Co., Ltd. Video-based face recognition method, recognition apparatus and storage apparatus
CN114418865A (zh) * 2020-10-28 2022-04-29 Beijing Xiaomi Mobile Software Co., Ltd. Image processing method, apparatus, device and storage medium
CN115118866A (zh) * 2021-03-22 2022-09-27 Shenzhen OnePlus Technology Co., Ltd. Image shooting method, apparatus and smart terminal
CN114399622A (zh) * 2022-03-23 2022-04-26 Honor Device Co., Ltd. Image processing method and related apparatus
CN116541550B (zh) * 2023-07-06 2024-07-02 Guangzhou Fangtu Technology Co., Ltd. Photo classification method and apparatus for self-service photographing equipment, electronic device and medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009048234A (ja) * 2007-08-13 2009-03-05 Takumi Vision Co., Ltd. Face recognition system and face recognition method
CN104408426A (zh) * 2014-11-27 2015-03-11 Xiaomi Inc. Method and apparatus for removing glasses from face images
CN104484858A (zh) * 2014-12-31 2015-04-01 Xiaomi Inc. Person image processing method and apparatus
CN104601876A (zh) * 2013-10-30 2015-05-06 Wistron Corp. Passerby detection method and apparatus
CN104794462A (zh) * 2015-05-11 2015-07-22 Beijing Smartisan Digital Co., Ltd. Person image processing method and apparatus
CN104820675A (zh) * 2015-04-08 2015-08-05 Xiaomi Inc. Album display method and apparatus

Family Cites Families (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100601997B1 (ko) * 2004-10-12 2006-07-18 Samsung Electronics Co., Ltd. Person-based digital photo clustering method and apparatus, and person-based digital photo albuming method and apparatus using the same
JP4680161B2 (ja) * 2006-09-28 2011-05-11 Fujifilm Corporation Image evaluation apparatus, method, and program
US8031914B2 (en) * 2006-10-11 2011-10-04 Hewlett-Packard Development Company, L.P. Face-based image clustering
JP4671133B2 (ja) * 2007-02-09 2011-04-13 Fujifilm Corporation Image processing apparatus
JP4453721B2 (ja) * 2007-06-13 2010-04-21 Sony Corporation Image capturing apparatus, image capturing method, and computer program
KR101362765B1 (ko) * 2007-11-07 2014-02-13 Samsung Electronics Co., Ltd. Photographing apparatus and control method thereof
US9639740B2 (en) * 2007-12-31 2017-05-02 Applied Recognition Inc. Face detection and recognition
JP4945486B2 (ja) * 2008-03-18 2012-06-06 Fujifilm Corporation Image importance determination apparatus, automatic album layout apparatus, program, image importance determination method, and automatic album layout method
CN101388114B (zh) * 2008-09-03 2011-11-23 Vimicro Corporation Human body pose estimation method and system
JP2010087599A (ja) * 2008-09-29 2010-04-15 Fujifilm Corp Imaging apparatus, method, and program
JP2010226558A (ja) * 2009-03-25 2010-10-07 Sony Corp Image processing apparatus, image processing method, and program
US8526684B2 (en) * 2009-12-14 2013-09-03 Microsoft Corporation Flexible image comparison and face matching application
RU2427911C1 * 2010-02-05 2011-08-27 Firm "S1 Co., Ltd." Method for detecting faces in an image using a cascade of classifiers
TW201223209A (en) * 2010-11-30 2012-06-01 Inventec Corp Sending a digital image method and apparatus thereof
US9025836B2 (en) * 2011-10-28 2015-05-05 Intellectual Ventures Fund 83 Llc Image recomposition from face detection and facial features
CN102737235B (zh) * 2012-06-28 2014-05-07 Institute of Automation, Chinese Academy of Sciences Head pose estimation method based on depth information and color images
CN103247074A (zh) * 2013-04-23 2013-08-14 Suzhou Huaman Information Service Co., Ltd. 3D photographing method combining depth information and face analysis
JP2015118522A (ja) * 2013-12-18 2015-06-25 Fujifilm Corp Album generation apparatus, album generation method, album generation program, and recording medium storing the program
JP2015162850A (ja) * 2014-02-28 2015-09-07 Fujifilm Corp Image composition apparatus, method, program, and recording medium storing the program

Also Published As

Publication number Publication date
JP2018506755A (ja) 2018-03-08
RU2017102520A (ru) 2018-07-26
EP3173970A1 (en) 2017-05-31
CN105260732A (zh) 2016-01-20
RU2665217C2 (ru) 2018-08-28
US20170154206A1 (en) 2017-06-01
MX2017012839A (es) 2018-01-23
RU2017102520A3 (zh) 2018-07-26

Similar Documents

Publication Publication Date Title
WO2017088266A1 (zh) Picture processing method and apparatus
EP3179711B1 (en) Method and apparatus for preventing photograph from being shielded
US9674395B2 (en) Methods and apparatuses for generating photograph
JP6267363B2 (ja) Method and apparatus for capturing images
US20170034409A1 (en) Method, device, and computer-readable medium for image photographing
US10115019B2 (en) Video categorization method and apparatus, and storage medium
WO2017088470A1 (zh) Image classification method and apparatus
WO2016107030A1 (zh) Notification information display method and apparatus
WO2016023340A1 (zh) Method and apparatus for switching cameras
WO2016112699A1 (zh) Method and apparatus for switching display modes
WO2017071050A1 (zh) Anti-mistouch method and apparatus for a terminal with a touch screen
WO2017035994A1 (zh) Method and apparatus for connecting an external device
US9924090B2 (en) Method and device for acquiring iris image
WO2016023339A1 (zh) Method and apparatus for delayed photographing
WO2016127671A1 (zh) Image filter generation method and apparatus
CN107944367B (zh) Face key point detection method and apparatus
US10769743B2 (en) Method, device and non-transitory storage medium for processing clothes information
CN105631803B (zh) Filter processing method and apparatus
US10313537B2 (en) Method, apparatus and medium for sharing photo
CN105516586A (zh) Picture shooting method, apparatus and system
WO2016110146A1 (zh) Mobile terminal and virtual key processing method
WO2018098860A1 (zh) Photo synthesis method and apparatus
CN105095868A (zh) Picture matching method and apparatus
WO2016015404A1 (zh) Call forwarding method, apparatus and terminal
US10846513B2 (en) Method, device and storage medium for processing picture

Legal Events

Date Code Title Description
ENP Entry into the national phase: Ref document number: 2016566784; Country of ref document: JP; Kind code of ref document: A
ENP Entry into the national phase: Ref document number: 2017102520; Country of ref document: RU; Kind code of ref document: A
121 Ep: the epo has been informed by wipo that ep was designated in this application: Ref document number: 15909177; Country of ref document: EP; Kind code of ref document: A1
WWE Wipo information: entry into national phase: Ref document number: MX/A/2017/012839; Country of ref document: MX
NENP Non-entry into the national phase: Ref country code: DE
122 Ep: pct application non-entry in european phase: Ref document number: 15909177; Country of ref document: EP; Kind code of ref document: A1