WO2017088266A1 - Picture processing method and apparatus - Google Patents
Picture processing method and apparatus
- Publication number
- WO2017088266A1 (PCT/CN2015/099701; priority application CN2015099701W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- face
- target
- picture
- determining
- preset
- Prior art date
Classifications
- G—PHYSICS; G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/23—Clustering techniques
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T1/00—General purpose image data processing
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/30—Scenes; Scene-specific elements in albums, collections or shared content, e.g. social network photos or video
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
- G06V40/168—Feature extraction; Face representation
- G06V40/172—Classification, e.g. identification
- G06V40/173—Face re-identification, e.g. recognising unknown faces across different face tracks
Definitions
- the present disclosure relates to the field of image processing technologies, and in particular, to a picture processing method and apparatus.
- the photo album program is one of the most commonly used programs on mobile terminals such as smartphones and tablets.
- the photo album program is used to manage and display pictures in the mobile terminal.
- the album program in the terminal can cluster the faces in the picture, and classify the same or similar faces as one album, thereby forming a face album.
- An embodiment of the present disclosure provides a picture processing method and apparatus, including the following technical solutions:
- a picture processing method including:
- the facial feature information includes at least one of the following:
- the position of the face in the picture, the tilt angle of the face in the picture, the depth information of the face in the picture, the proportion of the area occupied by the face in the picture, and the number of times the face appears in all current pictures.
- When the face feature information includes the position of the face in the picture,
- Determining each face as a target face or a non-target face according to the face feature information includes:
- determining a target shooting area according to the position of each face in the picture and the distribution of the faces; determining a face within the target shooting area as a target face, and a face outside the target shooting area as a non-target face.
- When the face feature information includes the position of the face in the picture or the depth information of the face in the picture, and there are at least two faces,
- Determining each face as a target face or a non-target face according to the face feature information includes:
- when the distance between another face and the target face is less than a preset distance, or the difference between their depth information is less than a preset gap, determining the other face as a target face;
- when the distance is greater than or equal to the preset distance, or the difference is greater than or equal to the preset gap, determining the other face as a non-target face.
- When the face feature information includes the tilt angle of the face in the picture,
- Determining each face as a target face or a non-target face according to the face feature information includes:
- determining a face whose tilt angle is less than a preset angle as a target face, and a face whose tilt angle is greater than or equal to the preset angle as a non-target face.
- When the face feature information includes the proportion of the area occupied by the face in the picture,
- Determining each face as a target face or a non-target face according to the face feature information includes:
- determining a face whose proportion is greater than a preset ratio as a target face, and a face whose proportion is less than or equal to the preset ratio as a non-target face.
- When the face feature information includes the number of times the face appears in all current pictures,
- Determining each face as a target face or a non-target face according to the face feature information includes:
- determining a face whose number of appearances is greater than a preset number as a target face, and a face whose number of appearances is less than or equal to the preset number as a non-target face.
- the method further includes:
- a picture processing apparatus, including:
- a detecting module configured to detect a picture and detect at least two faces included in the picture
- An acquiring module configured to acquire facial feature information of each face detected by the detecting module in the picture
- a determining module configured to determine each face as a target face or a non-target face according to the face feature information acquired by the acquiring module
- a removing module configured to perform preset removal processing on the non-target face determined by the determining module.
- the facial feature information includes at least one of the following:
- the position of the face in the picture, the tilt angle of the face in the picture, the depth information of the face in the picture, the proportion of the area occupied by the face in the picture, and the number of times the face appears in all current pictures.
- When the face feature information includes the position of the face in the picture, the determining module comprises:
- a first area determining sub-module configured to determine a target shooting area according to a location of each face in the picture and a distribution of the face
- a first determining submodule configured to determine a face in the target shooting area determined by the first area determining submodule as a target face, and to determine a face outside the target shooting area as a non-target face.
- When the face feature information includes the position of the face in the picture or the depth information of the face in the picture, and there are at least two faces, the determining module includes:
- a second area determining sub-module configured to determine a target shooting area according to a location of each face in the picture and a distribution of the face
- a calculation submodule configured to determine a face in the target shooting area as the target face, and to calculate the distance between each other face in the picture and the target face, or the difference between the depth information of each other face in the picture and the depth information of the target face;
- a second determining submodule configured to determine the other face as a target face when the distance is less than the preset distance or the difference is less than the preset gap;
- a third determining submodule configured to determine the other face as a non-target face when the distance is greater than or equal to the preset distance or the difference is greater than or equal to the preset gap.
- When the facial feature information includes the tilt angle of the face in the picture, the determining module includes:
- a fourth determining submodule configured to determine the face whose tilt angle is less than the preset angle as the target face
- a fifth determining submodule configured to determine the face whose tilt angle is greater than or equal to the preset angle as a non-target face.
- When the facial feature information includes the proportion of the area occupied by the face in the picture, the determining module includes:
- a sixth determining submodule configured to determine the face whose ratio is greater than a preset ratio as the target face
- the seventh determining submodule is configured to determine the face whose ratio is less than or equal to the preset ratio as a non-target face.
- When the facial feature information includes the number of times the face appears in all current pictures, the determining module includes:
- An eighth determining submodule configured to determine the face whose number of times is greater than the preset number of times as the target face
- the ninth determining submodule is configured to determine the face whose number of times is less than or equal to the preset number of times as a non-target face.
- the apparatus further includes:
- the clustering processing module is configured to perform face clustering on the target face to obtain a face album corresponding to the target face, wherein each face album corresponds to a person's face.
- The faces in the picture are identified, each face is determined as a target face or a non-target face, and preset removal processing is performed on the non-target faces before face clustering; when face albums are obtained by clustering, non-target faces therefore do not appear in them, preventing people unrelated to the user from being placed in the face albums and improving the user experience.
- FIG. 1 is a flowchart of a picture processing method according to an exemplary embodiment.
- FIG. 2 is a flowchart of step S103 in a picture processing method according to an exemplary embodiment.
- FIG. 3 is a flowchart of step S103 in another picture processing method according to an exemplary embodiment.
- FIG. 4 is a flowchart of step S103 in still another picture processing method according to an exemplary embodiment.
- FIG. 5 is a flowchart of step S103 in still another picture processing method according to an exemplary embodiment.
- FIG. 6 is a flowchart of step S103 in still another picture processing method according to an exemplary embodiment.
- FIG. 7 is a flowchart of another picture processing method according to an exemplary embodiment.
- FIG. 8 is a block diagram of a picture processing apparatus according to an exemplary embodiment.
- FIG. 9 is a block diagram of a determination module in a picture processing apparatus according to an exemplary embodiment.
- FIG. 10 is a block diagram of a determination module in another picture processing apparatus according to an exemplary embodiment.
- FIG. 11 is a block diagram of a determination module in still another picture processing apparatus according to an exemplary embodiment.
- FIG. 12 is a block diagram of a determination module in still another picture processing apparatus according to an exemplary embodiment.
- FIG. 13 is a block diagram of a determination module in still another picture processing apparatus according to an exemplary embodiment.
- FIG. 14 is a block diagram of another picture processing apparatus according to an exemplary embodiment.
- FIG. 15 is a block diagram of an apparatus suitable for picture processing, according to an exemplary embodiment.
- the embodiment of the present disclosure provides a picture processing method, which can be used in a terminal device. As shown in FIG. 1, the method includes steps S101-S104:
- In step S101, the picture is detected, and at least one face included in the picture is identified;
- In step S102, face feature information of each face in the picture is acquired;
- In step S103, each face is determined as a target face or a non-target face according to the face feature information;
- In step S104, preset removal processing is performed on the non-target faces.
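The four steps above can be sketched in code. This is a minimal illustration, not the patented implementation: the `Face` record, the thresholds, and the helper names are all invented for the example, and a real system would obtain these features from a face detector.

```python
from dataclasses import dataclass

@dataclass
class Face:
    """Hypothetical feature record for one detected face (step S102)."""
    x: float           # position in the picture (normalized 0..1)
    y: float
    tilt_deg: float    # tilt angle of the face in the picture
    area_ratio: float  # proportion of the picture area the face occupies
    occurrences: int   # appearances across all current pictures

def is_target(face: Face, max_tilt: float = 30.0,
              min_ratio: float = 0.02, min_count: int = 1) -> bool:
    """Step S103: classify a face as target / non-target from its features.
    All three thresholds are illustrative placeholders."""
    return (face.tilt_deg < max_tilt
            and face.area_ratio > min_ratio
            and face.occurrences > min_count)

def process_picture(faces: list[Face]) -> list[Face]:
    """Steps S101-S104 combined: keep target faces, remove non-target ones."""
    return [f for f in faces if is_target(f)]
```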
- The faces in the picture are identified, each face is determined as a target face or a non-target face, and preset removal processing is performed on the non-target faces; when face albums are then obtained by face clustering, non-target faces do not appear in them, which prevents people unrelated to the user from being placed in the face albums and improves the user experience.
- the photographed photo may include a face of a passerby that the user does not want to photograph, that is, a non-target face, in addition to the target face that the user wants to photograph.
- In this embodiment, the target faces and non-target faces in the picture are determined, and preset removal processing is performed on the non-target faces, so that the face albums obtained by clustering do not include passersby whom the user did not intend to photograph, thereby improving the user experience.
- the facial feature information includes at least one of the following:
- The face feature information may be the position of the face in the picture, the tilt angle of the face in the picture, the depth information of the face in the picture, the proportion of the area occupied by the face in the picture, the number of times the face appears in all current pictures, and so on.
- the target face and the non-target face are determined according to one piece of information or a plurality of pieces of information, so that the determination result has high accuracy.
- When the face feature information includes the position of the face in the picture, the foregoing step S103 may include steps S201-S202:
- In step S201, the target shooting area is determined according to the position of each face in the picture and the distribution of the faces;
- In step S202, a face within the target shooting area is determined as a target face, and a face outside the target shooting area is determined as a non-target face.
- The target shooting area may be determined according to the positions of the faces in the picture and their distribution. For example, a certain area around the center of the picture may be determined as the target shooting area; faces within the target shooting area can then be determined as target faces, and faces outside it as non-target faces.
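A simple way to realise the "central area" heuristic is an axis-aligned margin test. The `margin` value below is an illustrative assumption, not a figure from the disclosure:

```python
def in_center_region(x: float, y: float, width: int, height: int,
                     margin: float = 0.25) -> bool:
    """A face at pixel (x, y) is 'in the target shooting area' if it lies
    inside the central rectangle, i.e. away from each picture border by
    `margin` of the corresponding dimension (an assumed threshold)."""
    return (margin * width <= x <= (1 - margin) * width and
            margin * height <= y <= (1 - margin) * height)
```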
- When the face feature information includes the position of the face in the picture or the depth information of the face in the picture, and there are at least two faces, the above step S103 includes steps S301-S304:
- In step S301, the target shooting area is determined according to the position of each face in the picture and the distribution of the faces;
- In step S302, the faces in the target shooting area are determined as target faces, and the distance between each remaining face in the picture and the target faces, or the difference between the depth information of each remaining face and that of the target faces, is calculated;
- In step S303, when the distance is less than the preset distance or the depth difference is less than the preset gap, the remaining face is determined as a target face;
- In step S304, when the distance is greater than or equal to the preset distance or the depth difference is greater than or equal to the preset gap, the remaining face is determined as a non-target face.
- The target shooting area may be determined according to the position of each face in the picture and the distribution of the faces. For example, if the target shooting area is the central area of the picture, a face A in the central area is determined as a target face, and the distance between another face B in the picture and the target face is calculated. If the distance is smaller than the preset distance, face B is also determined as a target face, and the target face set becomes [A, B]. If there is still a face C in the picture, the distance between face C and the target face set [A, B] is calculated in turn; if the distance from face C to any face in the set [A, B] is less than the preset distance, face C is also determined as a target face, and so on, until all faces in the picture are divided into target faces and non-target faces.
- Alternatively, with the target shooting area again taken as the central area of the picture, face A in the central area can be determined as a target face, and the difference between the depth information of another face B and that of face A can be calculated. If the difference is smaller than the preset gap, face B is also determined as a target face; otherwise, it is determined as a non-target face. This improves the accuracy of face determination.
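The iterative set-growing described for faces A, B, and C can be sketched as follows, treating each face as a 2-D point; `max_dist` plays the role of the preset distance, and the seed indices stand for the faces found inside the target shooting area:

```python
import math

def expand_target_set(faces: list[tuple[float, float]],
                      seed_indices: set[int],
                      max_dist: float) -> set[int]:
    """Grow the target-face set (steps S302-S304 sketch): a face joins when
    its distance to some face already in the set is below the preset
    distance; repeat until no further face can be added."""
    targets = set(seed_indices)
    changed = True
    while changed:
        changed = False
        for i in range(len(faces)):
            if i in targets:
                continue
            # join if close to any face already accepted as a target
            if any(math.dist(faces[i], faces[j]) < max_dist for j in targets):
                targets.add(i)
                changed = True
    return targets
```

Faces left outside the returned set would be the non-target faces. The same loop works for depth values by replacing the Euclidean distance with an absolute depth difference.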
- When the face feature information includes the tilt angle of the face in the picture, the foregoing step S103 may further include steps S401-S402:
- In step S401, a face whose tilt angle is smaller than the preset angle is determined as a target face;
- In step S402, a face whose tilt angle is greater than or equal to the preset angle is determined as a non-target face.
- Faces in the picture can be determined according to their tilt angles: when the tilt angle is less than the preset angle, the face is determined as a target face; when the tilt angle is greater than or equal to the preset angle, it is determined as a non-target face. That is, the orientation of each face in the picture is determined.
- A face feature point localization algorithm can be used to locate the feature points of each face and thereby determine its orientation. A face facing the lens, that is, a face oriented forward, can be determined as a target face; if a face deviates from the lens by more than a certain angle, it is determined as a non-target face.
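Steps S401-S402 amount to thresholding on the tilt angle. The preset angle of 30° below is an arbitrary placeholder, not a value taken from the disclosure:

```python
def classify_by_tilt(tilt_angles: list[float],
                     preset_angle: float = 30.0) -> list[str]:
    """Steps S401-S402: tilt below the preset angle -> target face,
    at or above it -> non-target face."""
    return ["target" if a < preset_angle else "non-target"
            for a in tilt_angles]
```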
- When the face feature information includes the proportion of the area occupied by the face in the picture, the foregoing step S103 may further include steps S501-S502:
- In step S501, a face whose proportion is greater than the preset ratio is determined as a target face;
- In step S502, a face whose proportion is less than or equal to the preset ratio is determined as a non-target face.
- A face can also be determined according to the proportion of the area it occupies in the picture. If a face occupies a large proportion of the picture, it is likely the main subject and can be determined as a target face; if it occupies only a small proportion, it is likely not the main subject but a passerby captured accidentally, and can be determined as a non-target face.
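A sketch of the area-ratio rule of steps S501-S502; the bounding-box representation of a face and the 5% preset ratio are assumptions made for this illustration:

```python
def classify_by_area(face_box: tuple[int, int],
                     picture_size: tuple[int, int],
                     preset_ratio: float = 0.05) -> str:
    """Steps S501-S502: a face whose (assumed) bounding box covers more than
    preset_ratio of the picture area is a target face."""
    w, h = face_box
    W, H = picture_size
    ratio = (w * h) / (W * H)
    return "target" if ratio > preset_ratio else "non-target"
```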
- When the face feature information includes the number of times the face appears in all current pictures, step S103 may further include steps S601-S602:
- In step S601, a face whose number of appearances is greater than the preset number is determined as a target face;
- In step S602, a face whose number of appearances is less than or equal to the preset number is determined as a non-target face.
- If a face appears many times in all current pictures, it is likely a target face; if it appears only rarely, for example once, it is likely a passerby captured accidentally. In this way, all faces in the picture can be accurately divided into target faces and non-target faces.
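Counting appearances across all current pictures (steps S601-S602) might look like this, assuming some upstream step has already assigned an identity label to each detected face; the label names and the preset count of 1 are invented for the example:

```python
from collections import Counter

def classify_by_occurrences(face_ids: list[str],
                            preset_count: int = 1) -> dict[str, str]:
    """Steps S601-S602: count how often each face identity appears across
    all current pictures; more than preset_count -> target face."""
    counts = Counter(face_ids)
    return {fid: ("target" if n > preset_count else "non-target")
            for fid, n in counts.items()}
```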
- A face may also be determined as a target face or a non-target face by combining any two or more of the above kinds of face feature information. For example, when the face feature information includes both the position of the face in the picture and the tilt angle of the face in the picture, the two corresponding determination methods may be superimposed: the target shooting area is determined according to the position of each face in the picture and the distribution of the faces, faces within the target shooting area are determined as target faces, and the tilt angle is examined for faces outside the target shooting area;
- a face whose tilt angle is greater than or equal to the preset angle is determined as a non-target face, and a face whose tilt angle is smaller than the preset angle is determined as a target face.
- Another superposition method may also be adopted: the target shooting area is determined according to the position of each face in the picture and the distribution of the faces; within the target area, a face whose tilt angle is smaller than the preset angle is determined as a target face and a face whose tilt angle is greater than or equal to the preset angle as a non-target face; faces outside the target area are determined as non-target faces. This increases the accuracy of the face division.
- Combinations of other two or more kinds of facial feature information can be used by analogy with the above superposition methods.
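The second superposition example (position first, then tilt angle within the target area) reduces to a two-stage rule; the boolean `in_target_area` is assumed to come from a target-area test such as the central-rectangle check described earlier:

```python
def classify_combined(in_target_area: bool, tilt_deg: float,
                      preset_angle: float = 30.0) -> str:
    """Superposition sketch: a face outside the target shooting area is
    non-target regardless of tilt; inside the area, the tilt-angle rule
    decides. preset_angle is an illustrative threshold."""
    if not in_target_area:
        return "non-target"
    return "target" if tilt_deg < preset_angle else "non-target"
```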
- the foregoing method further includes step S701:
- step S701 face clustering is performed on the target face to obtain a face album corresponding to the target face, wherein each face album corresponds to a person's face.
- Face clustering may be performed on the target faces to obtain a face album corresponding to each target face, where each face album corresponds to one person's face, which is convenient for the user to view.
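Step S701 does not prescribe a particular clustering algorithm; as one illustration, a greedy threshold-based grouping over hypothetical face embeddings could form the albums. The embedding representation and the 0.6 threshold are assumptions for this sketch:

```python
import math

def cluster_faces(embeddings: list[tuple[float, float]],
                  threshold: float = 0.6) -> list[list[int]]:
    """Greedy face clustering (step S701 sketch): each embedding joins the
    first cluster whose representative is within `threshold` (Euclidean
    distance), otherwise it starts a new cluster; each cluster would then
    become one face album."""
    clusters: list[tuple[tuple[float, float], list[int]]] = []
    for i, e in enumerate(embeddings):
        for rep, members in clusters:
            if math.dist(rep, e) < threshold:
                members.append(i)
                break
        else:
            clusters.append((e, [i]))
    return [members for _, members in clusters]
```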
- FIG. 8 is a block diagram of a picture processing apparatus, which may be implemented as part or all of a terminal device by software, hardware, or a combination of both, according to an exemplary embodiment. As shown in FIG. 8, the image processing apparatus includes:
- the detecting module 81 is configured to detect the picture and detect at least two faces included in the picture;
- the obtaining module 82 is configured to acquire facial feature information of each face detected by the detecting module 81 in the picture;
- the determining module 83 is configured to determine the each face as a target face or a non-target face according to the face feature information acquired by the obtaining module 82;
- the removal module 84 is configured to perform a preset removal process on the non-target face determined by the determination module 83.
- The faces in the picture are identified, each face is determined as a target face or a non-target face, and preset removal processing is performed on the non-target faces; when face albums are then obtained by face clustering, non-target faces do not appear in them, which prevents people unrelated to the user from being placed in the face albums and improves the user experience.
- the photographed photo may include a face of a passerby that the user does not want to photograph, that is, a non-target face, in addition to the target face that the user wants to photograph.
- In this embodiment, the target faces and non-target faces in the picture are determined, and preset removal processing is performed on the non-target faces, so that the face albums obtained by clustering do not include passersby whom the user did not intend to photograph, thereby improving the user experience.
- the facial feature information includes at least one of the following:
- the position of the face in the picture, the tilt angle of the face in the picture, the depth information of the face in the picture, the proportion of the area occupied by the face in the picture, and the number of times the face appears in all current pictures.
- The face feature information may be the position of the face in the picture, the tilt angle of the face in the picture, the depth information of the face in the picture, the proportion of the area occupied by the face in the picture, the number of times the face appears in all current pictures, and so on.
- the target face and the non-target face are determined according to one piece of information or a plurality of pieces of information, so that the determination result has high accuracy.
- the determining module 83 includes:
- the first area determining sub-module 91 is configured to determine the target photographing area according to the position of each face in the picture and the distribution of the face;
- the first determining sub-module 92 is configured to determine a face in the target shooting area determined by the first area determining sub-module 91 as a target face, and to determine a face outside the target shooting area as a non-target face.
- The target shooting area may be determined according to the positions of the faces in the picture and their distribution. For example, a certain area around the center of the picture may be determined as the target shooting area; faces within the target shooting area can then be determined as target faces, and faces outside it as non-target faces.
- the determining module 83 includes:
- the second area determining sub-module 101 is configured to determine a target photographing area according to a position of each face in the picture and a face distribution situation;
- the calculation sub-module 102 is configured to determine a face in the target shooting area as the target face, and to calculate the distance between each other face in the picture and the target face, or the difference between the depth information of each other face in the picture and the depth information of the target face;
- the second determining sub-module 103 is configured to determine the other face as a target face when the distance is less than the preset distance or the difference is less than the preset gap;
- the third determining sub-module 104 is configured to determine the other face as a non-target face when the distance is greater than or equal to the preset distance or the difference is greater than or equal to the preset gap.
- The target shooting area may be determined according to the position of each face in the picture and the distribution of the faces. For example, if the target shooting area is the central area of the picture, a face A in the central area is determined as a target face, and the distance between another face B in the picture and the target face is calculated. If the distance is smaller than the preset distance, face B is also determined as a target face, and the target face set becomes [A, B]. If there is still a face C in the picture, the distance between face C and the target face set [A, B] is calculated in turn; if the distance from face C to any face in the set [A, B] is less than the preset distance, face C is also determined as a target face, and so on, until all faces in the picture are divided into target faces and non-target faces.
- Alternatively, with the target shooting area again taken as the central area of the picture, face A in the central area can be determined as a target face, and the difference between the depth information of another face B and that of face A can be calculated. If the difference is smaller than the preset gap, face B is also determined as a target face; otherwise, it is determined as a non-target face. This improves the accuracy of face determination.
- When the facial feature information includes the tilt angle of the face in the picture, the determining module 83 includes:
- the fourth determining sub-module 111 is configured to determine the face whose tilt angle is less than the preset angle as the target human face;
- the fifth determining sub-module 112 is configured to determine the face whose tilt angle is greater than or equal to the preset angle as a non-target face.
- Faces in the picture can be determined according to their tilt angles: when the tilt angle is less than the preset angle, the face is determined as a target face; when the tilt angle is greater than or equal to the preset angle, it is determined as a non-target face. That is, the orientation of each face in the picture is determined.
- A face feature point localization algorithm can be used to locate the feature points of each face and thereby determine its orientation. A face facing the lens, that is, a face oriented forward, can be determined as a target face; if a face deviates from the lens by more than a certain angle, it is determined as a non-target face.
- When the facial feature information includes the proportion of the area occupied by the face in the picture, the determining module 83 includes:
- a sixth determining sub-module 121 configured to determine, as the target human face, the face whose ratio is greater than a preset ratio
- the seventh determining sub-module 122 is configured to determine the face whose ratio is less than or equal to the preset ratio as a non-target face.
- A face can also be determined according to the proportion of the area it occupies in the picture. If a face occupies a large proportion of the picture, it is likely the main subject and can be determined as a target face; if it occupies only a small proportion, it is likely not the main subject but a passerby captured accidentally, and can be determined as a non-target face.
- When the facial feature information includes the number of times the face appears in all current pictures, the determining module 83 includes:
- the eighth determining sub-module 131 is configured to determine the face whose number of times is greater than the preset number of times as the target face;
- the ninth determining sub-module 132 is configured to determine the face whose number of times is less than or equal to the preset number of times as a non-target face.
- if a face appears many times in all the current pictures, it is likely the target face; if it appears only rarely, for example just once, it is likely a passer-by captured by accident. In this way, all the faces in the pictures are accurately classified as target faces or non-target faces.
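This occurrence-count rule can be sketched as follows. Identity matching across pictures (face recognition) is assumed to have been done already, and the preset count of 1 is an example value:

```python
from collections import Counter

def classify_by_occurrences(album_faces, preset_count=1):
    """Classify face identities by how often they appear across pictures.

    `album_faces` is a list of per-picture face-identity lists; matching
    faces across pictures is assumed to be handled elsewhere.
    """
    # Count each identity at most once per picture.
    counts = Counter(face for picture in album_faces for face in set(picture))
    targets = {f for f, n in counts.items() if n > preset_count}
    non_targets = set(counts) - targets
    return targets, non_targets
```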
- the foregoing apparatus further includes:
- the clustering processing module 141 is configured to perform face clustering on the target face to obtain a face album corresponding to the target face, wherein each face album corresponds to a person's face.
- the target faces may be clustered to obtain a face album corresponding to each target face; each face album corresponds to the face of one person, which makes it easy for users to view.
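As an illustrative sketch of such clustering, the following groups target faces into albums using a pluggable `same_person` predicate that stands in for a real face-similarity model (the predicate and greedy strategy are assumptions, not the claimed clustering method):

```python
def build_face_albums(target_faces, same_person):
    """Group target faces into albums, one album per person.

    `same_person(a, b)` is assumed to return True when two faces belong
    to the same person, e.g. via a face-recognition similarity score.
    """
    albums = []  # each album is a list of faces of one person
    for face in target_faces:
        for album in albums:
            # Compare against the album's first face as its representative.
            if same_person(album[0], face):
                album.append(face)
                break
        else:
            # No existing album matched: start a new person's album.
            albums.append([face])
    return albums
```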
- a picture processing apparatus including:
- a processor; and a memory for storing processor-executable instructions;
- wherein the processor is configured to:
- the above processor can also be configured to:
- the face feature information includes at least one of the following:
- the position of the face in the picture, the tilt angle of the face in the picture, the depth information of the face in the picture, the proportion of the region occupied by the face in the picture, and the number of times the face appears in all current pictures.
- the above processor can also be configured to:
- Determining each face as a target face or a non-target face according to the face feature information includes:
- a face in the target shooting area is determined as the target face, and a face outside the target shooting area is determined as a non-target face.
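By way of example only, the target-shooting-area rule above can be sketched with a rectangular region; how the region itself is derived from the face distribution is left abstract here, and the names and coordinates are illustrative assumptions:

```python
def classify_by_region(face_centers, target_region):
    """Classify faces by whether their centers fall inside the target
    shooting area, given as an (x0, y0, x1, y1) rectangle."""
    x0, y0, x1, y1 = target_region
    targets, non_targets = [], []
    for face_id, (x, y) in face_centers.items():
        # Inside the target shooting area -> target face.
        if x0 <= x <= x1 and y0 <= y <= y1:
            targets.append(face_id)
        else:
            non_targets.append(face_id)
    return targets, non_targets
```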
- the above processor can also be configured to:
- the face feature information includes the position of the face in the picture or the depth information of the face in the picture, and there are at least two faces
- Determining each face as a target face or a non-target face according to the face feature information includes:
- the other faces are determined as the target face
- the other faces are determined as non-target faces.
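The distance rule referred to above (faces close to an already-determined target face are also targets, while distant ones are not) can be sketched as follows; positions and the preset distance are illustrative assumptions, and the depth-difference variant would compare depths instead of 2D distances:

```python
import math

def classify_by_distance(target_face, other_faces, preset_distance=100.0):
    """Classify remaining faces by distance to an already-chosen target.

    `target_face` and the values of `other_faces` are (x, y) positions.
    """
    tx, ty = target_face
    targets, non_targets = [], []
    for face_id, (x, y) in other_faces.items():
        distance = math.hypot(x - tx, y - ty)
        # Closer than the preset distance -> also a target face.
        if distance < preset_distance:
            targets.append(face_id)
        else:
            non_targets.append(face_id)
    return targets, non_targets
```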
- the above processor can also be configured to:
- the face feature information includes a tilt angle of a face in the picture
- Determining each face as a target face or a non-target face according to the face feature information includes:
- the face whose tilt angle is greater than or equal to the preset angle is determined as a non-target face.
- the above processor can also be configured to:
- the face feature information includes the proportion of the region occupied by the face in the picture
- Determining each face as a target face or a non-target face according to the face feature information includes:
- the face whose ratio is less than or equal to the preset ratio is determined as a non-target face.
- the above processor can also be configured to:
- the face feature information includes the number of times the face appears in all current pictures
- Determining each face as a target face or a non-target face according to the face feature information includes:
- the face whose number of times is less than or equal to the preset number of times is determined as a non-target face.
- the above processor can also be configured to:
- the method further includes:
- FIG. 15 is a block diagram of a picture processing apparatus, which is applicable to a terminal device, according to an exemplary embodiment.
- device 1500 can be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a gaming console, a tablet device, a medical device, a fitness device, a personal digital assistant, and the like.
- Apparatus 1500 can include one or more of the following components: processing component 1502, memory 1504, power component 1506, multimedia component 1508, audio component 1510, input/output (I/O) interface 1515, sensor component 1514, and communication component 1516 .
- Processing component 1502 typically controls the overall operation of device 1500, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations.
- Processing component 1502 can include one or more processors 1520 to execute instructions to perform all or part of the steps of the above described methods.
- processing component 1502 can include one or more modules to facilitate interaction between processing component 1502 and other components.
- processing component 1502 can include a multimedia module to facilitate interaction between multimedia component 1508 and processing component 1502.
- Memory 1504 is configured to store various types of data to support operation at device 1500. Examples of such data include instructions for any application or method operating on device 1500, contact data, phone book data, messages, pictures, videos, and the like.
- the memory 1504 can be implemented by any type of volatile or non-volatile storage device, or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk, or optical disk.
- Power component 1506 provides power to various components of device 1500.
- Power component 1506 can include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for device 1500.
- Multimedia component 1508 includes a screen that provides an output interface between the device 1500 and the user.
- the screen can include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen can be implemented as a touch screen to receive input signals from the user.
- the touch panel includes one or more touch sensors to sense touches, slides, and gestures on the touch panel. The touch sensor may sense not only the boundary of the touch or sliding action, but also the duration and pressure associated with the touch or slide operation.
- the multimedia component 1508 includes a front camera and/or a rear camera. When the device 1500 is in an operation mode, such as a shooting mode or a video mode, the front camera and/or the rear camera can receive external multimedia data. Each front and rear camera can be a fixed optical lens system or have focal length and optical zoom capabilities.
- the audio component 1510 is configured to output and/or input an audio signal.
- the audio component 1510 includes a microphone (MIC) that is configured to receive an external audio signal when the device 1500 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode.
- the received audio signal may be further stored in memory 1504 or transmitted via communication component 1516.
- audio component 1510 also includes a speaker for outputting an audio signal.
- the I/O interface 1515 provides an interface between the processing component 1502 and the peripheral interface module, which may be a keyboard, a click wheel, a button, or the like. These buttons may include, but are not limited to, a home button, a volume button, a start button, and a lock button.
- Sensor assembly 1514 includes one or more sensors for providing device 1500 with a status assessment of various aspects.
- sensor assembly 1514 can detect the open/closed state of device 1500, the relative positioning of components (for example, the display and keypad of device 1500), a change in position of device 1500 or a component of device 1500, the presence or absence of user contact with device 1500, the orientation or acceleration/deceleration of device 1500, and a change in the temperature of device 1500.
- Sensor assembly 1514 can include a proximity sensor configured to detect the presence of nearby objects without any physical contact.
- Sensor assembly 1514 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications.
- the sensor component 1514 can also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
- Communication component 1516 is configured to facilitate wired or wireless communication between device 1500 and other devices.
- the device 1500 can access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof.
- communication component 1516 receives broadcast signals or broadcast associated information from an external broadcast management system via a broadcast channel.
- the communication component 1516 also includes a near field communication (NFC) module to facilitate short range communication.
- the NFC module can be implemented based on radio frequency identification (RFID) technology, infrared data association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
- device 1500 may be implemented by one or more application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, for performing the above methods.
- non-transitory computer readable storage medium comprising instructions, such as a memory 1504 comprising instructions executable by processor 1520 of apparatus 1500 to perform the above method.
- the non-transitory computer readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
- a non-transitory computer readable storage medium, wherein instructions in the storage medium, when executed by a processor of the apparatus 1500, enable the apparatus 1500 to perform the picture processing method described above, the method comprising:
- the facial feature information includes at least one of the following:
- the position of the face in the picture, the tilt angle of the face in the picture, the depth information of the face in the picture, the proportion of the region occupied by the face in the picture, and the number of times the face appears in all current pictures.
- when the face feature information includes the position of the face in the picture,
- Determining each face as a target face or a non-target face according to the face feature information includes:
- a face in the target shooting area is determined as the target face, and a face outside the target shooting area is determined as a non-target face.
- the face feature information includes the position of the face in the picture or the depth information of the face in the picture, and there are at least two faces
- Determining each face as a target face or a non-target face according to the face feature information includes:
- the other faces are determined as the target face
- the other faces are determined as non-target faces.
- when the face feature information includes the tilt angle of a face in the picture,
- Determining each face as a target face or a non-target face according to the face feature information includes:
- the face whose tilt angle is greater than or equal to the preset angle is determined as a non-target face.
- when the face feature information includes the proportion of the region occupied by a face in the picture,
- Determining each face as a target face or a non-target face according to the face feature information includes:
- the face whose ratio is less than or equal to the preset ratio is determined as a non-target face.
- when the face feature information includes the number of times a face appears in all current pictures,
- Determining each face as a target face or a non-target face according to the face feature information includes:
- the face whose number of times is less than or equal to the preset number of times is determined as a non-target face.
- the method further includes:
Claims (17)
- A picture processing method, characterized by comprising: detecting a picture to detect at least one face contained in the picture; acquiring face feature information of each face in the picture; determining each face as a target face or a non-target face according to the face feature information; and performing preset removal processing on the non-target face.
- The method according to claim 1, characterized in that the face feature information comprises at least one of the following: the position of the face in the picture, the tilt angle of the face in the picture, the depth information of the face in the picture, the proportion of the region occupied by the face in the picture, and the number of times the face appears in all current pictures.
- The method according to claim 2, characterized in that determining each face as a target face or a non-target face according to the face feature information comprises: determining a target shooting area according to the position of each face in the picture and the face distribution; and determining a face in the target shooting area as the target face, and determining a face outside the target shooting area as a non-target face.
- The method according to claim 2, wherein when the face feature information comprises the position of the face in the picture or the depth information of the face in the picture, and there are at least two faces, determining each face as a target face or a non-target face according to the face feature information comprises: determining a target shooting area according to the position of each face in the picture and the face distribution; determining a face in the target shooting area as the target face, and calculating the distance between each other face in the picture and the target face, or calculating the difference between the depth information of each other face in the picture and the depth information of the target face; determining the other face as a target face when the distance is less than a preset distance or the difference is less than a preset difference; and determining the other face as a non-target face when the distance is greater than or equal to the preset distance or the difference is greater than or equal to the preset difference.
- The method according to claim 2, characterized in that when the face feature information comprises the tilt angle of the face in the picture, determining each face as a target face or a non-target face according to the face feature information comprises: determining a face whose tilt angle is less than a preset angle as a target face; and determining a face whose tilt angle is greater than or equal to the preset angle as a non-target face.
- The method according to claim 2, characterized in that when the face feature information comprises the proportion of the region occupied by the face in the picture, determining each face as a target face or a non-target face according to the face feature information comprises: determining a face whose proportion is greater than a preset proportion as a target face; and determining a face whose proportion is less than or equal to the preset proportion as a non-target face.
- The method according to claim 2, characterized in that when the face feature information comprises the number of times the face appears in all current pictures, determining each face as a target face or a non-target face according to the face feature information comprises: determining a face whose number of times is greater than a preset number as a target face; and determining a face whose number of times is less than or equal to the preset number as a non-target face.
- The method according to any one of claims 1 to 7, characterized by further comprising: performing face clustering on the target faces to obtain face albums corresponding to the target faces, wherein each face album corresponds to the face of one person.
- A picture processing apparatus, characterized by comprising: a detection module configured to detect a picture and detect at least one face contained in the picture; an acquisition module configured to acquire face feature information, in the picture, of each face detected by the detection module; a determining module configured to determine each face as a target face or a non-target face according to the face feature information acquired by the acquisition module; and a removal module configured to perform preset removal processing on the non-target face determined by the determining module.
- The apparatus according to claim 9, characterized in that the face feature information comprises at least one of the following: the position of the face in the picture, the tilt angle of the face in the picture, the depth information of the face in the picture, the proportion of the region occupied by the face in the picture, and the number of times the face appears in all current pictures.
- The apparatus according to claim 10, characterized in that the determining module comprises: a first region determining sub-module configured to determine a target shooting area according to the position of each face in the picture and the face distribution; and a first determining sub-module configured to determine a face in the target shooting area determined by the first region determining sub-module as the target face, and determine a face outside the target shooting area as a non-target face.
- The apparatus according to claim 10, characterized in that when the face feature information comprises the position of the face in the picture or the depth information of the face in the picture, and there are at least two faces, the determining module comprises: a second region determining sub-module configured to determine a target shooting area according to the position of each face in the picture and the face distribution; a calculation sub-module configured to determine a face in the target shooting area as the target face, and calculate the distance between each other face in the picture and the target face, or calculate the difference between the depth information of each other face in the picture and the depth information of the target face; a second determining sub-module configured to determine the other face as a target face when the distance is less than a preset distance or the difference is less than a preset difference; and a third determining sub-module configured to determine the other face as a non-target face when the distance is greater than or equal to the preset distance or the difference is greater than or equal to the preset difference.
- The apparatus according to claim 10, characterized in that when the face feature information comprises the tilt angle of the face in the picture, the determining module comprises: a fourth determining sub-module configured to determine a face whose tilt angle is less than a preset angle as a target face; and a fifth determining sub-module configured to determine a face whose tilt angle is greater than or equal to the preset angle as a non-target face.
- The apparatus according to claim 10, characterized in that when the face feature information comprises the proportion of the region occupied by the face in the picture, the determining module comprises: a sixth determining sub-module configured to determine a face whose proportion is greater than a preset proportion as a target face; and a seventh determining sub-module configured to determine a face whose proportion is less than or equal to the preset proportion as a non-target face.
- The apparatus according to claim 10, characterized in that when the face feature information comprises the number of times the face appears in all current pictures, the determining module comprises: an eighth determining sub-module configured to determine a face whose number of times is greater than a preset number as a target face; and a ninth determining sub-module configured to determine a face whose number of times is less than or equal to the preset number as a non-target face.
- The apparatus according to any one of claims 9 to 15, characterized by further comprising: a clustering processing module configured to perform face clustering on the target faces to obtain face albums corresponding to the target faces, wherein each face album corresponds to the face of one person.
- A picture processing apparatus, characterized by comprising: a processor; and a memory for storing processor-executable instructions; wherein the processor is configured to: detect a picture to detect at least two faces contained in the picture; acquire face feature information of each face in the picture; determine each face as a target face or a non-target face according to the face feature information; and perform preset removal processing on the non-target face.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2016566784A JP2018506755A (ja) | 2015-11-26 | 2015-12-30 | 画像処理方法、画像処理方法、コンピュータプログラム、及びコンピュータ読み取り可能な記憶媒体 |
MX2017012839A MX2017012839A (es) | 2015-11-26 | 2015-12-30 | Metodo y aparato de procesamiento de imagenes. |
RU2017102520A RU2665217C2 (ru) | 2015-11-26 | 2015-12-30 | Способ и устройство обработки изображений |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510847294.6 | 2015-11-26 | ||
CN201510847294.6A CN105260732A (zh) | 2015-11-26 | 2015-11-26 | 图片处理方法及装置 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2017088266A1 true WO2017088266A1 (zh) | 2017-06-01 |
Family
ID=55100413
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2015/099701 WO2017088266A1 (zh) | 2015-11-26 | 2015-12-30 | 图片处理方法及装置 |
Country Status (7)
Country | Link |
---|---|
US (1) | US20170154206A1 (zh) |
EP (1) | EP3173970A1 (zh) |
JP (1) | JP2018506755A (zh) |
CN (1) | CN105260732A (zh) |
MX (1) | MX2017012839A (zh) |
RU (1) | RU2665217C2 (zh) |
WO (1) | WO2017088266A1 (zh) |
Families Citing this family (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105827952B (zh) * | 2016-02-01 | 2019-05-17 | 维沃移动通信有限公司 | 一种去除指定对象的拍照方法及移动终端 |
CN107122356B (zh) * | 2016-02-24 | 2020-10-09 | 北京小米移动软件有限公司 | 显示人脸颜值的方法及装置、电子设备 |
CN105744165A (zh) * | 2016-02-25 | 2016-07-06 | 深圳天珑无线科技有限公司 | 拍照方法、装置及终端 |
CN106453853A (zh) * | 2016-09-22 | 2017-02-22 | 深圳市金立通信设备有限公司 | 一种拍照方法及终端 |
CN106791449B (zh) * | 2017-02-27 | 2020-02-11 | 努比亚技术有限公司 | 照片拍摄方法及装置 |
CN107578006B (zh) * | 2017-08-31 | 2020-06-23 | 维沃移动通信有限公司 | 一种照片处理方法及移动终端 |
CN108875522B (zh) * | 2017-12-21 | 2022-06-10 | 北京旷视科技有限公司 | 人脸聚类方法、装置和系统及存储介质 |
CN108182714B (zh) * | 2018-01-02 | 2023-09-15 | 腾讯科技(深圳)有限公司 | 图像处理方法及装置、存储介质 |
CN110348272B (zh) * | 2018-04-03 | 2024-08-20 | 北京京东尚科信息技术有限公司 | 动态人脸识别的方法、装置、系统和介质 |
CN109034106B (zh) * | 2018-08-15 | 2022-06-10 | 北京小米移动软件有限公司 | 人脸数据清洗方法及装置 |
CN109040588A (zh) * | 2018-08-16 | 2018-12-18 | Oppo广东移动通信有限公司 | 人脸图像的拍照方法、装置、存储介质及终端 |
CN109190539B (zh) * | 2018-08-24 | 2020-07-07 | 阿里巴巴集团控股有限公司 | 人脸识别方法及装置 |
CN109784157B (zh) * | 2018-12-11 | 2021-10-29 | 口碑(上海)信息技术有限公司 | 一种图像处理方法、装置及系统 |
CN110533773A (zh) * | 2019-09-02 | 2019-12-03 | 北京华捷艾米科技有限公司 | 一种三维人脸重建方法、装置及相关设备 |
CN111401315B (zh) * | 2020-04-10 | 2023-08-22 | 浙江大华技术股份有限公司 | 基于视频的人脸识别方法、识别装置及存储装置 |
CN114418865A (zh) * | 2020-10-28 | 2022-04-29 | 北京小米移动软件有限公司 | 图像处理方法、装置、设备及存储介质 |
CN115118866A (zh) * | 2021-03-22 | 2022-09-27 | 深圳市万普拉斯科技有限公司 | 一种图像拍摄方法、装置和智能终端 |
CN114399622A (zh) * | 2022-03-23 | 2022-04-26 | 荣耀终端有限公司 | 图像处理方法和相关装置 |
CN116541550B (zh) * | 2023-07-06 | 2024-07-02 | 广州方图科技有限公司 | 一种自助拍照设备照片分类方法、装置、电子设备及介质 |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2009048234A (ja) * | 2007-08-13 | 2009-03-05 | Takumi Vision株式会社 | 顔認識システム及び顔認識方法 |
CN104408426A (zh) * | 2014-11-27 | 2015-03-11 | 小米科技有限责任公司 | 人脸图像眼镜去除方法及装置 |
CN104484858A (zh) * | 2014-12-31 | 2015-04-01 | 小米科技有限责任公司 | 人物图像处理方法及装置 |
CN104601876A (zh) * | 2013-10-30 | 2015-05-06 | 纬创资通股份有限公司 | 路人侦测方法与装置 |
CN104794462A (zh) * | 2015-05-11 | 2015-07-22 | 北京锤子数码科技有限公司 | 一种人物图像处理方法及装置 |
CN104820675A (zh) * | 2015-04-08 | 2015-08-05 | 小米科技有限责任公司 | 相册显示方法及装置 |
Family Cites Families (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR100601997B1 (ko) * | 2004-10-12 | 2006-07-18 | 삼성전자주식회사 | 인물기반 디지털 사진 클러스터링 방법 및 장치와 이를이용한 인물기반 디지털 사진 앨버밍 방법 및 장치 |
JP4680161B2 (ja) * | 2006-09-28 | 2011-05-11 | 富士フイルム株式会社 | 画像評価装置および方法並びにプログラム |
US8031914B2 (en) * | 2006-10-11 | 2011-10-04 | Hewlett-Packard Development Company, L.P. | Face-based image clustering |
JP4671133B2 (ja) * | 2007-02-09 | 2011-04-13 | 富士フイルム株式会社 | 画像処理装置 |
JP4453721B2 (ja) * | 2007-06-13 | 2010-04-21 | ソニー株式会社 | 画像撮影装置及び画像撮影方法、並びにコンピュータ・プログラム |
KR101362765B1 (ko) * | 2007-11-07 | 2014-02-13 | 삼성전자주식회사 | 촬영 장치 및 그 제어 방법 |
US9639740B2 (en) * | 2007-12-31 | 2017-05-02 | Applied Recognition Inc. | Face detection and recognition |
JP4945486B2 (ja) * | 2008-03-18 | 2012-06-06 | 富士フイルム株式会社 | 画像重要度判定装置、アルバム自動レイアウト装置、プログラム、画像重要度判定方法およびアルバム自動レイアウト方法 |
CN101388114B (zh) * | 2008-09-03 | 2011-11-23 | 北京中星微电子有限公司 | 一种人体姿态估计的方法和系统 |
JP2010087599A (ja) * | 2008-09-29 | 2010-04-15 | Fujifilm Corp | 撮像装置、方法およびプログラム |
JP2010226558A (ja) * | 2009-03-25 | 2010-10-07 | Sony Corp | 画像処理装置、画像処理方法、及び、プログラム |
US8526684B2 (en) * | 2009-12-14 | 2013-09-03 | Microsoft Corporation | Flexible image comparison and face matching application |
RU2427911C1 (ru) * | 2010-02-05 | 2011-08-27 | Фирма "С1 Ко., Лтд." | Способ обнаружения лиц на изображении с применением каскада классификаторов |
TW201223209A (en) * | 2010-11-30 | 2012-06-01 | Inventec Corp | Sending a digital image method and apparatus thereof |
US9025836B2 (en) * | 2011-10-28 | 2015-05-05 | Intellectual Ventures Fund 83 Llc | Image recomposition from face detection and facial features |
CN102737235B (zh) * | 2012-06-28 | 2014-05-07 | 中国科学院自动化研究所 | 基于深度信息和彩色图像的头部姿势估计方法 |
CN103247074A (zh) * | 2013-04-23 | 2013-08-14 | 苏州华漫信息服务有限公司 | 一种结合深度信息与人脸分析技术的3d照相方法 |
JP2015118522A (ja) * | 2013-12-18 | 2015-06-25 | 富士フイルム株式会社 | アルバム生成装置,アルバム生成方法,アルバム生成プログラムおよびそのプログラムを格納した記録媒体 |
JP2015162850A (ja) * | 2014-02-28 | 2015-09-07 | 富士フイルム株式会社 | 画像合成装置,ならびにその方法,そのプログラム,およびそのプログラムを格納した記録媒体 |
-
2015
- 2015-11-26 CN CN201510847294.6A patent/CN105260732A/zh active Pending
- 2015-12-30 JP JP2016566784A patent/JP2018506755A/ja active Pending
- 2015-12-30 MX MX2017012839A patent/MX2017012839A/es unknown
- 2015-12-30 WO PCT/CN2015/099701 patent/WO2017088266A1/zh active Application Filing
- 2015-12-30 RU RU2017102520A patent/RU2665217C2/ru active
-
2016
- 2016-09-09 EP EP16188020.8A patent/EP3173970A1/en not_active Withdrawn
- 2016-10-12 US US15/291,652 patent/US20170154206A1/en not_active Abandoned
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2009048234A (ja) * | 2007-08-13 | 2009-03-05 | Takumi Vision株式会社 | 顔認識システム及び顔認識方法 |
CN104601876A (zh) * | 2013-10-30 | 2015-05-06 | 纬创资通股份有限公司 | 路人侦测方法与装置 |
CN104408426A (zh) * | 2014-11-27 | 2015-03-11 | 小米科技有限责任公司 | 人脸图像眼镜去除方法及装置 |
CN104484858A (zh) * | 2014-12-31 | 2015-04-01 | 小米科技有限责任公司 | 人物图像处理方法及装置 |
CN104820675A (zh) * | 2015-04-08 | 2015-08-05 | 小米科技有限责任公司 | 相册显示方法及装置 |
CN104794462A (zh) * | 2015-05-11 | 2015-07-22 | 北京锤子数码科技有限公司 | 一种人物图像处理方法及装置 |
Also Published As
Publication number | Publication date |
---|---|
JP2018506755A (ja) | 2018-03-08 |
RU2017102520A (ru) | 2018-07-26 |
EP3173970A1 (en) | 2017-05-31 |
CN105260732A (zh) | 2016-01-20 |
RU2665217C2 (ru) | 2018-08-28 |
US20170154206A1 (en) | 2017-06-01 |
MX2017012839A (es) | 2018-01-23 |
RU2017102520A3 (zh) | 2018-07-26 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2017088266A1 (zh) | 图片处理方法及装置 | |
EP3179711B1 (en) | Method and apparatus for preventing photograph from being shielded | |
US9674395B2 (en) | Methods and apparatuses for generating photograph | |
JP6267363B2 (ja) | 画像を撮影する方法および装置 | |
US20170034409A1 (en) | Method, device, and computer-readable medium for image photographing | |
US10115019B2 (en) | Video categorization method and apparatus, and storage medium | |
WO2017088470A1 (zh) | 图像分类方法及装置 | |
WO2016107030A1 (zh) | 通知信息显示方法及装置 | |
WO2016023340A1 (zh) | 一种切换摄像头的方法和装置 | |
WO2016112699A1 (zh) | 切换显示模式的方法及装置 | |
WO2017071050A1 (zh) | 具有触摸屏的终端的防误触方法及装置 | |
WO2017035994A1 (zh) | 外接设备的连接方法及装置 | |
US9924090B2 (en) | Method and device for acquiring iris image | |
WO2016023339A1 (zh) | 延时拍照的方法和装置 | |
WO2016127671A1 (zh) | 图像滤镜生成方法及装置 | |
CN107944367B (zh) | 人脸关键点检测方法及装置 | |
US10769743B2 (en) | Method, device and non-transitory storage medium for processing clothes information | |
CN105631803B (zh) | 滤镜处理的方法和装置 | |
US10313537B2 (en) | Method, apparatus and medium for sharing photo | |
CN105516586A (zh) | 图片拍摄方法、装置及系统 | |
WO2016110146A1 (zh) | 移动终端及虚拟按键的处理方法 | |
WO2018098860A1 (zh) | 照片合成方法及装置 | |
CN105095868A (zh) | 图片匹配方法及装置 | |
WO2016015404A1 (zh) | 呼叫转移的方法、装置及终端 | |
US10846513B2 (en) | Method, device and storage medium for processing picture |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
ENP | Entry into the national phase |
Ref document number: 2016566784 Country of ref document: JP Kind code of ref document: A |
|
ENP | Entry into the national phase |
Ref document number: 2017102520 Country of ref document: RU Kind code of ref document: A |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 15909177 Country of ref document: EP Kind code of ref document: A1 |
|
WWE | Wipo information: entry into national phase |
Ref document number: MX/A/2017/012839 Country of ref document: MX |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 15909177 Country of ref document: EP Kind code of ref document: A1 |