CN105654101A - Information processing method and electronic equipment - Google Patents

Information processing method and electronic equipment

Info

Publication number
CN105654101A
CN105654101A (application CN201410642745.8A)
Authority
CN
China
Prior art keywords
image
face similarity
value
group
similarity degree
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201410642745.8A
Other languages
Chinese (zh)
Other versions
CN105654101B (en)
Inventor
王科
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lenovo Beijing Ltd
Priority to CN201410642745.8A
Publication of CN105654101A
Application granted
Publication of CN105654101B
Legal status: Active
Anticipated expiration


Abstract

The invention discloses an information processing method applied to an electronic device. The method comprises the steps of: acquiring a first face similarity value between a first image containing first face information and a preset image containing preset face information; comparing the first face similarity value with a first preset face similarity value to obtain a first comparison result; when the first comparison result indicates that the first face similarity value is smaller than the first preset face similarity value, acquiring at least one piece of parameter information; determining, based on the first face similarity value and the at least one piece of parameter information, that the first image belongs to a first group; and adding the first image to the first group.

Description

Information processing method and electronic device
Technical field
The present invention relates to the field of electronic technology, and in particular to an information processing method and an electronic device.
Background technology
With the development of science and technology, more and more electronic devices with a photographing function have appeared, greatly facilitating people's lives, for example mobile phones with a camera function and tablet computers with a camera function. As more and more photos are stored in electronic devices, electronic devices provide various album management modes to meet photo-management needs, such as mobile phone albums and computer albums.
In the prior art, a mobile phone album manages photos in a manner similar to the way an operating system manages files, that is, photos are managed according to their own basic attributes, for example by sorting or grouping photos by attributes such as size and name. Even if a newly added photo shows the same person as an existing photo, the album still simply stores the photo in the album based on its basic attributes.
In the process of implementing the technical solutions of the embodiments of the present application, the inventor found that the prior art has the following technical problems:
First, because the album stores photos only according to their basic attributes, photos of the same person are usually placed in different groups when photos are grouped. To store photos of the same person in the same group, an electronic device in the prior art needs to detect and respond to manual operations by the user. Therefore, the prior art electronic device has the technical problem that it cannot automatically group photos according to facial features.
Second, for the same reason, storing photos of the same person in the same group requires the electronic device to detect and respond to manual user operations, which is both cumbersome and time-consuming, so there is also the problem of a poor user experience.
Third, because storing photos of the same person in the same group requires repeatedly detecting and responding to manual user operations, the electronic device has to respond to the user's manual operations many times, which increases its power consumption and ultimately shortens its service life. Therefore, the prior art electronic device also has the technical problems of high power consumption and short service life.
Summary of the invention
The embodiments of the present application provide an information processing method and an electronic device, which are used to solve the technical problem in the prior art that an electronic device cannot automatically group photos according to facial features, thereby achieving the technical effect that the electronic device automatically groups photos according to facial features.
According to a first aspect of the present application, an information processing method is provided, applied to an electronic device. The method comprises: acquiring a first face similarity value between a first image containing first face information and a preset image containing preset face information; comparing the first face similarity value with a first preset face similarity value to obtain a first comparison result; when the first comparison result indicates that the first face similarity value is smaller than the first preset face similarity value, acquiring at least one piece of parameter information; determining, based on the first face similarity value and the at least one piece of parameter information, that the group to which the first image belongs is a first group; and adding the first image to the first group.
Optionally, acquiring the first face similarity value between the first image containing the first face information and the preset image containing the preset face information comprises: acquiring N pieces of keypoint information of the first image and N pieces of preset keypoint information of the preset image, N being an integer greater than or equal to 1; comparing the N pieces of keypoint information with the N pieces of preset keypoint information to obtain N keypoint comparison results; and obtaining, based on the N keypoint comparison results, the first face similarity value between the first image and the preset image.
Optionally, when the first comparison result indicates that the first face similarity value is greater than or equal to the first preset face similarity value, the following step is performed: adding the first image to the first group.
Optionally, when the at least one parameter is specifically the image capture time of the first image, determining, based on the first face similarity value and the at least one piece of parameter information, that the group to which the first image belongs is the first group comprises: judging whether the image capture time meets a preset time condition corresponding to the first group, to obtain a first judgment result; when the first judgment result is yes, adjusting the first preset face similarity value by a first preset amount to obtain a first preset face similarity adjustment value; comparing whether the first face similarity value is greater than or equal to the first preset face similarity adjustment value, to obtain a second comparison result; and when the second comparison result is yes, determining that the first image belongs to the first group, wherein the face similarity values of the first group of images contained in the first group are all greater than or equal to the first preset face similarity adjustment value obtained by adjustment based on the first preset amount.
Optionally, when the at least one parameter is specifically the image capture time of the first image, determining, based on the first face similarity value and the at least one piece of parameter information, that the group to which the first image belongs is the first group comprises: judging whether the image capture time meets a preset time condition corresponding to the first group, to obtain a first judgment result; when the first judgment result is yes, adjusting the first face similarity value by a first preset amount to obtain a first face similarity adjustment value; comparing whether the first face similarity adjustment value is greater than or equal to the first preset face similarity value, to obtain a second comparison result; and when the second comparison result is yes, determining that the first image belongs to the first group, wherein the face similarity adjustment values, adjusted based on the first preset amount, of the first group of images contained in the first group are all greater than or equal to the first preset face similarity value.
Optionally, when the at least one parameter is specifically the image capture time of the first image and first storage path information of the first image, determining, based on the first face similarity value and the at least one piece of parameter information, that the group to which the first image belongs is the first group comprises: judging whether the image capture time meets a preset time condition corresponding to the first group, to obtain a first judgment result; when the first judgment result is no, acquiring the first storage path information; judging whether the first storage path information meets a preset storage path condition corresponding to the first group, to obtain a second judgment result; when the second judgment result is yes, adjusting the first preset face similarity value by a second preset amount to obtain a first preset face similarity adjustment value; comparing whether the first face similarity value is greater than or equal to the first preset face similarity adjustment value, to obtain a third comparison result; and when the third comparison result is yes, determining that the first image belongs to the first group, wherein the face similarity values of the first group of images contained in the first group are all greater than or equal to the first preset face similarity adjustment value obtained by adjustment based on the second preset amount.
Optionally, when the at least one parameter is specifically the image capture time of the first image and first storage path information of the first image, determining, based on the first face similarity value and the at least one piece of parameter information, that the group to which the first image belongs is the first group comprises: judging whether the image capture time meets a preset time condition corresponding to the first group, to obtain a first judgment result; when the first judgment result is no, acquiring the first storage path information; judging whether the first storage path information meets a preset storage path condition corresponding to the first group, to obtain a second judgment result; when the second judgment result is yes, adjusting the first face similarity value by a second preset amount to obtain a first face similarity adjustment value; comparing whether the first face similarity adjustment value is greater than or equal to the first preset face similarity value, to obtain a third comparison result; and when the third comparison result is yes, determining that the first image belongs to the first group, wherein the face similarity adjustment values, adjusted based on the second preset amount, of the first group of images contained in the first group are all greater than or equal to the first preset face similarity value.
According to another aspect of the present application, an electronic device is provided, comprising: a first acquisition unit, configured to acquire a first face similarity value between a first image containing first face information and a preset image containing preset face information; a comparison unit, configured to compare the first face similarity value with a first preset face similarity value to obtain a first comparison result; a second acquisition unit, configured to acquire at least one piece of parameter information when the first comparison result indicates that the first face similarity value is smaller than the first preset face similarity value; a determination unit, configured to determine, based on the first face similarity value and the at least one piece of parameter information, that the group to which the first image belongs is a first group; and an adding unit, configured to add the first image to the first group.
Optionally, the first acquisition unit comprises: a keypoint acquisition module, configured to acquire N pieces of keypoint information of the first image and N pieces of preset keypoint information of the preset image, N being an integer greater than or equal to 1; a keypoint comparison module, configured to compare the N pieces of keypoint information with the N pieces of preset keypoint information to obtain N keypoint comparison results; and a similarity acquisition module, configured to obtain, based on the N keypoint comparison results, the first face similarity value between the first image and the preset image.
Optionally, when the first comparison result indicates that the first face similarity value is greater than or equal to the first preset face similarity value, the following step is performed: adding the first image to the first group.
Optionally, the determination unit comprises: a first judgment module, configured to, when the at least one parameter is specifically the image capture time of the first image, judge whether the image capture time meets a preset time condition corresponding to the first group, to obtain a first judgment result; a first adjustment module, configured to, when the first judgment result is yes, adjust the first preset face similarity value by a first preset amount to obtain a first preset face similarity adjustment value; a first comparison module, configured to compare whether the first face similarity value is greater than or equal to the first preset face similarity adjustment value, to obtain a second comparison result; and a first determination module, configured to, when the second comparison result is yes, determine that the first image belongs to the first group, wherein the face similarity values of the first group of images contained in the first group are all greater than or equal to the first preset face similarity adjustment value obtained by adjustment based on the first preset amount.
Optionally, the determination unit comprises: a first judgment module, configured to, when the at least one parameter is specifically the image capture time of the first image, judge whether the image capture time meets a preset time condition corresponding to the first group, to obtain a first judgment result; a first adjustment module, configured to, when the first judgment result is yes, adjust the first face similarity value by a first preset amount to obtain a first face similarity adjustment value; a first comparison module, configured to compare whether the first face similarity adjustment value is greater than or equal to the first preset face similarity value, to obtain a second comparison result; and a first determination module, configured to, when the second comparison result is yes, determine that the first image belongs to the first group, wherein the face similarity adjustment values, adjusted based on the first preset amount, of the first group of images contained in the first group are all greater than or equal to the first preset face similarity value.
Optionally, the determination unit comprises:
a second judgment module, configured to, when the at least one parameter is specifically the image capture time of the first image and first storage path information of the first image, judge whether the image capture time meets a preset time condition corresponding to the first group, to obtain a first judgment result; a first acquisition module, configured to acquire the first storage path information when the first judgment result is no; a third judgment module, configured to judge whether the first storage path information meets a preset storage path condition corresponding to the first group, to obtain a second judgment result; a second adjustment module, configured to, when the second judgment result is yes, adjust the first preset face similarity value by a second preset amount to obtain a first preset face similarity adjustment value; a second comparison module, configured to compare whether the first face similarity value is greater than or equal to the first preset face similarity adjustment value, to obtain a third comparison result; and a second determination module, configured to, when the third comparison result is yes, determine that the first image belongs to the first group, wherein the face similarity values of the first group of images contained in the first group are all greater than or equal to the first preset face similarity adjustment value obtained by adjustment based on the second preset amount.
Optionally, the determination unit comprises: a second judgment module, configured to, when the at least one parameter is specifically the image capture time of the first image and first storage path information of the first image, judge whether the image capture time meets a preset time condition corresponding to the first group, to obtain a first judgment result; a first acquisition module, configured to acquire the first storage path information when the first judgment result is no; a third judgment module, configured to judge whether the first storage path information meets a preset storage path condition corresponding to the first group, to obtain a second judgment result; a second adjustment module, configured to, when the second judgment result is yes, adjust the first face similarity value by a second preset amount to obtain a first face similarity adjustment value; a second comparison module, configured to compare whether the first face similarity adjustment value is greater than or equal to the first preset face similarity value, to obtain a third comparison result; and a second determination module, configured to, when the third comparison result is yes, determine that the first image belongs to the first group, wherein the face similarity adjustment values, adjusted based on the second preset amount, of the first group of images contained in the first group are all greater than or equal to the first preset face similarity value.
One or more of the above technical solutions in the embodiments of the present application have at least one or more of the following technical effects and advantages:
First, with the technical solutions in the embodiments of the present application, face recognition is first used to perform an initial screening that groups images newly imported by the user, and a multi-condition judgment is then used to perform a secondary screening on the images that were not grouped. Compared with the prior art, in which a mobile phone cannot group images according to facial features and the electronic device can only group images by responding to the user's manual grouping operations, this effectively solves the technical problem in the prior art that photos cannot be automatically grouped according to facial features, thereby achieving the technical effect that the electronic device automatically groups photos according to facial features.
Second, with the technical solutions in the embodiments of the present application, face recognition is first used to perform an initial screening that groups newly imported images, and a multi-condition judgment is then used to perform a secondary screening on the images that were not grouped. Compared with the prior art, in which the electronic device can only group images by responding to the user's manual grouping operations, which is cumbersome and time-consuming, this effectively solves the problem of a poor user experience, thereby achieving the effect of improving the user experience.
Third, with the technical solutions in the embodiments of the present application, face recognition is first used to perform an initial screening that groups newly imported images, and a multi-condition judgment is then used to perform a secondary screening on the images that were not grouped. Compared with the prior art, in which the electronic device needs to respond to the user's manual operations repeatedly, which increases its power consumption and ultimately shortens its service life, this effectively solves the technical problems of high power consumption and short service life of the prior art electronic device, thereby achieving the technical effects of reducing the power consumption of the electronic device and extending its service life.
Brief description of the drawings
Fig. 1 is a flowchart of an information processing method in an embodiment of the present application;
Fig. 2 is a flowchart of a specific implementation of step S101 of the information processing method in an embodiment of the present application;
Fig. 3 shows the first implementation of step S104 under the first parameter in the information processing method of an embodiment of the present application;
Fig. 4 shows the second implementation of step S104 under the first parameter in the information processing method of an embodiment of the present application;
Fig. 5 shows the first implementation of step S104 under the second parameter in the information processing method of an embodiment of the present application;
Fig. 6 shows the second implementation of step S104 under the second parameter in the information processing method of an embodiment of the present application;
Fig. 7 is a structural diagram of an electronic device in an embodiment of the present application.
Detailed description of the embodiments
The embodiments of the present application provide an information processing method and an electronic device, which are used to solve the technical problem in the prior art that an electronic device cannot automatically group photos according to facial features, thereby achieving the technical effect that the electronic device automatically groups photos according to facial features.
To solve the above technical problem, the general idea of the technical solutions in the embodiments of the present application is as follows: an information processing method, applied to an electronic device, comprises: acquiring a first face similarity value between a first image containing first face information and a preset image containing preset face information; comparing the first face similarity value with a first preset face similarity value to obtain a first comparison result; when the first comparison result indicates that the first face similarity value is smaller than the first preset face similarity value, acquiring at least one piece of parameter information; determining, based on the first face similarity value and the at least one piece of parameter information, that the group to which the first image belongs is a first group; and adding the first image to the first group.
In the above technical solution, face recognition is first used to perform an initial screening that groups images newly imported by the user, and a multi-condition judgment is then used to perform a secondary screening on the images that were not grouped. Compared with the prior art, in which a mobile phone cannot group images according to facial features and the electronic device can only group images by responding to the user's manual grouping operations, this effectively solves the technical problem in the prior art that photos cannot be automatically grouped according to facial features, thereby achieving the technical effect that the electronic device automatically groups photos according to facial features.
To better understand the above technical solutions, the technical solutions in the embodiments of the present application are described in detail below with reference to the accompanying drawings and specific implementations.
Embodiment one
To allow those skilled in the art to clearly and completely understand the technical solutions in the embodiments of the present application, the following specific description takes a mobile phone as an example of the electronic device and, with reference to examples, introduces the specific implementation process of the method in the embodiments of the present application:
Referring to Fig. 1, embodiment one of the present application provides a specific implementation of an information processing method. The method comprises the following steps:
S101: acquiring a first face similarity value between a first image containing first face information and a preset image containing preset face information;
S102: comparing the first face similarity value with a first preset face similarity value to obtain a first comparison result;
S103: when the first comparison result indicates that the first face similarity value is smaller than the first preset face similarity value, acquiring at least one piece of parameter information;
S104: determining, based on the first face similarity value and the at least one piece of parameter information, that the group to which the first image belongs is a first group;
S105: adding the first image to the first group.
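For orientation only, the overall flow of S101 to S105 can be sketched as follows. This is a minimal Python illustration, not part of the patent; the helper names, the dictionary field capture_date, and the 30% threshold reduction (taken from the worked example later in this description) are all assumptions:

from typing import Dict, List


def meets_time_condition(image: Dict, group: List[Dict]) -> bool:
    # One possible reading of the "preset time condition": the new image was
    # captured on the same date as an image already in the group.
    return any(image["capture_date"] == g["capture_date"] for g in group)


def group_new_image(image: Dict, similarity: float,
                    preset_threshold: float, first_group: List[Dict]) -> bool:
    # S102: first comparison with the preset face similarity value.
    if similarity >= preset_threshold:
        first_group.append(image)   # images above the threshold are added directly
        return True
    # S103: below the threshold, fall back to parameter information (capture time here).
    # S104: lower the threshold (30% in the worked example) when the time condition holds.
    if meets_time_condition(image, first_group) and similarity >= preset_threshold * 0.7:
        first_group.append(image)   # S105
        return True
    return False


# A photo shot on the same day as the group, with similarity 7.5 < 9.0,
# is still accepted because 7.5 >= 9.0 * 0.7 = 6.3.
group = [{"name": "image1", "capture_date": "2014-01-01"}]
new = {"name": "image3", "capture_date": "2014-01-01"}
print(group_new_image(new, 7.5, 9.0, group))  # True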
Further, referring to Fig. 2, the specific implementation of step S101 comprises:
S201: acquiring N pieces of keypoint information of the first image and N pieces of preset keypoint information of the preset face image, N being an integer greater than or equal to 1;
S202: comparing the N pieces of keypoint information with the N pieces of preset keypoint information to obtain N keypoint comparison results;
S203: obtaining, based on the N keypoint comparison results, the first face similarity value between the first image and the preset face image.
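As an illustration of S201 to S203, a rough Python sketch is given below; the patent does not disclose the per-keypoint scoring formula, so the ratio-based score, the function names and the centimetre values (taken from the example later in this description) are assumptions:

PRESET_KEYPOINTS = {"eye_distance": 6.0,   # cm
                    "nose_size": 4.0,      # cm^2
                    "mouth_size": 3.0,     # cm^2
                    "ear_height": 5.0}     # cm
WEIGHTS = {"eye_distance": 0.25, "nose_size": 0.25,
           "mouth_size": 0.25, "ear_height": 0.25}


def keypoint_similarity(measured: float, preset: float) -> float:
    # S202 stand-in: a 0-10 score that is higher the closer the measured value
    # is to the preset value; the patent only says a "relatively complex
    # formula" is used, so this ratio-based score is purely illustrative.
    if preset == 0:
        return 0.0
    return max(0.0, 10.0 * (1.0 - abs(measured - preset) / preset))


def first_face_similarity(keypoints: dict) -> float:
    # S203: weighted sum of the per-keypoint comparison results.
    return sum(WEIGHTS[k] * keypoint_similarity(v, PRESET_KEYPOINTS[k])
               for k, v in keypoints.items())


print(first_face_similarity({"eye_distance": 6.0, "nose_size": 3.9,
                             "mouth_size": 3.0, "ear_height": 5.1}))  # about 9.89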
Further, referring to Fig. 3, after step S102 is executed, when the first comparison result indicates that the first face similarity value is greater than or equal to the first preset face similarity value, the mobile phone directly performs the step of adding the first image to the first group; when the first comparison result indicates that the first face similarity value is smaller than the first preset face similarity value, the mobile phone performs step S103: acquiring at least one piece of parameter information.
Further, in the present application the acquired parameter information includes at least the following two kinds:
First parameter: the image capture time of the first image.
Second parameter: the image capture time of the first image and the first storage path information of the first image.
Of course, the parameters are not limited to these two kinds. When performing step S104, the mobile phone can perform different methods according to the different parameters, and each kind of parameter can be realized in two implementations, so the present application has at least four specific implementations, though it is not limited to these four. The four implementations are described in detail below:
The first implementation under the first parameter, referring to Fig. 3, performs the following steps:
S401: judging whether the image capture time meets a preset time condition corresponding to the first group, to obtain a first judgment result;
S402: when the first judgment result is yes, adjusting the first preset face similarity value by a first preset amount to obtain a first preset face similarity adjustment value;
S403: comparing whether the first face similarity value is greater than or equal to the first preset face similarity adjustment value, to obtain a second comparison result;
S404: when the second comparison result is yes, determining that the first image belongs to the first group, wherein the face similarity values of the first group of images contained in the first group are all greater than or equal to the first preset face similarity adjustment value obtained by adjustment based on the first preset amount.
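A minimal sketch of S401 to S404 (Python, illustrative only; the 30% reduction and the date-set representation of the preset time condition are taken from the worked example below and are not mandated by the claims):

def first_parameter_mode1(similarity, capture_date, group_dates,
                          preset_threshold, first_preset_amount=0.3):
    # S401: does the capture time meet the group's preset time condition?
    if capture_date not in group_dates:
        return False
    # S402: first preset face similarity adjustment value (threshold lowered).
    adjusted_threshold = preset_threshold * (1.0 - first_preset_amount)
    # S403/S404: the image joins the group if it clears the lowered threshold.
    return similarity >= adjusted_threshold

# Image 3 (7.5) passes against 9.0 * 0.7 = 6.3; image 5 (5.65) does not.
print(first_parameter_mode1(7.5, "2014-01-01", {"2014-01-01"}, 9.0))   # True
print(first_parameter_mode1(5.65, "2014-01-01", {"2014-01-01"}, 9.0))  # False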
The second implementation under the first parameter, referring to Fig. 4, performs the following steps:
S501: judging whether the image capture time meets a preset time condition corresponding to the first group, to obtain a first judgment result;
S502: when the first judgment result is yes, adjusting the first face similarity value by a first preset amount to obtain a first face similarity adjustment value;
S503: comparing whether the first face similarity adjustment value is greater than or equal to the first preset face similarity value, to obtain a second comparison result;
S504: when the second comparison result is yes, determining that the first image belongs to the first group, wherein the face similarity adjustment values, adjusted based on the first preset amount, of the first group of images contained in the first group are all greater than or equal to the first preset face similarity value.
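The corresponding sketch of S501 to S504, which raises the image's similarity value instead of lowering the threshold (again illustrative; the 30% figure follows the worked example):

def first_parameter_mode2(similarity, capture_date, group_dates,
                          preset_threshold, first_preset_amount=0.3):
    # S501: preset time condition, as in the first implementation.
    if capture_date not in group_dates:
        return False
    # S502: raise the image's similarity value by the first preset amount.
    adjusted_similarity = similarity * (1.0 + first_preset_amount)
    # S503/S504: compare the adjusted similarity against the unchanged threshold.
    return adjusted_similarity >= preset_threshold

# Image 3: 7.5 * 1.3 = 9.75 >= 9.0, accepted; image 5: 5.65 * 1.3 = 7.345 < 9.0, rejected.
print(first_parameter_mode2(7.5, "2014-01-01", {"2014-01-01"}, 9.0))   # True
print(first_parameter_mode2(5.65, "2014-01-01", {"2014-01-01"}, 9.0))  # False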
The first implementation under the second parameter, referring to Fig. 5, performs the following steps:
S601: judging whether the image capture time meets a preset time condition corresponding to the first group, to obtain a first judgment result;
S602: when the first judgment result is no, acquiring the first storage path information;
S603: judging whether the first storage path information meets a preset storage path condition corresponding to the first group, to obtain a second judgment result;
S604: when the second judgment result is yes, adjusting the first preset face similarity value by a second preset amount to obtain a first preset face similarity adjustment value;
S605: comparing whether the first face similarity value is greater than or equal to the first preset face similarity adjustment value, to obtain a third comparison result;
S606: when the third comparison result is yes, determining that the first image belongs to the first group, wherein the face similarity values of the first group of images contained in the first group are all greater than or equal to the first preset face similarity adjustment value obtained by adjustment based on the second preset amount.
The second implementation under the second parameter, referring to Fig. 6, performs the following steps:
S701: judging whether the image capture time meets a preset time condition corresponding to the first group, to obtain a first judgment result;
S702: when the first judgment result is no, acquiring the first storage path information;
S703: judging whether the first storage path information meets a preset storage path condition corresponding to the first group, to obtain a second judgment result;
S704: when the second judgment result is yes, adjusting the first face similarity value by a second preset amount to obtain a first face similarity adjustment value;
S705: comparing whether the first face similarity adjustment value is greater than or equal to the first preset face similarity value, to obtain a third comparison result;
S706: when the third comparison result is yes, determining that the first image belongs to the first group, wherein the face similarity adjustment values, adjusted based on the second preset amount, of the first group of images contained in the first group are all greater than or equal to the first preset face similarity value.
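A sketch of the storage-path fallback of S601 to S606; the folder name, the 20% adjustment and the set-based conditions are assumptions drawn from the worked example below, and S701 to S706 would be the symmetric variant that raises the similarity value instead of lowering the threshold:

def second_parameter_branch(similarity, capture_date, storage_path,
                            group_dates, group_paths, preset_threshold,
                            second_preset_amount=0.2):
    # S601: first judgment result; the "yes" branch is handled by the
    # time-based implementations shown earlier and is skipped here.
    if capture_date in group_dates:
        return False
    # S602/S603: acquire the storage path and test the preset path condition.
    if storage_path not in group_paths:
        return False
    # S604: lower the preset threshold by the second preset amount (20% here).
    adjusted_threshold = preset_threshold * (1.0 - second_preset_amount)
    # S605/S606: accept the image when it clears the adjusted threshold.
    return similarity >= adjusted_threshold

# Image 4 of the example: different capture date, same (hypothetical) folder,
# similarity 7.55 >= 9.0 * 0.8 = 7.2, so it joins the first group.
print(second_parameter_branch(7.55, "2014-02-07", "folder1",
                              {"2014-01-01"}, {"folder1"}, 9.0))  # True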
Through any of the above four implementations, the mobile phone can determine the group to which the first image belongs. Afterwards, the mobile phone performs step S105, adds the first image to the corresponding group, and completes the automatic grouping of images.
The specific implementation process of the information processing method in the embodiments of the present application is described below through a concrete example:
The mobile phone of user A contains a face image of user A in a named image category. The user imports 6 new images to the mobile phone from a computer. The details of these 6 images are as follows: the 1st image is a frontal photo of user A that clearly reflects user A's facial features; the 2nd image is also a frontal photo of user A and can also clearly reflect user A's facial features; the 3rd image is a profile photo of user A and cannot clearly reflect user A's facial features; the 4th image is a frontal photo of user A, but because of problems with the exposure angle it cannot clearly reflect user A's facial features; the 5th image does not contain user A and is a frontal photo of user B; the 6th image contains no face and is a landscape photo. At the same time, the storage attributes of these 6 images on the computer are: the 1st, 2nd, 3rd and 4th images are stored in a first folder, the 5th image is stored in a second folder, and the 6th image is stored in a third folder; the 1st, 2nd, 3rd and 5th images were taken on the same day, while the 4th image was taken at a different time from the above 4 images.
After the mobile phone detects these 6 images, it first places them in the unnamed image category of the mobile phone and starts to perform S101: acquiring a first face similarity value between the first image containing first face information and the preset image containing preset face information. Continuing the above example, because the 6th image is a landscape photo and contains no face information, the mobile phone directly places the 6th image in the landscape group of the unnamed image category; because the 1st to 5th images all contain facial features, the mobile phone performs steps S201, S202 and S203 for them.
First, step S201 is performed: acquiring N pieces of keypoint information of the first image and N pieces of preset keypoint information of the preset face image, N being an integer greater than or equal to 1. Continuing the above example, assume that there are 4 pieces of keypoint information and 4 pieces of preset keypoint information, namely: the distance between the two eyes, the size of the nose, the size of the mouth, and the height of the two ears. The mobile phone then obtains the corresponding keypoint information of the 1st to 5th images, as shown in Table 1:
Table 1:
It should be noted that the data in brackets are the real data in the images, and the data without brackets are the data detected by the mobile phone. Because the 3rd image is a profile photo of user A and the exposure of the 4th image is poor, the mobile phone's acquisition of the keypoints is affected, so for the 3rd and 4th images there is some gap between the detected values and the actual values. The mobile phone then obtains the 4 pieces of preset keypoint information of user A's face in the preset image, namely: a distance between the two eyes of 6 centimetres, a nose size of 4 square centimetres, a mouth size of 3 square centimetres, and a height of the two ears of 5 centimetres.
After step S201 is executed, step S202 is performed: comparing the N pieces of keypoint information with the N pieces of preset keypoint information to obtain N keypoint comparison results. Continuing the above example, the 4 pieces of keypoint information of the 1st to 5th images are compared with the keypoint information of user A in the preset image, giving Table 2:
Table 2:
              Eye distance similarity   Nose similarity   Mouth similarity   Ear height similarity
1st image     9.8                       9.7               10                 9.8
2nd image     9.0                       9.0               9.0                9.0
3rd image     8.6                       6.5               5.3                9.6
4th image     10                        6.0               5.0                9.2
5th image     8.8                       6.2               0                  7.6
It should be noted that the similarity values in Table 2 are obtained according to a relatively complex formula, which is not repeated here.
After step S202 is executed, step S203 is performed: obtaining, based on the N keypoint comparison results, the first face similarity value between the first image and the preset image. Assume that the formula for calculating the first face similarity value in the embodiment of the present application is weight 1 * keypoint 1 similarity + weight 2 * keypoint 2 similarity + weight 3 * keypoint 3 similarity + weight 4 * keypoint 4 similarity, and assume that weight 1 to weight 4 are all 0.25. According to this formula, the first face similarity value of the 1st image is 0.25*9.8+0.25*9.7+0.25*10+0.25*9.8=9.825, the first face similarity value of the 2nd image is 0.25*9+0.25*9+0.25*9+0.25*9=9, the first face similarity value of the 3rd image is 0.25*8.6+0.25*6.5+0.25*5.3+0.25*9.6=7.5, the first face similarity value of the 4th image is 0.25*10+0.25*6.0+0.25*5+0.25*9.2=7.55, and the first face similarity value of the 5th image is 0.25*8.8+0.25*6.2+0.25*0+0.25*7.6=5.65.
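The weighted sums above can be checked directly with a few lines of Python (values taken from Table 2, weights all 0.25 as stated; the dictionary layout is only for illustration):

weights = [0.25, 0.25, 0.25, 0.25]
scores = {
    "1st image": [9.8, 9.7, 10.0, 9.8],   # -> 9.825
    "2nd image": [9.0, 9.0, 9.0, 9.0],    # -> 9.0
    "3rd image": [8.6, 6.5, 5.3, 9.6],    # -> 7.5
    "4th image": [10.0, 6.0, 5.0, 9.2],   # -> 7.55
    "5th image": [8.8, 6.2, 0.0, 7.6],    # -> 5.65
}
for name, s in scores.items():
    print(name, sum(w * v for w, v in zip(weights, s)))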
After step S203 is executed, step S102 is performed: comparing the first face similarity value with the first preset face similarity value to obtain a first comparison result. Continuing the above example, assume that the first preset face similarity value in the embodiment of the present application is 9. The comparison of the first face similarity values of the 1st to 5th images with the first preset face similarity value is shown in Table 3:
Table 3:
              First face similarity   First preset face similarity   First comparison result
1st image     9.825                   9.0                            Greater than
2nd image     9.0                     9.0                            Equal to
3rd image     7.5                     9.0                            Less than
4th image     7.55                    9.0                            Less than
5th image     5.65                    9.0                            Less than
When the first comparison result indicates that the first face similarity value is greater than or equal to the first preset face similarity value, the following step is performed: adding the first image to the first group. Continuing the above example, because the first comparison results of the 1st and 2nd images meet the condition, the mobile phone places the 1st and 2nd images in the first group of the unnamed image category.
When the first comparison result indicates that the first face similarity value is smaller than the first preset face similarity value, the mobile phone performs step S103: acquiring at least one piece of parameter information. Continuing the above example, in the present application the acquired parameter information has the following two kinds (though it is of course not limited to these two), respectively:
acquiring the image capture time of the first image; or
acquiring the image capture time of the first image and the first storage path information of the first image.
For these two kinds of parameters, the mobile phone has two implementations each in the concrete implementation process. Specifically, the first implementation under the first parameter is as follows:
First, step S401 is performed: judging whether the image capture time meets the preset time condition corresponding to the first group, to obtain a first judgment result. Continuing the above example, because the first comparison results of the 3rd, 4th and 5th images do not meet the condition, these three images have not yet been grouped, so the capture times of these three images are acquired first. The 3rd and 5th images have the same capture time, while the capture time of the 4th image differs from these two images. Assume that the capture time of the 3rd and 5th images is January 1, 2014, and the capture time of the 4th image is February 7, 2014; the 1st and 2nd images in the first group were taken on the same day as the 3rd and 5th images, i.e. January 1, 2014. The preset time condition obtained by the mobile phone at this point can be that the capture time is January 1, 2014, or it can be a range, for example from January 1, 2014 to January 31, 2014. The mobile phone judges whether the capture times of the 3rd to 5th images meet the preset time condition, so the first judgment result is specifically: the 3rd image meets the preset time condition, the 4th image does not meet the preset time condition, and the 5th image meets the preset time condition.
After step S401 is executed, step S402 is performed: when the first judgment result is yes, adjusting the first preset face similarity value by the first preset amount to obtain the first preset face similarity adjustment value. Continuing the above example, because the first judgment results of the 3rd and 5th images are both yes, the mobile phone reduces the first preset face similarity value of the preset image by 30% to obtain the first preset face similarity adjustment value. Of course, the adjustment is not limited to a 30% reduction; in the concrete implementation process, the first preset amount can be set to a suitable value according to the accuracy of the algorithm. In this case, the first preset face similarity adjustment value is 6.3.
After step S402 is executed, step S403 is performed: comparing whether the first face similarity value is greater than or equal to the first preset face similarity adjustment value, to obtain a second comparison result. Continuing the above example, in response to step S403 the mobile phone compares the first face similarity values of the 3rd and 5th images with the first preset face similarity adjustment value. Because the first face similarity value of the 3rd image is 7.5 and the first face similarity value of the 5th image is 5.65, the second comparison result of the 3rd image is yes and the second comparison result of the 5th image is no.
After step S403 is executed, step S404 is performed: when the second comparison result is yes, determining that the first image belongs to the first group, wherein the face similarity values of the first group of images contained in the first group are all greater than or equal to the first preset face similarity adjustment value obtained by adjustment based on the first preset amount. Continuing the above example, because the second comparison result of the 3rd image is yes and that of the 5th image is no, the mobile phone determines that the group to which the 3rd image belongs is the first group, and at this point the first face similarity values of the 1st, 2nd and 3rd images in the first group are all greater than 6.3.
The second implementation under the first parameter:
First, step S501 is performed: judging whether the image capture time meets the preset time condition corresponding to the first group, to obtain a first judgment result. Continuing the above example, because the first comparison results of the 3rd, 4th and 5th images do not meet the condition, these three images have not yet been grouped, so the capture times of these three images are acquired first. The 3rd and 5th images have the same capture time, while the capture time of the 4th image differs from these two images. Assume that the capture time of the 3rd and 5th images is January 1, 2014, and the capture time of the 4th image is February 7, 2014; the 1st and 2nd images in the first group were taken on the same day as the 3rd and 5th images, i.e. January 1, 2014. The preset time condition obtained by the mobile phone at this point can be that the capture time is January 1, 2014, or it can be a range, for example from January 1, 2014 to January 31, 2014. The mobile phone judges whether the capture times of the 3rd to 5th images meet the preset time condition; the first judgment result is specifically that the 3rd image meets the preset time condition, the 4th image does not meet the preset time condition, and the 5th image meets the preset time condition.
After step S501 is completed, step S502 is performed: when the first judgment result is yes, adjusting the first face similarity value by the first preset amount to obtain a first face similarity adjustment value. Continuing the above example, because the first judgment results of the 3rd and 5th images are both yes, the mobile phone increases the first face similarity values of the 3rd and 5th images by 30% respectively, obtaining the first face similarity adjustment value of the 3rd image and the first face similarity adjustment value of the 5th image. Of course, the adjustment is not limited to a 30% increase; in the concrete implementation process, the first preset amount can be set to a suitable value according to the accuracy of the algorithm. In this case, the first face similarity adjustment value of the 3rd image is 9.75, the first face similarity adjustment value of the 5th image is 7.345, and the first preset face similarity value is 9.0.
After step S502 is executed, step S503 is performed: comparing whether the first face similarity adjustment value is greater than or equal to the first preset face similarity value, to obtain a second comparison result. Continuing the above example, in response to step S503 the mobile phone compares the first face similarity adjustment values of the 3rd and 5th images with the first preset face similarity value; the second comparison result of the 3rd image is yes and the second comparison result of the 5th image is no.
After step S503 is completed, step S504 is performed: when the second comparison result is yes, determining that the first image belongs to the first group, wherein the face similarity adjustment values, adjusted based on the first preset amount, of the first group of images contained in the first group are all greater than or equal to the first preset face similarity value. Continuing the above example, because the second comparison result of the 3rd image is yes and that of the 5th image is no, the mobile phone determines that the group to which the 3rd image belongs is the first group. At this point the first face similarity value of the 1st image is 9.825, the first face similarity value of the 2nd image is 9.0 and the first face similarity adjustment value of the 3rd image is 9.75, all greater than or equal to the first preset face similarity value of 9.
Of course, under the second parameter the mobile phone also has two specific implementations when performing step S104, which are described below in combination with the above example.
The first implementation under the second parameter:
First, step S601 is performed: judging whether the image capture time meets the preset time condition corresponding to the first group, to obtain a first judgment result. Continuing the above example, because the first comparison results of the 3rd, 4th and 5th images do not meet the condition, these three images have not yet been grouped, so the capture times of these three images are acquired first. The 3rd and 5th images have the same capture time, while the capture time of the 4th image differs from these two images. Assume that the capture time of the 3rd and 5th images is January 1, 2014, and the capture time of the 4th image is February 7, 2014; the 1st and 2nd images in the first group were taken on the same day as the 3rd and 5th images, i.e. January 1, 2014. The preset time condition obtained by the mobile phone at this point can be that the capture time is January 1, 2014, or it can be a range, for example from January 1, 2014 to January 31, 2014. The mobile phone judges whether the capture times of the 3rd to 5th images meet the preset time condition; the first judgment result is specifically that the 3rd image meets the preset time condition, the 4th image does not meet the preset time condition, and the 5th image meets the preset time condition.
After step S601 is executed, step S602 is performed: when the first judgment result is no, acquiring the first storage path information. Continuing the above example, because the 4th image does not meet the preset time condition, the mobile phone now acquires the storage address of the 4th image on the computer, which, as mentioned above, is the first folder.
After step S602 is executed, step S603 is performed: judging whether the first storage path information meets the preset storage path condition corresponding to the first group, to obtain a second judgment result. Continuing the above example, after the mobile phone obtains that the storage address of the 4th image on the computer is the first folder, the mobile phone obtains that the storage location of the 1st and 2nd images in the first group on the computer is also the first folder, so the first storage path information of the 4th image meets the preset storage path condition corresponding to the first group, and the second judgment result is yes.
After step S603 is executed, step S604 is performed: when the second judgment result is yes, adjusting the first preset face similarity value by the second preset amount to obtain the first preset face similarity adjustment value. Continuing the above example, because the second judgment result of the 4th image is yes, the mobile phone, in response to step S604, reduces the first preset face similarity value by 20% to obtain the first preset face similarity adjustment value. Of course, the adjustment is not limited to a 20% reduction; in the concrete implementation process, the second preset amount can be set to a suitable value according to the accuracy of the algorithm. In this case, the first preset face similarity adjustment value is 7.2.
After step S604 is executed, step S605 is performed: comparing whether the first face similarity value is greater than or equal to the first preset face similarity adjustment value, to obtain a third comparison result. Continuing the above example, after obtaining the first preset face similarity adjustment value, the mobile phone compares whether the first face similarity value of the 4th image is greater than or equal to the first preset face similarity adjustment value. Because the face similarity value of the 4th image is 7.55 and the first preset face similarity adjustment value is 7.2, the third comparison result of the 4th image is yes.
After step S605 is executed, step S606 is performed: when the third comparison result is yes, determining that the first image belongs to the first group, wherein the face similarity values of the first group of images contained in the first group are all greater than or equal to the first preset face similarity adjustment value obtained by adjustment based on the second preset amount. Continuing the above example, because the third comparison result of the 4th image is yes, the group to which the 4th image belongs is determined to be the first group, and at this point the first face similarity values of the 1st, 2nd and 4th images are all greater than the first preset face similarity adjustment value of 7.2.
The second implementation under the second parameter:
First, step S701 is performed: judging whether the image capture time meets the preset time condition corresponding to the first group, to obtain a first judgment result. Continuing the above example, because the first comparison results of the 3rd, 4th and 5th images do not meet the condition, these three images have not yet been grouped, so the capture times of these three images are acquired first. The 3rd and 5th images have the same capture time, while the capture time of the 4th image differs from these two images. Assume that the capture time of the 3rd and 5th images is January 1, 2014, and the capture time of the 4th image is February 7, 2014; the 1st and 2nd images in the first group were taken on the same day as the 3rd and 5th images, i.e. January 1, 2014. The preset time condition obtained by the mobile phone at this point can be that the capture time is January 1, 2014, or it can be a range, for example from January 1, 2014 to January 31, 2014. The mobile phone judges whether the capture times of the 3rd to 5th images meet the preset time condition; the first judgment result is specifically that the 3rd image meets the preset time condition, the 4th image does not meet the preset time condition, and the 5th image meets the preset time condition.
After step S701 completes, continue completing steps S702: when described first judged result is no, obtain described first storage path information, continue example above specifically, owing to the 4th image does not meet default time conditions, now, mobile phone can obtain the storage address of the 5th image storage when computer, and storing address as the aforementioned is first folder.
After step S702 completes, continue completing steps S703: judge whether described first storage path information meets the default storage path condition corresponding with described first group, obtain the 2nd judged result, continue example above specifically, obtain after the 4th image storage storage address in a computer is first folder at mobile phone, mobile phone obtains the 1st in the first group image and the 2nd image is first folder in the storage position of calculating machine, so the first storage path information of the 4th image meets the default storage path information of the correspondence of the first group, so the 2nd judged result is yes.
After performing step S703, perform step S704: when described 2nd judged result is for being, described the first face similarity degree value is adjusted the 2nd preset value, obtain the first face similarity degree adjusted value, continue example above specifically, owing to the 2nd judged result of the 4th image is yes, so mobile phone response of step S704, the first face similarity degree value of the 4th image is improved 20%, certainly raising 20% it is not limited only to, in concrete implementation process, can according to the tolerance range of algorithm, 2nd preset value is made suitable value, now, the first face similarity degree adjusted value of the 4th image is 9.06, first default human face similarity angle value is 9.0.
After execution of step S704, perform step S705: relatively whether described the first face similarity degree adjusted value is more than or equal to the described first default human face similarity angle value, obtain the 3rd comparative result, continue example above specifically, mobile phone is after the first face similarity degree adjusted value obtaining the 4th image, relatively whether the first face similarity degree adjusted value of the 4th image is more than or equal to the first default human face similarity angle value, owing to the first default human face similarity angle value is 9, and the first default human face similarity degree adjusted value is 9.06, so, 3rd comparative result of the 4th image is yes.
After execution of step S705, perform step S706: when described 3rd comparative result is for being, determine that described first image belongs to described first group, wherein, the human face similarity degree adjusted value carrying out adjusting based on described 2nd preset value of the first group image comprised in described first group is all more than or equal to described first and presets human face similarity angle value, continue example above specifically, owing to the 3rd comparative result of the 4th image is, so the affiliated group of the 4th image is just defined as the first group, and now, the first face similarity degree value of the 1st image is 9.825, the first face similarity degree value of the 2nd image is 9.0, the first face similarity degree adjusted value of the 4th image is 9.06, all it is more than or equal to first and presets human face similarity angle value 9.0.
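Under the same assumptions as the previous sketch (illustrative names and example values only), the second implementation mode (steps S701 to S706) raises the similarity value instead of lowering the threshold:

def joins_group_by_boosted_similarity(similarity, capture_date, storage_path,
                                      group_dates, group_path,
                                      preset_threshold=9.0, second_preset=0.20):
    """Second implementation mode: raise the similarity value, then compare it
    against the unchanged preset threshold."""
    if capture_date in group_dates:
        return None                        # time condition met: other branch applies
    if storage_path != group_path:
        return False
    adjusted_similarity = similarity * (1 + second_preset)   # 7.55 -> 9.06
    return adjusted_similarity >= preset_threshold           # third comparison

print(joins_group_by_boosted_similarity(7.55, "2014-02-07", "first_folder",
                                        {"2014-01-01"}, "first_folder"))  # True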
Whichever implementation mode is used, the mobile phone can change the concrete implementation according to the parameter information that is available: after the first round of face recognition, the parameter information is used to perform a second round of recognition on those images whose first-round recognition was inaccurate because of lighting or other factors, and the groups of the remaining images are thereby re-determined. In either case, after step S104 the mobile phone performs step S105: the first image is added to the first group. Combining the examples above: with the first kind of parameter, the 1st image, the 2nd image and the 3rd image are determined to belong to the first group and are added to it; with the second kind of parameter, the 1st image, the 2nd image and the 4th image are added to the first group. A person skilled in the art can readily combine the embodiments under the two kinds of parameters, so that the 1st, 2nd, 3rd and 4th images are all determined to belong to the first group and are added to it; the concrete process is not repeated here.
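Since both branches end with the same decision of whether the first image joins the first group, the two kinds of parameter information can also be read together, as in the sketch below. The 20% preset values, the folder name and the dates again come from the running example, and all identifiers are assumptions made for illustration only.

def second_pass_grouping(similarity, capture_date, storage_path, group,
                         preset_threshold=9.0, first_preset=0.20, second_preset=0.20):
    """Re-examine an image whose first comparison result was below the threshold."""
    if capture_date in group["dates"]:
        # First kind of parameter: time condition met, lower the threshold.
        return similarity >= preset_threshold * (1 - first_preset)
    if storage_path == group["path"]:
        # Second kind of parameter: path condition met, lower the threshold.
        return similarity >= preset_threshold * (1 - second_preset)
    return False

first_group = {"dates": {"2014-01-01"}, "path": "first_folder"}
# 4th image: wrong date but right folder, similarity 7.55 -> still joins the group.
print(second_pass_grouping(7.55, "2014-02-07", "first_folder", first_group))  # True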
Embodiment two
Referring to Fig. 7, based on the same inventive concept, an embodiment of the present application also provides an electronic equipment, which specifically comprises:
First obtaining unit 10, configured to obtain a first face similarity degree value between a first image containing first face information and a preset image containing preset face information;
Comparison unit 20, configured to compare the first face similarity degree value with a first preset face similarity degree value, obtaining a first comparison result;
Second obtaining unit 30, configured to obtain at least one piece of parameter information when the first comparison result is that the first face similarity degree value is less than the first preset face similarity degree value;
Determining unit 40, configured to determine, based on the first face similarity degree value and the at least one piece of parameter information, that the group to which the first image belongs is a first group;
Adding unit 50, configured to add the first image to the first group.
Optionally, the first obtaining unit 10 specifically comprises:
Key point obtaining module, configured to obtain N key point information items of the first image and N preset key point information items of the preset image, N being an integer greater than or equal to 1;
Key point comparison module, configured to compare the N key point information items with the N preset key point information items, obtaining N key point comparison results;
Similarity obtaining module, configured to obtain, based on the N key point comparison results, the first face similarity degree value between the first image and the preset image (an illustrative sketch of this computation follows).
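Purely for readability, the cooperation of the three modules above could be imagined as the following Python sketch. The application does not specify how key points are compared or how the comparison results are turned into a similarity degree value, so the Euclidean key-point distance and the 0-10 scoring rule used here are assumptions, not part of the claimed method.

import math

def face_similarity_degree(key_points, preset_key_points):
    """Compare N key points with N preset key points and return a 0-10 score."""
    if not (len(key_points) == len(preset_key_points) >= 1):
        raise ValueError("need N >= 1 key points for both images")
    # One comparison result per key point pair (here: Euclidean distance).
    comparisons = [math.dist(p, q) for p, q in zip(key_points, preset_key_points)]
    mean_distance = sum(comparisons) / len(comparisons)
    # Map the mean distance to a similarity degree value; smaller distance -> higher score.
    return max(0.0, 10.0 - mean_distance)

# Identical key points give the maximum similarity degree value of 10.
print(face_similarity_degree([(0, 0), (1, 2)], [(0, 0), (1, 2)]))  # 10.0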
Optionally, when the first comparison result is that the first face similarity degree value is greater than or equal to the first preset face similarity degree value, the step of adding the first image to the first group is performed.
Optionally, the determining unit 40 specifically comprises:
First judging module, configured to judge, when the at least one parameter is specifically the image capturing time of the first image, whether the image capturing time meets the preset time condition corresponding to the first group, obtaining a first judgment result;
First adjusting module, configured to adjust the first preset face similarity degree value by a first preset value when the first judgment result is yes, obtaining a first preset face similarity degree adjusted value;
First comparison module, configured to compare whether the first face similarity degree value is greater than or equal to the first preset face similarity degree adjusted value, obtaining a second comparison result;
First determination module, configured to determine, when the second comparison result is yes, that the first image belongs to the first group, wherein the face similarity degree values of the images contained in the first group are all greater than or equal to the first preset face similarity degree adjusted value obtained by adjusting with the first preset value.
Optionally, the determining unit 40 specifically comprises:
First judging module, configured to judge, when the at least one parameter is specifically the image capturing time of the first image, whether the image capturing time meets the preset time condition corresponding to the first group, obtaining a first judgment result;
First adjusting module, configured to adjust the first face similarity degree value by a first preset value when the first judgment result is yes, obtaining a first face similarity degree adjusted value;
First comparison module, configured to compare whether the first face similarity degree adjusted value is greater than or equal to the first preset face similarity degree value, obtaining a second comparison result;
First determination module, configured to determine, when the second comparison result is yes, that the first image belongs to the first group, wherein the face similarity degree adjusted values, adjusted based on the first preset value, of the images contained in the first group are all greater than or equal to the first preset face similarity degree value.
Optionally, the determining unit 40 specifically comprises:
Second judging module, configured to judge, when the at least one parameter is specifically the image capturing time of the first image and the first storage path information of the first image, whether the image capturing time meets the preset time condition corresponding to the first group, obtaining a first judgment result;
First acquisition module, configured to obtain the first storage path information when the first judgment result is no;
Third judging module, configured to judge whether the first storage path information meets the preset storage path condition corresponding to the first group, obtaining a second judgment result;
Second adjusting module, configured to adjust the first preset face similarity degree value by a second preset value when the second judgment result is yes, obtaining a first preset face similarity degree adjusted value;
Second comparison module, configured to compare whether the first face similarity degree value is greater than or equal to the first preset face similarity degree adjusted value, obtaining a third comparison result;
Second determination module, configured to determine, when the third comparison result is yes, that the first image belongs to the first group, wherein the face similarity degree values of the images contained in the first group are all greater than or equal to the first preset face similarity degree adjusted value obtained by adjusting with the second preset value.
Optionally, the determining unit 40 specifically comprises:
Second judging module, configured to judge, when the at least one parameter is specifically the image capturing time of the first image and the first storage path information of the first image, whether the image capturing time meets the preset time condition corresponding to the first group, obtaining a first judgment result;
First acquisition module, configured to obtain the first storage path information when the first judgment result is no;
Third judging module, configured to judge whether the first storage path information meets the preset storage path condition corresponding to the first group, obtaining a second judgment result;
Second adjusting module, configured to adjust the first face similarity degree value by a second preset value when the second judgment result is yes, obtaining a first face similarity degree adjusted value;
Second comparison module, configured to compare whether the first face similarity degree adjusted value is greater than or equal to the first preset face similarity degree value, obtaining a third comparison result;
Second determination module, configured to determine, when the third comparison result is yes, that the first image belongs to the first group, wherein the face similarity degree adjusted values, adjusted based on the second preset value, of the images contained in the first group are all greater than or equal to the first preset face similarity degree value.
Since the above electronic equipment corresponds one-to-one with the aforementioned information processing method, it is not described again here.
Through one or more of the technical schemes in the embodiments of the present application, one or more of the following technical effects can be achieved:
First, according to the technical schemes in the embodiments of the present application, face recognition is first used to perform a first screening and grouping of the images newly imported by the user, and a multi-condition judgment is then used to perform a second screening of the images that were not grouped in the first pass. In the prior art a mobile phone cannot group images according to facial features, so the electronic equipment can only group images in response to the user's manual grouping operations. The technical schemes therefore effectively solve the technical problem in the prior art that photos cannot be grouped automatically according to facial features, and achieve the technical effect that the electronic equipment groups photos automatically according to facial features.
Second, according to the technical schemes in the embodiments of the present application, face recognition is first used to perform a first screening and grouping of the newly imported images, and a multi-condition judgment is then used to perform a second screening of the images that were not grouped. In the prior art the electronic equipment can only group images in response to the user's manual grouping operations, which is cumbersome and time-consuming; the technical schemes effectively solve the problem of poor user experience and thereby achieve the effect of improving the user experience.
Third, according to the technical schemes in the embodiments of the present application, face recognition is first used to perform a first screening and grouping of the newly imported images, and a multi-condition judgment is then used to perform a second screening of the images that were not grouped. Compared with the prior art, in which the electronic equipment has to respond repeatedly to the user's manual operations, increasing its power consumption and ultimately shortening its service life, the technical schemes effectively solve the technical problems of high power consumption and short service life of electronic equipment, and achieve the technical effects of reducing the power consumption and extending the service life of the electronic equipment.
Those skilled in the art should understand that the embodiments of the present invention may be provided as a method, a system, or a computer program product. Therefore, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical memory and the like) containing computer-usable program code.
The present invention is described with reference to flowcharts and/or block diagrams of the method, the device (system) and the computer program product according to the embodiments of the present invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor or other programmable data processing device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing device produce an apparatus for realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing device to operate in a specific manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus that realizes the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or other programmable data processing device, so that a series of operation steps are performed on the computer or other programmable device to produce a computer-implemented process, whereby the instructions executed on the computer or other programmable device provide steps for realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Specifically, the computer program instructions corresponding to the information processing method in the embodiments of the present application may be stored on a storage medium such as an optical disc, a hard disk or a USB flash drive. When the computer program instructions in the storage medium corresponding to the information processing method are read or executed by an electronic equipment, the following steps are performed: obtaining a first face similarity degree value between a first image containing first face information and a preset image containing preset face information; comparing the first face similarity degree value with a first preset face similarity degree value, obtaining a first comparison result; obtaining at least one piece of parameter information when the first comparison result is that the first face similarity degree value is less than the first preset face similarity degree value; determining, based on the first face similarity degree value and the at least one piece of parameter information, that the group to which the first image belongs is a first group; and adding the first image to the first group.
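Read as ordinary code, the five steps just listed correspond to the flow sketched below. The dictionary fields, the 9.0 threshold and the 20% preset values are illustrative assumptions rather than values required by the application; the second pass shown here uses the variant that raises the similarity value and re-compares it.

def process_new_image(image, group, preset_threshold=9.0,
                      first_preset=0.20, second_preset=0.20):
    """Obtain similarity, compare, obtain parameters, determine the group, add."""
    similarity = image["first_face_similarity"]            # step 1 (already computed)
    if similarity >= preset_threshold:                     # step 2: first comparison
        group["images"].append(image)                      # step 5: add to the group
        return True
    # Step 3: obtain at least one piece of parameter information.
    time_ok = image["capture_date"] in group["dates"]
    path_ok = image["storage_path"] == group["path"]
    # Step 4: determine the group by raising the similarity value and re-comparing.
    if time_ok:
        adjusted = similarity * (1 + first_preset)
    elif path_ok:
        adjusted = similarity * (1 + second_preset)
    else:
        return False
    if adjusted >= preset_threshold:
        group["images"].append(image)                      # step 5
        return True
    return False

first_group = {"images": [], "dates": {"2014-01-01"}, "path": "first_folder"}
fourth_image = {"first_face_similarity": 7.55, "capture_date": "2014-02-07",
                "storage_path": "first_folder"}
print(process_new_image(fourth_image, first_group))        # True: 7.55 * 1.2 >= 9.0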
Optionally, the computer instructions stored in the storage medium that correspond to the step of obtaining the first face similarity degree value between the first image containing the first face information and the preset image containing the preset face information, when actually executed, specifically comprise the following steps:
Obtaining N key point information items of the first image and N preset key point information items of the preset image, N being an integer greater than or equal to 1;
Comparing the N key point information items with the N preset key point information items, obtaining N key point comparison results;
Obtaining, based on the N key point comparison results, the first face similarity degree value between the first image and the preset image.
Optionally, the computer instructions stored in the storage medium that correspond to the step performed when the first comparison result is that the first face similarity degree value is greater than or equal to the first preset face similarity degree value, when actually executed, specifically comprise the step of adding the first image to the first group.
Optionally, when the at least one parameter is specifically the image capturing time of the first image, the computer instructions stored in the storage medium that correspond to the step of determining, based on the first face similarity degree value and the at least one piece of parameter information, that the group to which the first image belongs is the first group, when actually executed, specifically comprise the following steps:
Judging whether the image capturing time meets the preset time condition corresponding to the first group, obtaining a first judgment result;
When the first judgment result is yes, adjusting the first preset face similarity degree value by a first preset value, obtaining a first preset face similarity degree adjusted value;
Comparing whether the first face similarity degree value is greater than or equal to the first preset face similarity degree adjusted value, obtaining a second comparison result;
When the second comparison result is yes, determining that the first image belongs to the first group, wherein the face similarity degree values of the images contained in the first group are all greater than or equal to the first preset face similarity degree adjusted value obtained by adjusting with the first preset value.
Optionally, when the at least one parameter is specifically the image capturing time of the first image, the computer instructions stored in the storage medium that correspond to the step of determining, based on the first face similarity degree value and the at least one piece of parameter information, that the group to which the first image belongs is the first group, when actually executed, specifically comprise the following steps:
Judging whether the image capturing time meets the preset time condition corresponding to the first group, obtaining a first judgment result;
When the first judgment result is yes, adjusting the first face similarity degree value by a first preset value, obtaining a first face similarity degree adjusted value;
Comparing whether the first face similarity degree adjusted value is greater than or equal to the first preset face similarity degree value, obtaining a second comparison result;
When the second comparison result is yes, determining that the first image belongs to the first group, wherein the face similarity degree adjusted values, adjusted based on the first preset value, of the images contained in the first group are all greater than or equal to the first preset face similarity degree value.
Optionally, when the at least one parameter is specifically the image capturing time of the first image and the first storage path information of the first image, the computer instructions stored in the storage medium that correspond to the step of determining, based on the first face similarity degree value and the at least one piece of parameter information, that the group to which the first image belongs is the first group, when actually executed, specifically comprise the following steps:
Judging whether the image capturing time meets the preset time condition corresponding to the first group, obtaining a first judgment result;
When the first judgment result is no, obtaining the first storage path information;
Judging whether the first storage path information meets the preset storage path condition corresponding to the first group, obtaining a second judgment result;
When the second judgment result is yes, adjusting the first preset face similarity degree value by a second preset value, obtaining a first preset face similarity degree adjusted value;
Comparing whether the first face similarity degree value is greater than or equal to the first preset face similarity degree adjusted value, obtaining a third comparison result;
When the third comparison result is yes, determining that the first image belongs to the first group, wherein the face similarity degree values of the images contained in the first group are all greater than or equal to the first preset face similarity degree adjusted value obtained by adjusting with the second preset value.
Optionally, when the at least one parameter is specifically the image capturing time of the first image and the first storage path information of the first image, the computer instructions stored in the storage medium that correspond to the step of determining, based on the first face similarity degree value and the at least one piece of parameter information, that the group to which the first image belongs is the first group, when actually executed, specifically comprise the following steps:
Judging whether the image capturing time meets the preset time condition corresponding to the first group, obtaining a first judgment result;
When the first judgment result is no, obtaining the first storage path information;
Judging whether the first storage path information meets the preset storage path condition corresponding to the first group, obtaining a second judgment result;
When the second judgment result is yes, adjusting the first face similarity degree value by a second preset value, obtaining a first face similarity degree adjusted value;
Comparing whether the first face similarity degree adjusted value is greater than or equal to the first preset face similarity degree value, obtaining a third comparison result;
When the third comparison result is yes, determining that the first image belongs to the first group, wherein the face similarity degree adjusted values, adjusted based on the second preset value, of the images contained in the first group are all greater than or equal to the first preset face similarity degree value.
Although the preferred embodiments of the present invention have been described, those skilled in the art can make additional changes and modifications to these embodiments once they learn of the basic inventive concept. Therefore, the appended claims are intended to be construed as including the preferred embodiments and all changes and modifications that fall within the scope of the present invention.
Obviously, those skilled in the art can make various changes and variations to the present invention without departing from the spirit and scope of the present invention. Thus, if these modifications and variations of the present invention fall within the scope of the claims of the present invention and their technical equivalents, the present invention is also intended to include these changes and variations.

Claims (14)

1. An information processing method, applied to an electronic equipment, the method comprising:
Obtaining a first face similarity degree value between a first image containing first face information and a preset image containing preset face information;
Comparing the first face similarity degree value with a first preset face similarity degree value, obtaining a first comparison result;
Obtaining at least one piece of parameter information when the first comparison result is that the first face similarity degree value is less than the first preset face similarity degree value;
Determining, based on the first face similarity degree value and the at least one piece of parameter information, that the group to which the first image belongs is a first group;
Adding the first image to the first group.
2. The method as claimed in claim 1, characterized in that obtaining the first face similarity degree value between the first image containing the first face information and the preset image containing the preset face information specifically comprises:
Obtaining N key point information items of the first image and N preset key point information items of the preset image, N being an integer greater than or equal to 1;
Comparing the N key point information items with the N preset key point information items, obtaining N key point comparison results;
Obtaining, based on the N key point comparison results, the first face similarity degree value between the first image and the preset image.
3. The method as claimed in claim 2, characterized in that, when the first comparison result is that the first face similarity degree value is greater than or equal to the first preset face similarity degree value, the step of adding the first image to the first group is performed.
4. The method as claimed in any one of claims 1 to 3, characterized in that, when the at least one parameter is specifically the image capturing time of the first image, determining, based on the first face similarity degree value and the at least one piece of parameter information, that the group to which the first image belongs is the first group specifically comprises:
Judging whether the image capturing time meets the preset time condition corresponding to the first group, obtaining a first judgment result;
When the first judgment result is yes, adjusting the first preset face similarity degree value by a first preset value, obtaining a first preset face similarity degree adjusted value;
Comparing whether the first face similarity degree value is greater than or equal to the first preset face similarity degree adjusted value, obtaining a second comparison result;
When the second comparison result is yes, determining that the first image belongs to the first group, wherein the face similarity degree values of the images contained in the first group are all greater than or equal to the first preset face similarity degree adjusted value obtained by adjusting with the first preset value.
5. The method as claimed in any one of claims 1 to 3, characterized in that, when the at least one parameter is specifically the image capturing time of the first image, determining, based on the first face similarity degree value and the at least one piece of parameter information, that the group to which the first image belongs is the first group specifically comprises:
Judging whether the image capturing time meets the preset time condition corresponding to the first group, obtaining a first judgment result;
When the first judgment result is yes, adjusting the first face similarity degree value by a first preset value, obtaining a first face similarity degree adjusted value;
Comparing whether the first face similarity degree adjusted value is greater than or equal to the first preset face similarity degree value, obtaining a second comparison result;
When the second comparison result is yes, determining that the first image belongs to the first group, wherein the face similarity degree adjusted values, adjusted based on the first preset value, of the images contained in the first group are all greater than or equal to the first preset face similarity degree value.
6. The method as claimed in any one of claims 1 to 3, characterized in that, when the at least one parameter is specifically the image capturing time of the first image and the first storage path information of the first image, determining, based on the first face similarity degree value and the at least one piece of parameter information, that the group to which the first image belongs is the first group specifically comprises:
Judging whether the image capturing time meets the preset time condition corresponding to the first group, obtaining a first judgment result;
When the first judgment result is no, obtaining the first storage path information;
Judging whether the first storage path information meets the preset storage path condition corresponding to the first group, obtaining a second judgment result;
When the second judgment result is yes, adjusting the first preset face similarity degree value by a second preset value, obtaining a first preset face similarity degree adjusted value;
Comparing whether the first face similarity degree value is greater than or equal to the first preset face similarity degree adjusted value, obtaining a third comparison result;
When the third comparison result is yes, determining that the first image belongs to the first group, wherein the face similarity degree values of the images contained in the first group are all greater than or equal to the first preset face similarity degree adjusted value obtained by adjusting with the second preset value.
7. The method as claimed in any one of claims 1 to 3, characterized in that, when the at least one parameter is specifically the image capturing time of the first image and the first storage path information of the first image, determining, based on the first face similarity degree value and the at least one piece of parameter information, that the group to which the first image belongs is the first group specifically comprises:
Judging whether the image capturing time meets the preset time condition corresponding to the first group, obtaining a first judgment result;
When the first judgment result is no, obtaining the first storage path information;
Judging whether the first storage path information meets the preset storage path condition corresponding to the first group, obtaining a second judgment result;
When the second judgment result is yes, adjusting the first face similarity degree value by a second preset value, obtaining a first face similarity degree adjusted value;
Comparing whether the first face similarity degree adjusted value is greater than or equal to the first preset face similarity degree value, obtaining a third comparison result;
When the third comparison result is yes, determining that the first image belongs to the first group, wherein the face similarity degree adjusted values, adjusted based on the second preset value, of the images contained in the first group are all greater than or equal to the first preset face similarity degree value.
8. An electronic equipment, comprising:
First obtaining unit, configured to obtain a first face similarity degree value between a first image containing first face information and a preset image containing preset face information;
Comparison unit, configured to compare the first face similarity degree value with a first preset face similarity degree value, obtaining a first comparison result;
Second obtaining unit, configured to obtain at least one piece of parameter information when the first comparison result is that the first face similarity degree value is less than the first preset face similarity degree value;
Determining unit, configured to determine, based on the first face similarity degree value and the at least one piece of parameter information, that the group to which the first image belongs is a first group;
Adding unit, configured to add the first image to the first group.
9. The electronic equipment as claimed in claim 8, characterized in that the first obtaining unit specifically comprises:
Key point obtaining module, configured to obtain N key point information items of the first image and N preset key point information items of the preset image, N being an integer greater than or equal to 1;
Key point comparison module, configured to compare the N key point information items with the N preset key point information items, obtaining N key point comparison results;
Similarity obtaining module, configured to obtain, based on the N key point comparison results, the first face similarity degree value between the first image and the preset image.
10. The electronic equipment as claimed in claim 9, characterized in that, when the first comparison result is that the first face similarity degree value is greater than or equal to the first preset face similarity degree value, the step of adding the first image to the first group is performed.
11. The electronic equipment as claimed in any one of claims 8 to 10, characterized in that the determining unit specifically comprises:
First judging module, configured to judge, when the at least one parameter is specifically the image capturing time of the first image, whether the image capturing time meets the preset time condition corresponding to the first group, obtaining a first judgment result;
First adjusting module, configured to adjust the first preset face similarity degree value by a first preset value when the first judgment result is yes, obtaining a first preset face similarity degree adjusted value;
First comparison module, configured to compare whether the first face similarity degree value is greater than or equal to the first preset face similarity degree adjusted value, obtaining a second comparison result;
First determination module, configured to determine, when the second comparison result is yes, that the first image belongs to the first group, wherein the face similarity degree values of the images contained in the first group are all greater than or equal to the first preset face similarity degree adjusted value obtained by adjusting with the first preset value.
12. The electronic equipment as claimed in any one of claims 8 to 10, characterized in that the determining unit specifically comprises:
First judging module, configured to judge, when the at least one parameter is specifically the image capturing time of the first image, whether the image capturing time meets the preset time condition corresponding to the first group, obtaining a first judgment result;
First adjusting module, configured to adjust the first face similarity degree value by a first preset value when the first judgment result is yes, obtaining a first face similarity degree adjusted value;
First comparison module, configured to compare whether the first face similarity degree adjusted value is greater than or equal to the first preset face similarity degree value, obtaining a second comparison result;
First determination module, configured to determine, when the second comparison result is yes, that the first image belongs to the first group, wherein the face similarity degree adjusted values, adjusted based on the first preset value, of the images contained in the first group are all greater than or equal to the first preset face similarity degree value.
13. The electronic equipment as claimed in any one of claims 8 to 10, characterized in that the determining unit specifically comprises:
Second judging module, configured to judge, when the at least one parameter is specifically the image capturing time of the first image and the first storage path information of the first image, whether the image capturing time meets the preset time condition corresponding to the first group, obtaining a first judgment result;
First acquisition module, configured to obtain the first storage path information when the first judgment result is no;
Third judging module, configured to judge whether the first storage path information meets the preset storage path condition corresponding to the first group, obtaining a second judgment result;
Second adjusting module, configured to adjust the first preset face similarity degree value by a second preset value when the second judgment result is yes, obtaining a first preset face similarity degree adjusted value;
Second comparison module, configured to compare whether the first face similarity degree value is greater than or equal to the first preset face similarity degree adjusted value, obtaining a third comparison result;
Second determination module, configured to determine, when the third comparison result is yes, that the first image belongs to the first group, wherein the face similarity degree values of the images contained in the first group are all greater than or equal to the first preset face similarity degree adjusted value obtained by adjusting with the second preset value.
14. The electronic equipment as claimed in any one of claims 8 to 10, characterized in that the determining unit specifically comprises:
Second judging module, configured to judge, when the at least one parameter is specifically the image capturing time of the first image and the first storage path information of the first image, whether the image capturing time meets the preset time condition corresponding to the first group, obtaining a first judgment result;
First acquisition module, configured to obtain the first storage path information when the first judgment result is no;
Third judging module, configured to judge whether the first storage path information meets the preset storage path condition corresponding to the first group, obtaining a second judgment result;
Second adjusting module, configured to adjust the first face similarity degree value by a second preset value when the second judgment result is yes, obtaining a first face similarity degree adjusted value;
Second comparison module, configured to compare whether the first face similarity degree adjusted value is greater than or equal to the first preset face similarity degree value, obtaining a third comparison result;
Second determination module, configured to determine, when the third comparison result is yes, that the first image belongs to the first group, wherein the face similarity degree adjusted values, adjusted based on the second preset value, of the images contained in the first group are all greater than or equal to the first preset face similarity degree value.
CN201410642745.8A 2014-11-11 2014-11-11 A kind of information processing method and electronic equipment Active CN105654101B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410642745.8A CN105654101B (en) 2014-11-11 2014-11-11 A kind of information processing method and electronic equipment


Publications (2)

Publication Number Publication Date
CN105654101A true CN105654101A (en) 2016-06-08
CN105654101B CN105654101B (en) 2019-04-26

Family

ID=56478837

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410642745.8A Active CN105654101B (en) 2014-11-11 2014-11-11 A kind of information processing method and electronic equipment

Country Status (1)

Country Link
CN (1) CN105654101B (en)



Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TW200506660A (en) * 2003-08-15 2005-02-16 Inventec Multimedia & Telecom Digital image classifying and indexing system based on image content and method thereof
CN101960481A (en) * 2008-01-02 2011-01-26 雅虎公司 Method and system for managing digital photos
CN101859367A (en) * 2009-04-07 2010-10-13 北京算通数字技术研究中心有限公司 Digital photo sorting method, device and application system thereof
CN103136533A (en) * 2011-11-28 2013-06-05 汉王科技股份有限公司 Face recognition method and device based on dynamic threshold value
CN103207870A (en) * 2012-01-17 2013-07-17 华为技术有限公司 Method, server, device and system for photo sort management
CN103488756A (en) * 2013-09-25 2014-01-01 深圳市金立通信设备有限公司 Picture classification method and terminal

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
乔进 et al.: "Online Recognition of Free Handwritten Digits Based on Multi-Level Classifiers" (基于多级分类器的自由手写数字在线识别), Journal of Chongqing University (《重庆大学学报》) *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106341605A (en) * 2016-09-29 2017-01-18 广东欧珀移动通信有限公司 Selfie stick length adjustment method, device and mobile terminal
CN106341605B (en) * 2016-09-29 2019-10-01 Oppo广东移动通信有限公司 A kind of self-shooting bar length adjusting method, device and mobile terminal
CN107295254A (en) * 2017-06-21 2017-10-24 深圳传音通讯有限公司 Photo processing method and photo terminal
CN107295254B (en) * 2017-06-21 2021-01-15 深圳传音通讯有限公司 Photo processing method and photographing terminal
CN114531709A (en) * 2020-11-23 2022-05-24 中国联合网络通信集团有限公司 Network switching method and device
CN115035520A (en) * 2021-11-22 2022-09-09 荣耀终端有限公司 Character recognition method for image, electronic device and storage medium

Also Published As

Publication number Publication date
CN105654101B (en) 2019-04-26


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant