CN106778500A - Method and apparatus for obtaining person physiognomy information - Google Patents

Method and apparatus for obtaining person physiognomy information

Info

Publication number
CN106778500A
Authority
CN
China
Prior art keywords
target
face
physiognomy
image
facial feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201611042298.8A
Other languages
Chinese (zh)
Other versions
CN106778500B (en)
Inventor
王倩
罗序满
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Xiaomi Mobile Software Co Ltd
Original Assignee
Beijing Xiaomi Mobile Software Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Xiaomi Mobile Software Co Ltd filed Critical Beijing Xiaomi Mobile Software Co Ltd
Priority to CN201611042298.8A priority Critical patent/CN106778500B/en
Publication of CN106778500A publication Critical patent/CN106778500A/en
Application granted granted Critical
Publication of CN106778500B publication Critical patent/CN106778500B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a method and apparatus for obtaining person physiognomy information, belonging to the technical field of image processing. The method includes: obtaining a target image; determining a face region image in the target image; determining, among multiple preset items of facial feature information, the target facial feature information that the face region image matches; determining, according to a pre-stored correspondence between facial feature information and person physiognomy information, the target person physiognomy information corresponding to the target facial feature information; and displaying the target person physiognomy information. The present invention improves the timeliness of obtaining person physiognomy information.

Description

Method and apparatus for obtaining person physiognomy information
Technical field
The present invention relates to the technical field of image processing, and in particular to a method and apparatus for obtaining person physiognomy information.
Background
At present, many websites provide the function of presenting person physiognomy information based on a face photograph, and a user can obtain person physiognomy information through these websites.
A user can upload a captured face photograph to such a website. After obtaining the photograph, the website's technical staff forward it to a professional physiognomist for assessment to obtain the corresponding person physiognomy information, which the technical staff then feed back to the user.
In the course of implementing the present invention, the inventors found that the prior art has at least the following problem:
After uploading a face photograph to the website, the user must wait for the website's technical staff to obtain the photograph and for the professional assessment to be completed before finally receiving the corresponding person physiognomy information, so the timeliness of obtaining person physiognomy information is poor.
Summary of the invention
In order to solve the problem in the prior art, embodiments of the present invention provide a method and apparatus for obtaining person physiognomy information. The technical solutions are as follows:
In a first aspect, a method for obtaining person physiognomy information is provided, the method including:
obtaining a target image, and determining a face region image in the target image;
determining, among multiple preset items of facial feature information, the target facial feature information that the face region image matches;
determining, according to a pre-stored correspondence between facial feature information and person physiognomy information, the target person physiognomy information corresponding to the target facial feature information; and
displaying the target person physiognomy information.
Optionally, determining the face region image in the target image includes:
displaying the target image, and displaying a region selection box on the target image; and
when a region selection instruction input by the user is received, determining the area of the target image located within the region selection box as the face region image.
In this way, the user can manually frame the face region image in the target image.
Optionally, determining the face region image in the target image includes:
performing face recognition on the target image to determine the face region image in the target image.
Optionally, determining, among the multiple preset items of facial feature information, the target facial feature information that the face region image matches includes:
obtaining at least one target face region image selected by the user from the face region images in the target image; and
determining, among the multiple preset items of facial feature information, the target facial feature information that each target face region image matches; and
determining, according to the pre-stored correspondence between facial feature information and person physiognomy information, the target person physiognomy information corresponding to the target facial feature information includes:
determining, according to the pre-stored correspondence between facial feature information and person physiognomy information and the target facial feature information that each target face region image matches, the target person physiognomy information corresponding to each target face region image.
In this way, the user can obtain the person physiognomy information of multiple persons in the target image at the same time.
Optionally, the number of selected target face region images is more than one;
the method further includes:
determining, among multiple preset facial feature matching conditions, the target facial feature matching condition satisfied between the multiple target face region images;
determining, according to a pre-stored correspondence between facial feature matching conditions and face association information, the target face association information corresponding to the target facial feature matching condition; and
displaying the target face association information.
In this way, the user can obtain the face association information between multiple persons in the target image.
Optionally, in the pre-stored correspondence between facial feature information and person physiognomy information, each item of person physiognomy information corresponds to multiple items of facial feature information;
and determining, according to the pre-stored correspondence between facial feature information and person physiognomy information, the target person physiognomy information corresponding to the target facial feature information includes:
for each item of person physiognomy information in the correspondence, determining the number of matches between the facial feature information corresponding to that item of person physiognomy information and the target facial feature information; and
determining the item of person physiognomy information with the largest number of matches as the target person physiognomy information corresponding to the target facial feature information.
In this way, the person physiognomy information that best matches the facial feature information can be obtained.
In a second aspect, an apparatus for obtaining person physiognomy information is provided, the apparatus including:
a first determining module, configured to obtain a target image and determine a face region image in the target image;
a second determining module, configured to determine, among multiple preset items of facial feature information, the target facial feature information that the face region image matches;
a third determining module, configured to determine, according to a pre-stored correspondence between facial feature information and person physiognomy information, the target person physiognomy information corresponding to the target facial feature information; and
a first display module, configured to display the target person physiognomy information.
Optionally, the first determining module includes:
a display unit, configured to display the target image and display a region selection box on the target image; and
a determining unit, configured to, when a region selection instruction input by the user is received, determine the area of the target image located within the region selection box as the face region image.
Optionally, the first determining module is configured to:
perform face recognition on the target image to determine the face region image in the target image.
Optionally, the second determining module includes:
an acquiring unit, configured to obtain at least one target face region image selected by the user from the face region images in the target image; and
a determining unit, configured to determine, among the multiple preset items of facial feature information, the target facial feature information that each target face region image matches; and
the third determining module is configured to:
determine, according to the pre-stored correspondence between facial feature information and person physiognomy information and the target facial feature information that each target face region image matches, the target person physiognomy information corresponding to each target face region image.
Optionally, the number of selected target face region images is more than one;
the apparatus further includes:
a fourth determining module, configured to determine, among multiple preset facial feature matching conditions, the target facial feature matching condition satisfied between the multiple target face region images;
a fifth determining module, configured to determine, according to a pre-stored correspondence between facial feature matching conditions and face association information, the target face association information corresponding to the target facial feature matching condition; and
a second display module, configured to display the target face association information.
Optionally, in the pre-stored correspondence between facial feature information and person physiognomy information, each item of person physiognomy information corresponds to multiple items of facial feature information;
and the third determining module includes:
a first determining unit, configured to determine, for each item of person physiognomy information in the correspondence, the number of matches between the facial feature information corresponding to that item of person physiognomy information and the target facial feature information; and
a second determining unit, configured to determine the item of person physiognomy information with the largest number of matches as the target person physiognomy information corresponding to the target facial feature information.
The beneficial effects of the technical solutions provided by the embodiments of the present invention are as follows:
In the embodiments of the present invention, a target image is obtained; a face region image in the target image is determined; among multiple preset items of facial feature information, the target facial feature information that the face region image matches is determined; according to a pre-stored correspondence between facial feature information and person physiognomy information, the target person physiognomy information corresponding to the target facial feature information is determined; and the target person physiognomy information is displayed. In this way, after obtaining a photograph taken by the user, the terminal can analyze the face region image in the photograph, determine the facial feature information, and then quickly determine the person physiognomy information of the face in the photograph using the correspondence between facial feature information and person physiognomy information. The user does not have to wait for a long time, which improves the timeliness of obtaining person physiognomy information.
Brief description of the drawings
In order to describe the technical solutions in the embodiments of the present invention more clearly, the accompanying drawings required for describing the embodiments are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention, and a person of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flowchart of a method for obtaining person physiognomy information according to an embodiment of the present invention;
Fig. 2 is a schematic diagram of the display of person physiognomy information according to an embodiment of the present invention;
Fig. 3 is a schematic diagram of the display of person physiognomy information according to an embodiment of the present invention;
Fig. 4 is a schematic diagram of the display of person physiognomy information according to an embodiment of the present invention;
Fig. 5 is a schematic structural diagram of an apparatus for obtaining person physiognomy information according to an embodiment of the present invention;
Fig. 6 is a schematic structural diagram of an apparatus for obtaining person physiognomy information according to an embodiment of the present invention;
Fig. 7 is a schematic structural diagram of an apparatus for obtaining person physiognomy information according to an embodiment of the present invention;
Fig. 8 is a schematic structural diagram of an apparatus for obtaining person physiognomy information according to an embodiment of the present invention;
Fig. 9 is a schematic structural diagram of an apparatus for obtaining person physiognomy information according to an embodiment of the present invention;
Fig. 10 is a schematic structural diagram of a terminal according to an embodiment of the present invention.
Detailed description of the embodiments
To make the objects, technical solutions and advantages of the present invention clearer, the embodiments of the present invention are described in further detail below with reference to the accompanying drawings.
An embodiment of the present invention provides a method for obtaining person physiognomy information, and the method is executed by a terminal. The terminal may be any mobile terminal with an image processing function, such as a mobile phone or a tablet computer. The terminal may be provided with a processor, a memory, a screen, a camera unit, a communication component and an input unit. The processor may be used to handle the process of obtaining person physiognomy information; the memory may be used to store the data needed in and generated by the following processing; the screen may be used to display the content that needs to be presented to the user in the following processing, such as the target image and the physiognomy assessment result; the camera unit may be a camera of the terminal, used to capture face images; the input unit, such as a keyboard or a microphone, may be used by the user to input instructions or configuration information to the terminal; and the communication component, such as a Bluetooth component or an antenna, may be used to transmit the data involved in the following processing to and from a server. In this embodiment, the scheme is described in detail by taking a mobile phone as an example of the terminal; other cases are similar and are not repeated here.
The processing flow shown in Fig. 1 is described in detail below with reference to specific embodiments, and the content may be as follows:
Step 101: obtain a target image, and determine a face region image in the target image.
In implementation, an application for physiognomy assessment may be installed in the terminal, and the user can use this application to perform a physiognomy assessment on a person. Specifically, the user opens the physiognomy assessment application and selects a stored portrait photograph (i.e., the target image), thereby triggering the terminal to obtain the target image, after which the terminal can determine the face region image in the target image. Alternatively, the physiognomy assessment function may be a function embedded in the terminal's camera application. Specifically, the user may take a portrait photograph with the camera application; the terminal can obtain and display the completed photograph (i.e., the target image) and at the same time display a physiognomy assessment control on the interface. The user can then tap the physiognomy assessment control, so that the terminal detects the trigger instruction acting on the control and then determines the face region image in the target image. It should be noted that the trigger instruction may be generated by a tap operation or by a long-press operation, which is not specifically limited here; the scheme is described only by taking a tap operation as an example. The physiognomy assessment control may exist in the form of a floating window, that is, whichever interface of the camera application the terminal displays, the physiognomy assessment control is always shown in the topmost layer of the interface.
Optionally, the user may manually frame the face region image. Correspondingly, part of the processing of step 101 may be as follows: displaying the target image and displaying a region selection box on the target image; and when a region selection instruction input by the user is received, determining the area of the target image located within the region selection box as the face region image.
In implementation, after obtaining the target image, the terminal may display the target image on the screen and may display a region selection box on the target image. The user can frame-select the face region image in the target image using the region selection box. The region selection box may initially be set to a face-like shape, and the user can arbitrarily change its shape and size and can move it anywhere in the target image. Further, if there are multiple face region images in the target image and the user needs to frame multiple of them, an arbitrary number of region selection boxes can be added on the target image. After the user has framed the face regions with the region selection boxes and tapped a confirm button, the terminal receives the region selection instruction input by the user and then determines the areas of the target image located within the region selection boxes as face region images. It can be understood that in this scheme the position of the region selection box may also be fixed, in which case the user completes the framing of the face region by moving the target image.
Optionally, the terminal may automatically detect the face region image in the target image. Correspondingly, part of the processing of step 101 may be as follows: performing face recognition on the target image to determine the face region image in the target image.
In implementation, after obtaining the target image, the terminal may perform face recognition on the target image based on a face recognition algorithm and can then determine the face region image in the target image. Face recognition algorithms are existing algorithms and are not described in detail here.
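The patent does not name a particular detection algorithm. As a minimal sketch of this automatic-detection branch, assuming OpenCV and its bundled Haar-cascade frontal-face model are available (the function name and parameters below are illustrative and not part of the patent):

```python
import cv2

def detect_face_region_images(target_image_path: str):
    """Detect face regions in the target image and return the cropped face region images."""
    image = cv2.imread(target_image_path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

    # Pre-trained frontal-face Haar cascade shipped with OpenCV.
    cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    detector = cv2.CascadeClassifier(cascade_path)

    # Each detection is an (x, y, w, h) bounding box, i.e. one face region in the target image.
    boxes = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return [image[y:y + h, x:x + w] for (x, y, w, h) in boxes]
```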
Step 102: among the multiple preset items of facial feature information, determine the target facial feature information that the face region image matches.
In implementation, multiple items of facial feature information may be preset for the face. Specifically, multiple different facial feature regions of the face, such as the forehead, eyes, nose and chin, may be determined; then, for each facial feature region, there may be several items of facial feature information. For example, for the forehead there may be high forehead, low forehead, wide forehead, narrow forehead, and so on, and each item of facial feature information may correspond to a physiognomy judgment criterion, for example, a wide forehead when the forehead width exceeds a preset threshold and a narrow forehead when it is below the threshold. The above settings may be made by technical staff on the server side and then sent by the server to the terminal for storage. In this way, after determining the face region image in the target image, the terminal can assess the facial feature regions in the face region image according to the preset physiognomy judgment criteria, and can thereby determine the target facial feature information that the face region image in the target image matches. It can be understood that one face region image corresponds to one person, and there may be multiple matched items of target facial feature information.
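To illustrate how such a per-region judgment criterion might be evaluated, the sketch below classifies a forehead against a width threshold. The ratio-based measurement, the 0.42 threshold and the label strings are assumptions made for illustration; the patent states only that a preset threshold separates a wide forehead from a narrow forehead.

```python
# Illustrative physiognomy judgment criterion for one facial feature region (the forehead).
# The 0.42 ratio of forehead width to face width is an arbitrary example threshold.
FOREHEAD_WIDTH_RATIO_THRESHOLD = 0.42

def classify_forehead(forehead_width_px: float, face_width_px: float) -> str:
    """Return the facial feature information item matched by the forehead region."""
    ratio = forehead_width_px / face_width_px
    return "wide forehead" if ratio > FOREHEAD_WIDTH_RATIO_THRESHOLD else "narrow forehead"

def determine_target_features(measurements: dict) -> list:
    """Collect the matched feature item for each preset facial feature region."""
    features = [classify_forehead(measurements["forehead_width"], measurements["face_width"])]
    # Analogous classifiers would cover the other preset regions (eyes, nose, chin, ...).
    return features
```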
Step 103: according to the pre-stored correspondence between facial feature information and person physiognomy information, determine the target person physiognomy information corresponding to the target facial feature information.
In implementation, the terminal may pre-store a correspondence between facial feature information and person physiognomy information. One correspondence entry may contain one item of person physiognomy information and multiple items of facial feature information for different facial feature regions; that is, the corresponding person physiognomy information may be jointly determined from the target facial feature information that one face region image matches. For example, the person physiognomy information may be "genius physiognomy", corresponding to several items of facial feature information such as "high forehead", "ox eyes", "thin eyebrows" and "wide nose bridge". This correspondence may be set by technical staff on the server side and then sent by the server to the terminal for storage. In this way, after the target facial feature information matched by the face region image in the target image is determined, the target person physiognomy information corresponding to the target facial feature information can be determined according to the above correspondence.
Optionally, in the pre-stored correspondence between facial feature information and person physiognomy information, each item of person physiognomy information corresponds to multiple items of facial feature information. Correspondingly, the processing of step 103 may be as follows: for each item of person physiognomy information in the correspondence, determining the number of matches between the facial feature information corresponding to that item of person physiognomy information and the target facial feature information; and determining the item of person physiognomy information with the largest number of matches as the target person physiognomy information corresponding to the target facial feature information.
In implementation, after the target facial feature information matched by the face region image in the target image is determined, the terminal may first determine, in the pre-stored correspondence between facial feature information and person physiognomy information, the number of matches between the facial feature information corresponding to each item of person physiognomy information and the target facial feature information, and may then determine the item of person physiognomy information with the largest number of matches as the target person physiognomy information corresponding to the target facial feature information. For example, if the target facial feature information includes items 1, 2, 3 and 4, and the correspondence contains three items of person physiognomy information A, B and C, where A corresponds to facial feature information items 1, 2, 3 and 5 (3 matches), B corresponds to items 1, 3, 5 and 6 (2 matches), and C corresponds to items 4, 6, 7 and 9 (1 match), then the target person physiognomy information can be determined as A, which has the largest number of matches.
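A minimal sketch of this maximum-match-count selection, reusing the worked example above (the dictionary layout is an assumed representation of the pre-stored correspondence; the feature identifiers 1-9 come from the example):

```python
# Assumed representation of the pre-stored correspondence:
# person physiognomy information -> the set of facial feature information items it corresponds to.
CORRESPONDENCE = {
    "A": {1, 2, 3, 5},
    "B": {1, 3, 5, 6},
    "C": {4, 6, 7, 9},
}

def select_physiognomy(target_features: set, correspondence: dict = CORRESPONDENCE) -> str:
    """Return the physiognomy item whose feature set shares the most items with the target features."""
    return max(correspondence, key=lambda label: len(correspondence[label] & target_features))

# Worked example: target facial feature information {1, 2, 3, 4} gives "A" (3 matches).
print(select_physiognomy({1, 2, 3, 4}))
```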
Step 104: display the target person physiognomy information.
In implementation, after the target person physiognomy information corresponding to the target facial feature information matched by the face region image in the target image is determined, the target person physiognomy information can be displayed; that is, while the terminal displays the target image, the target person physiognomy information is displayed below the face region image in the target image, as shown in Fig. 2.
Optionally, when the faces of at least one person are present in the target image, the user can select whose person physiognomy information to obtain. Correspondingly, the processing of steps 102 and 103 may be as follows: obtaining at least one target face region image selected by the user from the face region images in the target image; determining, among the multiple preset items of facial feature information, the target facial feature information that each target face region image matches; and determining, according to the pre-stored correspondence between facial feature information and person physiognomy information and the target facial feature information that each target face region image matches, the target person physiognomy information corresponding to each target face region image.
In implementation, when the faces of at least one person are present in the target image, the terminal can detect at least one face region image in the target image. The user can tap to select at least one person's face among the at least one face region image in the target image, thereby triggering the terminal to obtain the at least one target face region image selected by the user from the face region images in the target image. Afterwards, for each of the at least one target face region image, the facial feature regions in the target face region image can be assessed according to the preset physiognomy judgment criteria, so that the target facial feature information matched by the target face region image can be determined. Further, the target person physiognomy information corresponding to each target face region image can be determined according to the pre-stored correspondence between facial feature information and person physiognomy information and the target facial feature information matched by each target face region image. For example, if the faces of two persons A and B are present in the target image and the user selects the face region images of both A and B, the terminal can generate the person physiognomy information of A and B from their respective face region images and then display it below the face region images of A and B in the target image, as shown in Fig. 3.
Optionally, when the faces of multiple persons are present in the target image, face association information for the multiple persons can be obtained. Correspondingly, the processing may be as follows: the number of selected target face region images is more than one; determining, among multiple preset facial feature matching conditions, the target facial feature matching condition satisfied between the multiple target face region images; determining, according to a pre-stored correspondence between facial feature matching conditions and face association information, the target face association information corresponding to the target facial feature matching condition; and displaying the target face association information. A sketch of this matching step is given after this paragraph.
In implementation, when the face region images of multiple persons are present in the target image, the user can tap to select the faces of multiple persons among the multiple face region images in the target image, thereby triggering the terminal to obtain the multiple target face region images selected by the user. Technical staff on the server side may preset multiple items of face association information; face association information is information reflecting the relationship between multiple persons, generated from their facial feature information, for example couple physiognomy, sibling physiognomy or family-of-three physiognomy. Each item of face association information may correspond to a facial feature matching condition, and each facial feature matching condition is used to judge the degree of match between the facial feature information of multiple faces. For example, for couple physiognomy, the facial feature matching condition may be that the total similarity of facial feature regions such as the forehead, eyes, nose and mouth falls within a preset interval. The facial feature matching conditions may likewise be set by technical staff on the server side, and the server can send the above settings to the terminal for storage. Afterwards, the terminal can assess the facial feature regions in the multiple target face region images according to the preset physiognomy judgment criteria, thereby determining and comparing the target facial feature information matched by the multiple target face region images; it then determines, among the multiple preset facial feature matching conditions, the target facial feature matching condition satisfied between the multiple target face region images, and further determines the target face association information corresponding to the target facial feature matching condition according to the pre-stored correspondence between facial feature matching conditions and face association information. The target face association information can then be displayed below the face image regions, as shown in Fig. 4. It should be noted that, for the physiognomy assessment function, the terminal may provide a single-person physiognomy mode and an association physiognomy mode: in the single-person mode, after the user selects multiple face region images, the terminal generates multiple items of person physiognomy information separately; in the association mode, after the user selects multiple face region images, the terminal generates face association information reflecting the relationship between the multiple persons.
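As a sketch of how a facial feature matching condition might be checked for two selected faces, assuming a simple per-region agreement score as the "total similarity" and illustrative intervals; the patent specifies only that the total similarity over regions such as the forehead, eyes, nose and mouth must fall within a preset interval.

```python
# Assumed correspondence: face association information -> (lower, upper) total-similarity interval.
# Intervals and labels are illustrative; the first satisfied condition wins.
ASSOCIATION_CONDITIONS = {
    "sibling physiognomy": (0.75, 1.00),
    "couple physiognomy": (0.60, 0.75),
}

REGIONS = ("forehead", "eyes", "nose", "mouth")

def total_similarity(features_a: dict, features_b: dict) -> float:
    """Fraction of facial feature regions whose matched feature items agree between the two faces."""
    same = sum(1 for region in REGIONS if features_a[region] == features_b[region])
    return same / len(REGIONS)

def match_association(features_a: dict, features_b: dict):
    """Return the face association information whose matching condition is satisfied, if any."""
    score = total_similarity(features_a, features_b)
    for label, (low, high) in ASSOCIATION_CONDITIONS.items():
        if low <= score <= high:
            return label
    return None
```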
It should be noted that the processing of steps 101 to 104 may also be mainly implemented by a server and assisted by the terminal; that is, the process of interacting with the user is implemented by the terminal, the terminal then sends the interaction result to the server, and the server performs the processing of the other related steps based on the interaction result.
In the embodiments of the present invention, a target image is obtained; a face region image in the target image is determined; among multiple preset items of facial feature information, the target facial feature information that the face region image matches is determined; according to a pre-stored correspondence between facial feature information and person physiognomy information, the target person physiognomy information corresponding to the target facial feature information is determined; and the target person physiognomy information is displayed. In this way, after obtaining a photograph taken by the user, the terminal can analyze the face region image in the photograph, determine the facial feature information, and then quickly determine the person physiognomy information of the face in the photograph using the correspondence between facial feature information and person physiognomy information. The user does not have to wait for a long time, which improves the timeliness of obtaining person physiognomy information.
Based on the same technical concept, an embodiment of the present invention further provides an apparatus for obtaining person physiognomy information. As shown in Fig. 5, the apparatus includes:
a first determining module 501, configured to obtain a target image and determine a face region image in the target image;
a second determining module 502, configured to determine, among multiple preset items of facial feature information, the target facial feature information that the face region image matches;
a third determining module 503, configured to determine, according to a pre-stored correspondence between facial feature information and person physiognomy information, the target person physiognomy information corresponding to the target facial feature information; and
a first display module 504, configured to display the target person physiognomy information.
Optionally, as shown in Fig. 6, the first determining module 501 includes:
a display unit 5011, configured to display the target image and display a region selection box on the target image; and
a determining unit 5012, configured to, when a region selection instruction input by the user is received, determine the area of the target image located within the region selection box as the face region image.
Optionally, the first determining module 501 is configured to:
perform face recognition on the target image to determine the face region image in the target image.
Optionally, as shown in Fig. 7, the second determining module 502 includes:
an acquiring unit 5021, configured to obtain at least one target face region image selected by the user from the face region images in the target image; and
a determining unit 5022, configured to determine, among the multiple preset items of facial feature information, the target facial feature information that each target face region image matches; and
the third determining module 503 is configured to:
determine, according to the pre-stored correspondence between facial feature information and person physiognomy information and the target facial feature information that each target face region image matches, the target person physiognomy information corresponding to each target face region image.
Optionally, the number of selected target face region images is more than one;
as shown in Fig. 8, the apparatus further includes:
a fourth determining module 505, configured to determine, among multiple preset facial feature matching conditions, the target facial feature matching condition satisfied between the multiple target face region images;
a fifth determining module 506, configured to determine, according to a pre-stored correspondence between facial feature matching conditions and face association information, the target face association information corresponding to the target facial feature matching condition; and
a second display module 507, configured to display the target face association information.
Optionally, in the pre-stored correspondence between facial feature information and person physiognomy information, each item of person physiognomy information corresponds to multiple items of facial feature information;
as shown in Fig. 9, the third determining module 503 includes:
a first determining unit 5031, configured to determine, for each item of person physiognomy information in the correspondence, the number of matches between the facial feature information corresponding to that item of person physiognomy information and the target facial feature information; and
a second determining unit 5032, configured to determine the item of person physiognomy information with the largest number of matches as the target person physiognomy information corresponding to the target facial feature information.
In the embodiments of the present invention, a target image is obtained; a face region image in the target image is determined; among multiple preset items of facial feature information, the target facial feature information that the face region image matches is determined; according to a pre-stored correspondence between facial feature information and person physiognomy information, the target person physiognomy information corresponding to the target facial feature information is determined; and the target person physiognomy information is displayed. In this way, after obtaining a photograph taken by the user, the terminal can analyze the face region image in the photograph, determine the facial feature information, and then quickly determine the person physiognomy information of the face in the photograph using the correspondence between facial feature information and person physiognomy information. The user does not have to wait for a long time, which improves the timeliness of obtaining person physiognomy information.
It should be noted that, when the apparatus for obtaining person physiognomy information provided by the above embodiment obtains person physiognomy information, the division into the above functional modules is only used as an example. In practical applications, the above functions may be allocated to different functional modules as needed; that is, the internal structure of the apparatus may be divided into different functional modules to complete all or part of the functions described above. In addition, the apparatus for obtaining person physiognomy information provided by the above embodiment belongs to the same concept as the method embodiment for obtaining person physiognomy information; for its specific implementation process, reference is made to the method embodiment, which is not repeated here.
Fig. 10 is a schematic structural diagram of a terminal according to another exemplary embodiment of the present disclosure. The terminal may be a mobile phone or the like.
Referring to Fig. 10, the terminal 1000 may include one or more of the following components: a processing component 1002, a memory 1004, a power component 1007, a multimedia component 1008, an audio component 1010, an input/output (I/O) interface 1012, a sensor component 1014, and a communication component 1017.
The processing component 1002 typically controls the overall operation of the terminal 1000, such as operations associated with display, telephone calls, data communication, camera operations and recording operations. The processing component 1002 may include one or more processors 1020 to execute instructions so as to perform all or part of the steps of the above method. In addition, the processing component 1002 may include one or more modules that facilitate interaction between the processing component 1002 and other components. For example, the processing component 1002 may include a multimedia module to facilitate interaction between the multimedia component 1008 and the processing component 1002.
The memory 1004 is configured to store various types of data to support operation of the terminal 1000. Examples of such data include instructions for any application or method operated on the terminal 1000, contact data, phonebook data, messages, pictures, videos, and the like. The memory 1004 may be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as a static random access memory (SRAM), an electrically erasable programmable read-only memory (EEPROM), an erasable programmable read-only memory (EPROM), a programmable read-only memory (PROM), a read-only memory (ROM), a magnetic memory, a flash memory, a magnetic disk or an optical disc.
The power component 1007 supplies power to the various components of the terminal 1000. The power component 1007 may include a power management system, one or more power supplies, and other components associated with generating, managing and distributing power for the terminal 1000.
The multimedia component 1008 includes a screen providing an output interface between the terminal 1000 and the user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or swipe action but also detect the duration and pressure associated with the touch or swipe operation. In some embodiments, the multimedia component 1008 includes a front camera and/or a rear camera. When the terminal 1000 is in an operating mode, such as a photographing mode or a video mode, the front camera and/or the rear camera can receive external multimedia data. Each front camera and rear camera may be a fixed optical lens system or have focus and optical zoom capability.
The audio component 1010 is configured to output and/or input audio signals. For example, the audio component 1010 includes a microphone (MIC), which is configured to receive external audio signals when the terminal 1000 is in an operating mode, such as a call mode, a recording mode or a speech recognition mode. The received audio signals may be further stored in the memory 1004 or transmitted via the communication component 1017.
The I/O interface 1012 provides an interface between the processing component 1002 and peripheral interface modules, which may be a keyboard, a click wheel, buttons, and the like. These buttons may include, but are not limited to, a home button, a volume button, a start button and a lock button.
The sensor component 1014 includes one or more sensors for providing status assessments of various aspects of the terminal 1000. For example, the sensor component 1014 may detect the on/off state of the terminal 1000 and the relative positioning of components, such as the display and keypad of the terminal 1000; the sensor component 1014 may also detect a change in position of the terminal 1000 or of a component of the terminal 1000, the presence or absence of user contact with the terminal 1000, the orientation or acceleration/deceleration of the terminal 1000, and a change in temperature of the terminal 1000. The sensor component 1014 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor component 1014 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 1014 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor or a temperature sensor.
The communication component 1017 is configured to facilitate wired or wireless communication between the terminal 1000 and other devices. The terminal 1000 can access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component 1017 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 1017 also includes a near field communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on radio frequency identification (RFID) technology, Infrared Data Association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology and other technologies.
In an exemplary embodiment, the terminal 1000 may be implemented by one or more application-specific integrated circuits (ASIC), digital signal processors (DSP), digital signal processing devices (DSPD), programmable logic devices (PLD), field-programmable gate arrays (FPGA), controllers, microcontrollers, microprocessors or other electronic components, for performing the above method.
In an exemplary embodiment, a non-transitory computer-readable storage medium including instructions is also provided, for example the memory 1004 including instructions, and the above instructions may be executed by the processor 1020 of the terminal 1000 to complete the above method. For example, the non-transitory computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
A non-transitory computer-readable storage medium is provided; when the instructions in the storage medium are executed by a processor of a terminal, the terminal is enabled to perform a method for obtaining person physiognomy information, the method including:
obtaining a target image, and determining a face region image in the target image;
determining, among multiple preset items of facial feature information, the target facial feature information that the face region image matches;
determining, according to a pre-stored correspondence between facial feature information and person physiognomy information, the target person physiognomy information corresponding to the target facial feature information; and
displaying the target person physiognomy information.
Optionally, determining the face region image in the target image includes:
displaying the target image, and displaying a region selection box on the target image; and
when a region selection instruction input by the user is received, determining the area of the target image located within the region selection box as the face region image.
Optionally, determining the face region image in the target image includes:
performing face recognition on the target image to determine the face region image in the target image.
Optionally, determining, among the multiple preset items of facial feature information, the target facial feature information that the face region image matches includes:
obtaining at least one target face region image selected by the user from the face region images in the target image; and
determining, among the multiple preset items of facial feature information, the target facial feature information that each target face region image matches; and
determining, according to the pre-stored correspondence between facial feature information and person physiognomy information, the target person physiognomy information corresponding to the target facial feature information includes:
determining, according to the pre-stored correspondence between facial feature information and person physiognomy information and the target facial feature information that each target face region image matches, the target person physiognomy information corresponding to each target face region image.
Optionally, the number of selected target face region images is more than one;
the method further includes:
determining, among multiple preset facial feature matching conditions, the target facial feature matching condition satisfied between the multiple target face region images;
determining, according to a pre-stored correspondence between facial feature matching conditions and face association information, the target face association information corresponding to the target facial feature matching condition; and
displaying the target face association information.
Optionally, in the pre-stored correspondence between facial feature information and person physiognomy information, each item of person physiognomy information corresponds to multiple items of facial feature information;
and determining, according to the pre-stored correspondence between facial feature information and person physiognomy information, the target person physiognomy information corresponding to the target facial feature information includes:
for each item of person physiognomy information in the correspondence, determining the number of matches between the facial feature information corresponding to that item of person physiognomy information and the target facial feature information; and
determining the item of person physiognomy information with the largest number of matches as the target person physiognomy information corresponding to the target facial feature information.
In the embodiments of the present invention, a target image is obtained; a face region image in the target image is determined; among multiple preset items of facial feature information, the target facial feature information that the face region image matches is determined; according to a pre-stored correspondence between facial feature information and person physiognomy information, the target person physiognomy information corresponding to the target facial feature information is determined; and the target person physiognomy information is displayed. In this way, after obtaining a photograph taken by the user, the terminal can analyze the face region image in the photograph, determine the facial feature information, and then quickly determine the person physiognomy information of the face in the photograph using the correspondence between facial feature information and person physiognomy information. The user does not have to wait for a long time, which improves the timeliness of obtaining person physiognomy information.
A person of ordinary skill in the art will understand that all or part of the steps of the above embodiments may be implemented by hardware, or may be implemented by a program instructing the relevant hardware; the program may be stored in a computer-readable storage medium, and the storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disc, or the like.
The above descriptions are only preferred embodiments of the present invention and are not intended to limit the present invention. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present invention shall be included within the scope of protection of the present invention.

Claims (13)

1. it is a kind of obtain people's object plane phase information method, it is characterised in that methods described includes:
Target image is obtained, the human face region image in the target image is determined;
In default multiple face characteristic informations, the target face characteristic information that the human face region image meets is determined;
According to the face characteristic information and the corresponding relation of people's object plane phase information that prestore, the target face characteristic letter is determined Cease corresponding target person object plane phase information;
The target person object plane phase information is shown.
2. method according to claim 1, it is characterised in that the human face region figure in the determination target image Picture, including:
Display target image, and the viewing area choice box in the target image;
When the regional choice for receiving user input is instructed, the target image is located at the region in the regional choice frame Image, is defined as human face region image.
3. method according to claim 1, it is characterised in that the human face region figure in the determination target image Picture, including:
Recognition of face is carried out to the target image, the human face region image in the target image is determined.
4. method according to claim 1, it is characterised in that described in default multiple face characteristic informations, it is determined that The target face characteristic information that the human face region image meets, including:
Obtain at least one target face area image selected in human face region image of the user in the target image;
In default multiple face characteristic informations, the target face characteristic that each target face area image meets is determined respectively Information;
Face characteristic information and the corresponding relation of people's object plane phase information that the basis is prestored, determine that the target face is special Reference ceases corresponding target person object plane phase information, including:
According to the face characteristic information and the corresponding relation of people's object plane phase information that prestore, and described each target face area The target face characteristic information that area image meets, determines that each corresponding target person object plane of target face area image is believed respectively Breath.
5. method according to claim 4, it is characterised in that the number of the target face area image of selection is multiple;
Methods described also includes:
In default multiple face characteristic matching conditions, the target met between the multiple target face area image is determined Face characteristic matching condition;
According to the face characteristic matching condition and the corresponding relation of face associated information that prestore, determine that the target face is special Levy the corresponding target face associated information of matching condition;
The target face associated information is shown.
6. The method according to claim 1, characterized in that, in the prestored correspondence between face characteristic information and person face-reading information, each item of person face-reading information corresponds to a plurality of items of face characteristic information;
determining, according to the prestored correspondence between face characteristic information and person face-reading information, the target person face-reading information corresponding to the target face characteristic information comprises:
for each item of person face-reading information in the correspondence, determining the number of matches between the face characteristic information corresponding to that person face-reading information and the target face characteristic information;
determining the person face-reading information with the largest number of matches as the target person face-reading information corresponding to the target face characteristic information.
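Claim 6 selects, among all candidate face-reading entries, the one whose associated face characteristics overlap most with the detected target characteristics. A minimal sketch, assuming each face-reading entry stores a set of feature labels:

```python
def pick_best_reading(target_features, reading_to_features):
    """Return the face-reading key whose feature set best matches the target.

    `reading_to_features` is an assumed dict mapping each face-reading entry
    to the set of face characteristic labels it corresponds to.
    """
    best_reading, best_matches = None, -1
    for reading, features in reading_to_features.items():
        matches = len(features & target_features)  # number of matching characteristics
        if matches > best_matches:
            best_reading, best_matches = reading, matches
    return best_reading

# Example with hypothetical data:
# pick_best_reading({"full_forehead", "thin_lips"},
#                   {"fortunate": {"full_forehead", "round_eyes"},
#                    "eloquent": {"thin_lips", "full_forehead"}})  # -> "eloquent"
```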
7. A device for obtaining person face-reading information, characterized in that the device comprises:
a first determining module, configured to obtain a target image and determine a face region image in the target image;
a second determining module, configured to determine, among a plurality of preset items of face characteristic information, the target face characteristic information that the face region image satisfies;
a third determining module, configured to determine, according to a prestored correspondence between face characteristic information and person face-reading information, the target person face-reading information corresponding to the target face characteristic information;
a first display module, configured to display the target person face-reading information.
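The device claims group the method steps into modules. One plausible, non-normative way to mirror that grouping in software is a small class, sketched below using the same hypothetical helpers (detect_face_region, extract_features, PRESET_FEATURES, FEATURE_TO_READING) introduced in the earlier sketches.

```python
class FaceReadingDevice:
    """Illustrative grouping of the claimed modules; not a prescribed design."""

    def first_determining_module(self, target_image):
        # Obtain the target image and determine the face region image.
        return detect_face_region(target_image)

    def second_determining_module(self, face_region):
        # Determine which preset face characteristic information the region satisfies.
        return extract_features(face_region) & PRESET_FEATURES

    def third_determining_module(self, target_features):
        # Look up the corresponding person face-reading information.
        return [FEATURE_TO_READING[f] for f in target_features
                if f in FEATURE_TO_READING]

    def first_display_module(self, readings):
        # Display the target person face-reading information.
        print("; ".join(readings) or "no reading found")
```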
8. The device according to claim 7, characterized in that the first determining module comprises:
a display unit, configured to display the target image and display a region selection box in the target image;
a determining unit, configured to determine, when a region selection instruction input by a user is received, the region image of the target image located within the region selection box as the face region image.
9. The device according to claim 7, characterized in that the first determining module is configured to:
perform face recognition on the target image to determine the face region image in the target image.
10. The device according to claim 7, characterized in that the second determining module comprises:
an acquiring unit, configured to obtain at least one target face region image selected by the user within the face region image of the target image;
a determining unit, configured to determine, among the plurality of preset items of face characteristic information, the target face characteristic information that each target face region image satisfies;
and the third determining module is configured to:
determine, according to the prestored correspondence between face characteristic information and person face-reading information and the target face characteristic information that each target face region image satisfies, the target person face-reading information corresponding to each target face region image.
11. The device according to claim 10, characterized in that a plurality of target face region images are selected;
the device further comprises:
a fourth determining module, configured to determine, among a plurality of preset face characteristic matching conditions, the target face characteristic matching condition satisfied between the plurality of target face region images;
a fifth determining module, configured to determine, according to a prestored correspondence between face characteristic matching conditions and face association information, the target face association information corresponding to the target face characteristic matching condition;
a second display module, configured to display the target face association information.
12. The device according to claim 7, characterized in that, in the prestored correspondence between face characteristic information and person face-reading information, each item of person face-reading information corresponds to a plurality of items of face characteristic information;
the third determining module comprises:
a first determining unit, configured to determine, for each item of person face-reading information in the correspondence, the number of matches between the face characteristic information corresponding to that person face-reading information and the target face characteristic information;
a second determining unit, configured to determine the person face-reading information with the largest number of matches as the target person face-reading information corresponding to the target face characteristic information.
13. A device for obtaining person face-reading information, characterized in that it comprises:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to:
obtain a target image and determine a face region image in the target image;
determine, among a plurality of preset items of face characteristic information, the target face characteristic information that the face region image satisfies;
determine, according to a prestored correspondence between face characteristic information and person face-reading information, the target person face-reading information corresponding to the target face characteristic information;
display the target person face-reading information.
CN201611042298.8A 2016-11-11 2016-11-11 A kind of method and apparatus obtaining personage face phase information Active CN106778500B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611042298.8A CN106778500B (en) 2016-11-11 2016-11-11 A kind of method and apparatus obtaining personage face phase information

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201611042298.8A CN106778500B (en) 2016-11-11 2016-11-11 A kind of method and apparatus obtaining personage face phase information

Publications (2)

Publication Number Publication Date
CN106778500A true CN106778500A (en) 2017-05-31
CN106778500B CN106778500B (en) 2019-09-17

Family

ID=58974615

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611042298.8A Active CN106778500B (en) 2016-11-11 2016-11-11 A kind of method and apparatus obtaining personage face phase information

Country Status (1)

Country Link
CN (1) CN106778500B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101305913A (en) * 2008-07-11 2008-11-19 华南理工大学 Face beauty assessment method based on video
CN101980242B (en) * 2010-09-30 2014-04-09 徐勇 Human face discrimination method and system and public safety system
CN105389548A (en) * 2015-10-23 2016-03-09 南京邮电大学 Love and marriage evaluation system and method based on face recognition
CN105205479A (en) * 2015-10-28 2015-12-30 小米科技有限责任公司 Human face value evaluation method, device and terminal device
CN105718869A (en) * 2016-01-15 2016-06-29 网易(杭州)网络有限公司 Method and device for estimating face score in picture

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018233254A1 (en) * 2017-06-21 2018-12-27 格力电器(武汉)有限公司 Terminal-based object recognition method, device and electronic equipment
US11256919B2 (en) 2017-06-21 2022-02-22 Gree Electric Appliances (Wuhan) Co., Ltd Method and device for terminal-based object recognition, electronic device

Also Published As

Publication number Publication date
CN106778500B (en) 2019-09-17

Similar Documents

Publication Publication Date Title
CN105809704B (en) Identify the method and device of image definition
CN105635567A (en) Shooting method and device
CN104700353B (en) Image filters generation method and device
CN106572299A (en) Camera switching-on method and device
CN106651955A (en) Method and device for positioning object in picture
CN106355573A (en) Target object positioning method and device in pictures
CN106231419A (en) Operation performs method and device
CN105302315A (en) Image processing method and device
CN107122679A (en) Image processing method and device
CN105631403A (en) Method and device for human face recognition
CN106373156A (en) Method and apparatus for determining spatial parameter by image and terminal device
CN106250894A (en) Card image recognition methods and device
CN106778531A (en) Face detection method and device
CN106778773A (en) The localization method and device of object in picture
CN106250921A (en) Image processing method and device
CN107563994A (en) The conspicuousness detection method and device of image
CN105208284B (en) Shoot based reminding method and device
CN104933419A (en) Method and device for obtaining iris images and iris identification equipment
CN106559631A (en) Method for processing video frequency and device
CN107168620A (en) Method, device, terminal device and the computer-readable recording medium of control terminal
WO2021047069A1 (en) Face recognition method and electronic terminal device
CN110378312A (en) Image processing method and device, electronic equipment and storage medium
CN107091704A (en) Pressure detection method and device
CN107343087A (en) Smart machine control method and device
CN105678266A (en) Method and device for combining photo albums of human faces

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant