CN106295520B - Fat or thin detection method and mobile terminal - Google Patents


Info

Publication number
CN106295520B
CN106295520B (application CN201610609563.XA / CN201610609563A)
Authority
CN
China
Prior art keywords
datum mark
coordinate
facial image
fat
thin
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201610609563.XA
Other languages
Chinese (zh)
Other versions
CN106295520A (en)
Inventor
杨炬光
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xi'an Weiwo Software Technology Co.,Ltd.
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN201610609563.XA priority Critical patent/CN106295520B/en
Publication of CN106295520A publication Critical patent/CN106295520A/en
Application granted granted Critical
Publication of CN106295520B publication Critical patent/CN106295520B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

An embodiment of the invention provides a fat or thin detection method and a mobile terminal. The method is applied to a mobile terminal with a camera and specifically includes: acquiring a target facial image captured by the camera; extracting face characteristic information from the target facial image; and determining a fat or thin variation index based on the face characteristic information. The fat or thin detection method and mobile terminal provided by embodiments of the invention can satisfy a user's need for fat or thin detection from a visual perspective.

Description

Fat or thin detection method and mobile terminal
Technical field
The present invention relates to the field of mobile communication technology, and in particular to a fat or thin detection method and a mobile terminal.
Background technique
At present, with the development of mobile communication technology, the functions of mobile terminals such as smartphones and tablet computers are becoming ever more powerful. The growing number of APPs (applications) installed on mobile terminals not only brings more convenience to users, but also satisfies user needs such as instant messaging and entertainment.
Body shape is a widely discussed topic. BMI (Body Mass Index) is a common standard for measuring how fat or thin a human body is and whether it is healthy. When comparing and analyzing the health effects of body weight across people of different heights, the BMI value is a neutral and reliable index. At present, a mobile terminal can provide the latest BMI value to a user through an APP, so that the user can judge his or her own degree of fatness or thinness from the BMI value output by the mobile terminal.
However, the above BMI value is only one neutral and reliable health indicator. In practical applications there are other criteria for judging fatness or thinness; for example, some standards judge a person's fatness or thinness visually. Existing APPs cannot satisfy such a standard or the corresponding user demand.
Summary of the invention
Embodiments of the present invention provide a fat or thin detection method to solve the problem that existing fat or thin detection methods cannot satisfy a visual standard and its corresponding user demand.
In a first aspect, an embodiment of the present invention provides a fat or thin detection method, applied to a mobile terminal with a camera, the method comprising:
acquiring the target facial image captured by the camera;
extracting the face characteristic information from the target facial image; and
determining a fat or thin variation index based on the face characteristic information.
In a second aspect, an embodiment of the present invention further provides a mobile terminal including a camera, the mobile terminal further comprising:
an image acquisition module, configured to acquire the target facial image captured by the camera;
a feature extraction module, configured to extract the face characteristic information from the target facial image output by the image acquisition module; and
a fat or thin detection module, configured to determine a fat or thin variation index based on the face characteristic information output by the feature extraction module.
In this way, the fat or thin detection method and mobile terminal of the embodiments of the present invention perform fat or thin detection of the user according to the face characteristic information of the target facial image. Since fat or thin detection can be performed from a visual perspective, the user's need for visual fat or thin detection can be satisfied.
Furthermore, an embodiment of the present invention determines the fat or thin variation index based on the curvature of a face contour curve. Since the fat or thin variation index is obtained from the curvature of the face contour curve, and that curvature reflects the user's face shape, the fatness or thinness of the user's face can be accurately detected through the face shape. Generally, the larger the fat or thin variation index, the thinner the user's face; conversely, the smaller the index, the fatter the face. The fat or thin variation index can be compared between users, and, considering the strong direct association between facial lines and body shape, it can also serve as an effective reference for the fatness or thinness of the figure.
In addition, an embodiment of the present invention obtains the user's fat or thin variation index from the facial contour change information between a target facial contour and a standard facial contour. Since the fat or thin variation index can be obtained from the comparison of the target facial contour with the standard facial contour, it reflects the change of the target facial contour relative to the standard facial contour and can objectively reflect the degree of fat or thin change of the user's face.
Detailed description of the invention
In order to explain the technical solutions of the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention; for those of ordinary skill in the art, other drawings can be obtained from these drawings without any creative effort.
Fig. 1 is a flowchart of embodiment one of a fat or thin detection method of the present invention;
Fig. 2 is a flowchart of embodiment two of a fat or thin detection method of the present invention;
Fig. 3 is a schematic diagram of a target facial contour and a face contour curve of the present invention;
Fig. 4 is a flowchart of embodiment three of a fat or thin detection method of the present invention;
Fig. 5 is a schematic diagram of a standard facial contour and its reference positions of the present invention;
Fig. 6 is a schematic diagram of the comparison process between a target facial contour and a standard facial contour of the present invention;
Fig. 7 is a structural block diagram of a mobile terminal embodiment of the present invention;
Fig. 8 is a structural block diagram of another mobile terminal embodiment of the present invention;
Fig. 9 is a structural block diagram of another mobile terminal embodiment of the present invention;
Fig. 10 is a structural block diagram of a feature extraction module 703 of the present invention;
Fig. 11 is a structural block diagram of another mobile terminal embodiment of the present invention;
Fig. 12 is a structural block diagram of a fat or thin detection module 704 of the present invention;
Fig. 13 is a structural block diagram of another feature extraction module 703 of the present invention;
Fig. 14 is a structural schematic diagram of a mobile terminal 1500 of the present invention.
Specific embodiment
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings in the embodiments of the present invention. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of the present invention.
Method embodiment one
Referring to Fig. 1, a flowchart of embodiment one of a fat or thin detection method of the present invention is shown. The method is applied to a mobile terminal with a camera and may specifically include the following steps:
Step 101: acquire the target facial image captured by the camera.
The fat or thin detection method provided by the embodiments of the present invention can be applied in applications such as image-related APPs and entertainment-related APPs, and can also be applied in a client application environment corresponding to a server, where the client and the server are located in a wired or wireless network through which the client and the server exchange data.
Specifically, the client may run on a mobile terminal with a camera. The mobile terminal may specifically include, but is not limited to: a smartphone, a tablet computer, an e-book reader, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a laptop portable computer, a vehicle-mounted computer, a desktop computer, a set-top box, a smart TV, a wearable device, and the like.
Addressing the technical problem that the BMI standard used by existing APPs cannot satisfy a visual standard and its corresponding user demand, and considering that facial lines are strongly and directly associated with body shape and that for many weight-losing users the main goal is facial appearance, the embodiments of the present invention creatively propose performing fat or thin detection of the user on the basis of a target facial image. Since fat or thin detection can be performed from a visual perspective, the user's need for visual fat or thin detection can be satisfied.
In an optional embodiment of the present invention, the mobile terminal may be provided with a camera (built-in or external), and the camera may be used to capture the target facial image. In addition, to improve detection accuracy, a shooting prompt related to the shooting angle may be issued while the target facial image is being captured in step 101. The shooting angle may specifically include shooting height, shooting direction and shooting distance, where the shooting direction is divided into a frontal angle, a side angle, an oblique angle, a back angle and the like. In an optional embodiment of the present invention, the shooting angle may specifically require that the user's face be parallel to the mobile terminal. In practice, shooting with the head lowered usually makes the chin in the target facial image appear pointed, while shooting with the head raised usually makes the face appear wider; the shooting angle in which the user's face is parallel to the mobile terminal therefore preserves the authenticity of the target facial image and improves detection accuracy. In particular, this parallel shooting angle can provide consistent detection results across different users. Of course, the parallel shooting angle is only an optional embodiment; in fact, when the same user applies the fat or thin detection method of the embodiments of the present invention repeatedly, using the same shooting angle each time can likewise yield consistent detection results. For example, if the user habitually shoots with the head lowered each time, each shooting angle may be 10 degrees.
In another optional embodiment of the present invention, the client may provide a corresponding trigger condition, and the execution of step 101 may be triggered when the trigger condition is met. The trigger condition may be: receiving a preset trigger instruction, where the object of the trigger instruction may be a preset key, a preset interface, or the like. It can be understood that the embodiments of the present invention place no restriction on the specific trigger condition.
Step 102: extract the face characteristic information from the target facial image.
Optionally, the face characteristic information may specifically include: all or part of the target facial contour, and the like.
In practical applications, an image processing algorithm may be applied to the target facial image to obtain the target facial contour. Optionally, a face detection algorithm may be used. Face detection refers to the process of determining whether a face exists in an input image (here, the target facial image of the embodiment of the present invention) and, if so, determining the position, size and pose of the face. As a key technology in face information processing, face detection has in recent years become a successfully applied topic in pattern recognition and computer vision, so an existing face detection method may be used here to locate the face in the target facial image. For example, the face detection method may specifically include: an Adaboost (adaptive boosting) detection method based on Haar features, which locates the face region; the outer boundary of that region is the target facial contour. Haar features, also called Haar-like features, can specifically be divided into edge features, linear features, center features and diagonal features, and reflect the gray-level changes of an image. It can be understood that the embodiments of the present invention place no restriction on the specific face detection method.
Step 103: determine a fat or thin variation index based on the face characteristic information.
In practical applications, the client may output the fat or thin variation index to the user by voice or via the UI (User Interface). It can be understood that the embodiments of the present invention place no restriction on the specific output manner of the fat or thin variation index.
In the fat or thin detection method of this embodiment of the present invention, fat or thin detection of the user is performed according to the face characteristic information of the target facial image. Since the face characteristic information reflects the degree of fatness or thinness, or the fat or thin change, of the user's face, fat or thin detection can be performed from a visual perspective to obtain a corresponding fat or thin variation index; that is, the user's need for visual fat or thin detection can be satisfied.
Method embodiment two
Referring to Fig. 2, a flowchart of embodiment two of a fat or thin detection method of the present invention is shown. The method is applied to a mobile terminal with a camera and may specifically include the following steps:
Step 201: acquire the target facial image captured by the camera.
In practical applications, when a trigger instruction of the user is received, the camera of the mobile terminal may be used to capture the target facial image.
Step 202: extract, from the target facial image, the face contour curve on the side of the face from the chin to the earlobe.
In practical applications, an image processing algorithm such as the Haar-feature-based Adaboost detection method or an edge detection method may be used to locate the target facial contour. Optionally, the target facial contour may be a closed curve corresponding to the user's face shape, such as an elliptic curve. It can be understood that the embodiments of the present invention place no restriction on the specific target facial contour.
Referring to Fig. 3, a schematic diagram of a target facial contour and a face contour curve of the present invention is shown, where a curve segment located on the side of the face, from the chin to the earlobe, may be selected from the closed target facial contour as the face contour curve y.
Step 203: perform a second-order differential operation on the face contour curve to obtain the curvature of the face contour curve.
A determination process for the curvature of the face contour curve y is provided here: a second-order differential operation may be performed on the curve segment corresponding to the face contour curve y, and the result may be a constant, which is the rate of change of the tangent slope of the face contour curve y, i.e., the curvature of the face contour curve y. Generally, the larger the curvature, the "rounder" the face shape and the smaller the fat or thin variation index; conversely, the smaller the curvature, the more "pointed" the face shape and the larger the fat or thin variation index.
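The second-order differential operation above can be sketched as follows: assuming the chin-to-earlobe segment is sampled as (x, y) points and approximated by a parabola, the parabola's constant second derivative plays the role of the curvature described here. The function name and the sampling scheme are illustrative assumptions, not taken from the patent.

```python
import numpy as np

def contour_curvature(xs, ys):
    """Fit y = a*x^2 + b*x + c to sampled contour points and return
    the constant second derivative y'' = 2a, used as the curvature
    measure of the chin-to-earlobe curve segment."""
    a, b, c = np.polyfit(xs, ys, 2)   # least-squares parabola fit
    return 2.0 * a                    # second-order differential: d2y/dx2

# A "rounder" cheek curve bends more than a "pointed" one.
xs = np.linspace(-1.0, 1.0, 50)
round_face = contour_curvature(xs, 0.8 * xs**2)    # ≈ 1.6
pointed_face = contour_curvature(xs, 0.3 * xs**2)  # ≈ 0.6
```

In practice the sampled points would come from the extracted contour rather than an analytic curve; the parabola fit simply realizes the patent's observation that the result "may be a constant".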
Step 204: determine the fat or thin variation index based on the curvature of the face contour curve.
In an optional embodiment of the present invention, the reciprocal of the curvature of the face contour curve y may be used as the fat or thin variation index; alternatively, the reciprocal of the curvature may be normalized into a preset range (such as [0, 1]) and the normalized reciprocal used as the fat or thin variation index; alternatively, grades may be preset for the fat or thin variation index (for example grades 1 to 10, where grade 1 corresponds to a fat or thin variation index of 0 to 0.1, grade 2 corresponds to 0.1 to 0.2, and so on).
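The three alternatives in step 204 (raw reciprocal, normalized reciprocal, preset grades) can be sketched as follows. The normalization bounds and the ten-grade scale are illustrative assumptions consistent with the example ranges given above.

```python
def fat_thin_index(curvature, min_inv=0.0, max_inv=10.0):
    """Reciprocal of curvature, normalized into [0, 1].
    min_inv/max_inv are assumed bounds for the reciprocal."""
    inv = 1.0 / curvature
    inv = min(max(inv, min_inv), max_inv)          # clamp to assumed range
    return (inv - min_inv) / (max_inv - min_inv)   # normalize to [0, 1]

def index_grade(index):
    """Map a [0, 1] index onto preset grades 1..10
    (grade 1 covers 0-0.1, grade 2 covers 0.1-0.2, ...)."""
    return min(int(index * 10) + 1, 10)

idx = fat_thin_index(2.0)    # reciprocal 0.5 -> normalized 0.05
grade = index_grade(idx)     # 0.05 falls in grade 1
```

A larger curvature (rounder face) yields a smaller index here, matching the relationship stated in step 203.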
In another optional embodiment of the present invention, the fat or thin variation index may be determined according to the curvatures of multiple frames of target facial images within a preset time period. For example, the curvatures of the multiple frames within the preset time period may be compared to obtain a corresponding fat or thin variation index; alternatively, a corresponding trend chart may be drawn from those curvatures. The preset time period may be a period ending at the current moment, whose length can be determined by those skilled in the art according to the application demand; for example, the length may be one week, two weeks, or one month. It can be understood that the embodiments of the present invention place no restriction on the specific determination method and implementation of the fat or thin variation index.
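The multi-frame comparison over a preset time period could be realized as a simple trend check, sketched below. The sign-of-difference rule and its tolerance are illustrative choices, not prescribed by the patent.

```python
def curvature_trend(curvatures, tol=0.05):
    """Given per-frame curvatures ordered oldest to newest, report the
    overall trend. A larger curvature means a rounder face, so rising
    curvature suggests gaining weight and falling curvature losing it."""
    if len(curvatures) < 2:
        return "insufficient data"
    delta = curvatures[-1] - curvatures[0]
    if delta > tol:
        return "gaining"
    if delta < -tol:
        return "losing"
    return "stable"

week = [1.40, 1.35, 1.28, 1.21]    # curvature falling over one week
trend = curvature_trend(week)      # -> "losing"
```

A trend chart, as mentioned above, would simply plot the same per-frame curvature list against time.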
The fat or thin detection method of this embodiment of the present invention determines the fat or thin variation index based on the curvature of the face contour curve. Since the index is obtained from the curvature of the face contour curve, which reflects the user's face shape, the fatness or thinness of the user's face can be accurately detected through the face shape. Generally, the larger the fat or thin variation index, the thinner the user's face; the smaller the index, the fatter the face. The fat or thin variation index can be compared between users, and, considering the strong direct association between facial lines and body shape, it can also serve as an effective reference for the fatness or thinness of the figure.
Method embodiment three
Referring to Fig. 4, a flowchart of embodiment three of a fat or thin detection method of the present invention is shown. The method is applied to a mobile terminal with a camera and may specifically include the following steps:
Step 401: acquire one frame of facial image captured by the camera, and record it as the reference facial image.
In practical applications, when a trigger instruction of the user is received, the camera of the mobile terminal may be used to capture the facial image.
In practical applications, the reference facial image may be obtained in advance, before the target facial image is acquired. The reference facial image may be an image captured by the camera several days or months earlier, or an image obtained by photographing an old photo of the user, such as a photo taken N years earlier (N being greater than or equal to 1). The present invention places no restriction on the specific reference facial image and its acquisition manner.
Step 402: perform facial feature localization on the reference facial image, extract the coordinates of the two eyes and the nose respectively, and record them as a first reference point, a second reference point and a third reference point.
Since the positions of the first, second and third reference points, corresponding to the centers of the two eyes and the nose, are substantially fixed and do not substantially change as the user gains or loses weight, they can serve as the positional basis for the reference facial image and the target facial image.
Step 403: extract the facial contour in the reference facial image based on the first, second and third reference points, and record it as the standard facial contour.
In practical applications, an image processing algorithm such as the Haar-feature-based Adaboost detection method or an edge detection method may be used to locate the standard facial contour. Optionally, the standard facial contour may be a closed curve corresponding to the user's face shape, such as an elliptic curve. It can be understood that the embodiments of the present invention place no restriction on the specific standard facial contour.
Referring to Fig. 5, a schematic diagram of a standard facial contour and its reference positions of the present invention is shown. It can be seen that the distances between the first, second and third reference points, corresponding to the eye centers and the nose, are substantially fixed and do not substantially change as the user gains or loses weight.
Step 404: acquire the target facial image captured by the camera.
In practical applications, when a trigger instruction of the user is received, the camera of the mobile terminal may be used to capture the target facial image.
In order to guarantee the objectivity of the comparison between the target facial contour of the target facial image and the standard facial contour of the reference facial image, and thus the accuracy of the fat or thin variation index, in an optional embodiment of the present invention, when the camera captures the facial image, the first, second and third reference points may be displayed in the shooting preview interface, and prompt information may be generated to prompt the photographer to align the two eyes and the nose with the first, second and third reference points respectively. Such prompt information ensures that the target facial image is center-aligned with the reference facial image, and can therefore guarantee the accuracy of the fat or thin variation index. In practical applications, a preview image of the target facial image may be displayed, the preview image may be compared with the three reference positions of the reference facial image, and a corresponding prompt such as "move left/right" or "move up/down" may be issued according to the comparison result.
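The preview-alignment prompt could be sketched as follows, comparing the detected eye/nose coordinates against the reference points shown in the preview. The pixel tolerance, coordinate convention (y growing downward) and message strings are assumptions for illustration.

```python
def alignment_prompt(coords, datum_points, tol=5):
    """Compare the three detected points (two eyes, nose) with the three
    reference points shown in the preview and return a movement hint,
    or None when the face is within tolerance."""
    # Average offset of detected points from their reference points.
    dx = sum(c[0] - d[0] for c, d in zip(coords, datum_points)) / 3.0
    dy = sum(c[1] - d[1] for c, d in zip(coords, datum_points)) / 3.0
    hints = []
    if dx > tol:
        hints.append("move left")
    elif dx < -tol:
        hints.append("move right")
    if dy > tol:        # image y axis grows downward
        hints.append("move up")
    elif dy < -tol:
        hints.append("move down")
    return ", ".join(hints) if hints else None

datum = [(100, 120), (160, 120), (130, 160)]     # eyes and nose references
shifted = [(112, 120), (172, 120), (142, 160)]   # face 12 px too far right
hint = alignment_prompt(shifted, datum)          # -> "move left"
```

A real client would run this per preview frame and hide the prompt once `None` is returned.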
In an optional embodiment of the present invention, after the target facial image captured by the camera is acquired, the first, second and third coordinates of the target facial image may be compared with the first, second and third reference points respectively, and whether the target facial image is center-aligned with the reference facial image may be judged according to the comparison result.
Step 405: perform facial feature localization on the target facial image, and extract the first, second and third coordinates of the two eyes and the nose respectively.
The first, second and third coordinates correspond respectively to the positions of the two eyes and the nose of the target facial image.
Step 406: compare the first, second and third coordinates with the first, second and third reference points respectively.
The first, second and third reference points correspond respectively to the positions of the two eyes and the nose of the reference facial image. For facial images of the same user, the three reference positions of the eye centers and the nose are substantially fixed and do not substantially change as the user gains or loses weight, so the above comparison can guarantee the center alignment of the target facial image with the reference facial image.
Step 407: if the first, second and third coordinates are aligned with the first, second and third reference points, extract the facial contour in the target facial image and record it as the target facial contour.
In an optional embodiment of the present invention, after the step of comparing the first, second and third coordinates with the first, second and third reference points respectively, the method of the embodiment of the present invention may further include: if at least one of the first, second and third coordinates is misaligned with the corresponding first, second or third reference point, performing image processing on the target facial image to generate an intermediate facial image; and extracting the facial contour in the intermediate facial image and recording it as the target facial contour, where the first, second and third coordinates in the intermediate facial image are aligned with the first, second and third reference points. Optionally, the image processing may be performed on the target facial image according to the first, second and third reference points; for example, the image processing may include operations such as scaling and rotation, so that the first, second and third coordinates in the processed intermediate facial image are finally aligned with the first, second and third reference points. It can be understood that the embodiments of the present invention place no restriction on the specific preprocessing manner.
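The scaling-and-rotation step that produces the intermediate facial image can be sketched as a similarity transform estimated from the two eye points. This is one standard way to realize such alignment, offered under that assumption rather than as the patent's exact procedure.

```python
def eye_similarity_transform(eyes_now, eyes_ref):
    """Return a function mapping image points so that the current eye
    pair (left, right) lands on the reference eye pair, via a uniform
    scale + rotation + translation (a similarity transform)."""
    (ax, ay), (bx, by) = eyes_now
    (cx, cy), (dx, dy) = eyes_ref
    # Complex-number trick: one complex multiply encodes scale and rotation.
    src = complex(bx - ax, by - ay)
    dst = complex(dx - cx, dy - cy)
    m = dst / src  # scale * e^{i*angle}

    def apply(p):
        z = (complex(p[0], p[1]) - complex(ax, ay)) * m + complex(cx, cy)
        return (z.real, z.imag)

    return apply

warp = eye_similarity_transform(eyes_now=[(0, 0), (2, 0)],
                                eyes_ref=[(10, 10), (14, 10)])  # 2x scale
nose = warp((1, 1))   # point below the eye midpoint maps to (12, 12)
```

Applying `warp` to every contour point (or resampling the image with the same transform) yields the aligned intermediate facial image described above; the nose coordinate can then be checked against the third reference point.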
Step 408: compare the target facial contour with the standard facial contour to obtain facial contour change information.
In practical applications, the facial contour change information may specifically include: contour area change information, contour boundary change information, and the like. The contour area change information may be measured by the comparison result of the areas of the target facial contour and the standard facial contour. Since a fat or thin change of the user is generally embodied in the target facial contour, which expands outward when the user gains weight and contracts inward when the user loses weight, the contour area change information of the two can objectively reflect the degree of fat or thin change of the user's face. Generally, if the area of the target facial contour is greater than that of the standard facial contour, the user has gained weight; if it is smaller, the user has lost weight; and if the two areas are close, the user has undergone substantially no fat or thin change. In practical applications, various algorithms may be used to compute the areas of the closed figures corresponding to the two contours; the embodiments of the present invention place no restriction on the specific manner of obtaining the areas.
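The contour-area comparison in step 408 can be sketched with the shoelace formula, treating each closed contour as a polygon of ordered vertices. The relative tolerance for "substantially no change" is an assumed parameter.

```python
def polygon_area(points):
    """Shoelace formula: area of the closed figure bounded by a contour
    given as an ordered list of (x, y) vertices."""
    n = len(points)
    s = 0.0
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]   # wrap around to close the polygon
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

def contour_change(target, standard, rel_tol=0.02):
    """Compare the target facial contour's area with the standard one's."""
    a_t, a_s = polygon_area(target), polygon_area(standard)
    if a_t > a_s * (1 + rel_tol):
        return "gained weight"
    if a_t < a_s * (1 - rel_tol):
        return "lost weight"
    return "no change"

standard = [(0, 0), (4, 0), (4, 4), (0, 4)]   # area 16
target = [(0, 0), (5, 0), (5, 5), (0, 5)]     # area 25, contour expanded
verdict = contour_change(target, standard)    # -> "gained weight"
```

Real contours would have many more vertices, but the shoelace sum handles any simple closed polygon.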
Referring to Fig. 6, a schematic diagram of the boundary comparison process between a target facial contour and a standard facial contour of the present invention is shown. When the boundary points of target facial contour 602 lie inside the boundary points of standard facial contour 601, the target facial contour can be considered to have contracted inward, i.e., the user has lost weight; when the boundary points of target facial contour 603 lie outside the boundary points of standard facial contour 601, the target facial contour has expanded outward, i.e., the user has gained weight.
Step 409: determine the fat or thin variation index based on the facial contour change information.
In practical applications, the fat or thin variation index may be determined based on the result of comparing the area of the target facial contour with that of the standard facial contour, or on the result of comparing the boundaries of the two contours. Specifically, when the area of the target facial contour is greater than that of the standard facial contour, or the boundary of the target facial contour extends beyond that of the standard facial contour, the corresponding fat or thin variation index may be "gained weight", and corresponding interactive information may be output, such as "You have gained weight, start losing weight" or "You have gained weight, exercise more". Similarly, when the area of the target facial contour is smaller than that of the standard facial contour, or its boundary lies within that of the standard facial contour, the corresponding fat or thin variation index may be "lost weight", and interactive information such as "Weight loss successful, keep it up" may be output. In addition, when the areas of the two contours are close, or their boundaries are close, the corresponding fat or thin variation index may be "no change", and interactive information such as "Figure well maintained, keep it up" may be output. It can be understood that the embodiments of the present invention place no restriction on the specific fat or thin variation index and the corresponding interactive information.
In the fat or thin detection method of the embodiment of the present invention, the fat or thin variability index of the user is obtained from the contour change information between the target face contour and the standard face contour. Since the fat or thin variability index can be obtained from the comparison result of the two contours, it reflects the change of the target face contour relative to the standard face contour, and can thus objectively reflect the degree of fat or thin change of the user's face shape. Moreover, considering the strong direct association between facial lines and body fatness, the above fat or thin variability index can also serve as an effective reference for fat or thin changes of the figure.
It should be noted that, for simplicity of description, the method embodiments are described as a series of action combinations. However, those skilled in the art should understand that the embodiments of the present invention are not limited by the described sequence of actions, because according to the embodiments of the present invention, some steps may be performed in other orders or simultaneously. Secondly, those skilled in the art should also understand that the embodiments described in the specification are all preferred embodiments, and the actions involved are not necessarily required by the embodiments of the present invention.
Terminal first embodiment
Referring to Fig. 7, a structural block diagram of a mobile terminal embodiment of the present invention is shown, which may specifically include the following modules: a camera 701, an image collection module 702, a feature extraction module 703 and a fat or thin detection module 704; wherein,
the image collection module 702 is configured to obtain the target facial image acquired by the camera 701;
the feature extraction module 703 is configured to extract the face characteristic information in the target facial image output by the image collection module 702;
the fat or thin detection module 704 is configured to determine the fat or thin variability index based on the face characteristic information output by the feature extraction module 703.
Optionally, referring to Fig. 8, on the basis of Fig. 7, the mobile terminal may further include:
a reference obtaining module 705, configured to obtain one frame of facial image acquired by the camera before the feature extraction module 703 extracts the face characteristic information in the target facial image, and record it as a reference facial image;
a face detection module 706, configured to perform facial feature detection on the reference facial image output by the reference obtaining module, extract the coordinates of the two eyes and the nose respectively, and record them respectively as a first datum mark, a second datum mark and a third datum mark;
a profile extraction module 707, configured to extract the facial contour in the reference facial image based on the first datum mark, the second datum mark and the third datum mark output by the face detection module, and record it as the standard face contour.
Optionally, referring to Fig. 9, on the basis of Fig. 8, the mobile terminal may further include:
a benchmark display module 708, configured to display the first datum mark, the second datum mark and the third datum mark in the photographing preview interface while the camera is acquiring a facial image, before the image collection module obtains the target facial image acquired by the camera;
a prompt module 709, configured to generate prompt information, the prompt information being used to prompt the photographer to align the two eyes and the nose with the first datum mark, the second datum mark and the third datum mark respectively.
Optionally, the structural block diagram of the feature extraction module 703 is shown in Fig. 10; the feature extraction module 703 may specifically include the following sub-modules:
a feature detection sub-module 731, configured to perform facial feature detection on the target facial image, and extract the first coordinate, the second coordinate and the third coordinate of the two eyes and the nose respectively;
a benchmark comparison sub-module 732, configured to compare the first coordinate, the second coordinate and the third coordinate output by the feature detection sub-module with the first datum mark, the second datum mark and the third datum mark respectively;
a contour extraction sub-module 733, configured to extract the facial contour in the target facial image and record it as the target face contour when the benchmark comparison sub-module determines that the first coordinate, the second coordinate and the third coordinate are aligned with the first datum mark, the second datum mark and the third datum mark.
Optionally, referring to Fig. 11, on the basis of Fig. 9 or Fig. 10, the mobile terminal may further include:
an image processing module 710, configured to perform image processing on the target facial image to generate an intermediate facial image when the benchmark comparison sub-module 732 determines that at least one of the first coordinate, the second coordinate and the third coordinate is misaligned with the first datum mark, the second datum mark and the third datum mark;
a target contour extraction module 711, configured to extract the facial contour in the intermediate facial image output by the image processing module, and record it as the target face contour;
wherein the first coordinate, the second coordinate and the third coordinate in the intermediate facial image are aligned with the first datum mark, the second datum mark and the third datum mark.
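A minimal sketch of the alignment test performed before choosing between direct contour extraction and the image-processing branch; the patent only speaks of the coordinates being "aligned" with the datum marks, so the pixel tolerance used here is an assumption:

```python
def coordinates_aligned(coords, datum_marks, tol=5):
    """Return True when every detected feature coordinate (two eyes and
    the nose) lies within `tol` pixels of its datum mark.

    `coords` and `datum_marks` are sequences of three (x, y) pairs;
    the 5-pixel tolerance is hypothetical.
    """
    return all(
        abs(cx - dx) <= tol and abs(cy - dy) <= tol
        for (cx, cy), (dx, dy) in zip(coords, datum_marks)
    )
```

When this check fails for at least one point, the image processing module would generate the intermediate facial image; when it passes, the contour can be extracted from the target facial image directly.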
Optionally, the structural block diagram of fat or thin detection module 704 is as shown in figure 12, and fat or thin detection module 704 specifically can wrap Include following module:
Profile compares submodule 741 and obtains for the target facial contour and the standard faces profile to be compared To facial contour change information;
Index determines submodule 742, for comparing the facial contour change information of submodule output based on the profile, really The fixed fat or thin variability index.
Optionally, the structural block diagram of the feature extraction module 703 is shown in Fig. 13; the feature extraction module 703 may specifically include the following sub-modules:
a curve extraction sub-module 734, configured to extract the face profile curve from the chin to the earlobe on a side of the face in the target facial image;
a differentiation sub-module 735, configured to perform a second differential operation on the face profile curve output by the curve extraction sub-module to obtain the curvature of the face profile curve.
Optionally, on the basis of Fig. 13, the fat or thin detection module 704 may specifically include: a curvature determination sub-module, configured to determine the fat or thin variability index based on the curvature of the face profile curve.
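The "second differential operation" on the profile curve can be sketched numerically as below, treating the chin-to-earlobe curve as samples y(x) and applying the standard plane-curve formula kappa = |y''| / (1 + y'^2)^(3/2); the sampled-function representation is an assumption of this sketch:

```python
import numpy as np

def profile_curvature(x, y):
    """Pointwise curvature of the face profile curve sampled as y(x).

    Uses first and second numerical derivatives (np.gradient), i.e. the
    second differential operation named in the text:
        kappa = |y''| / (1 + y'**2) ** 1.5
    """
    dy = np.gradient(y, x)      # first derivative y'
    d2y = np.gradient(dy, x)    # second derivative y''
    return np.abs(d2y) / (1.0 + dy ** 2) ** 1.5
```

For a circular arc of radius R this returns 1/R, so a rounder (fatter) profile yields a larger curvature and hence a smaller reciprocal, consistent with the claim that the inverse of the curvature serves as the fat or thin variability index.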
The mobile terminal of the embodiment of the present invention performs fat or thin detection of the user according to the face characteristic information of the target facial image. Since the face characteristic information is able to reflect the degree of fatness or the fat or thin change of the user's face, fat or thin detection can be performed from a visual perspective to obtain the corresponding fat or thin variability index; that is, the user's need for fat or thin detection at the visual level can be satisfied.
Terminal second embodiment
Referring to Fig. 14, a structural schematic diagram of a mobile terminal 1500 of the present invention is shown, which may specifically include: at least one processor 1501, a memory 1502, at least one network interface 1504 and a user interface 1503. The components of the mobile terminal 1500 are coupled together by a bus system 1505. It can be understood that the bus system 1505 is used to realize connection and communication between these components. In addition to a data bus, the bus system 1505 further includes a power bus, a control bus and a status signal bus. However, for the sake of clarity, the various buses are all designated as the bus system 1505 in Fig. 14. The mobile terminal 1500 further includes a photographing component 1506, and the photographing component 1506 includes a camera.
The user interface 1503 may include a display, a keyboard or a pointing device (for example, a mouse, a trackball, a touch-sensitive pad or a touch screen, etc.).
It can be understood that the memory 1502 in the embodiment of the present invention may be a volatile memory or a non-volatile memory, or may include both volatile and non-volatile memories. The non-volatile memory may be a Read-Only Memory (ROM), a Programmable ROM (PROM), an Erasable PROM (EPROM), an Electrically EPROM (EEPROM) or a flash memory. The volatile memory may be a Random Access Memory (RAM), which is used as an external cache. By way of illustrative but not restrictive description, many forms of RAM are available, such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM) and Direct Rambus RAM (DRRAM). The memory 1502 of the systems and methods described in the embodiments of the present invention is intended to include, but not be limited to, these and any other suitable types of memory.
In some embodiments, the memory 1502 stores the following elements, executable modules or data structures, or a subset or superset thereof: an operating system 15021 and application programs 15022.
The operating system 15021 includes various system programs, such as a framework layer, a core library layer, a driver layer, etc., for realizing various basic services and processing hardware-based tasks. The application programs 15022 include various application programs, such as a media player (MediaPlayer), a browser (Browser), etc., for realizing various application services. A program implementing the method of the embodiment of the present invention may be included in the application programs 15022.
In the embodiment of the present invention, by calling a program or instruction stored in the memory 1502, specifically a program or instruction stored in the application programs 15022, the processor 1501 is configured to: obtain the target facial image acquired by the camera; extract the face characteristic information in the target facial image; and determine the fat or thin variability index based on the face characteristic information.
The methods disclosed in the above embodiments of the present invention may be applied to the processor 1501 or implemented by the processor 1501. The processor 1501 may be an integrated circuit chip with signal processing capability. During implementation, each step of the above method may be completed by an integrated logic circuit of hardware in the processor 1501 or by instructions in the form of software. The above processor 1501 may be a general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and may implement or execute the methods, steps and logic block diagrams disclosed in the embodiments of the present invention. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor, etc. The steps of the methods disclosed in the embodiments of the present invention may be directly embodied as being executed and completed by a hardware decoding processor, or by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium mature in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory or an electrically erasable programmable memory, or a register. The storage medium is located in the memory 1502, and the processor 1501 reads the information in the memory 1502 and completes the steps of the above method in combination with its hardware.
It can be understood that the embodiments described in the embodiments of the present invention may be implemented with hardware, software, firmware, middleware, microcode or a combination thereof. For hardware implementation, the processing unit may be implemented in one or more Application Specific Integrated Circuits (ASIC), Digital Signal Processors (DSP), Digital Signal Processing Devices (DSPD), Programmable Logic Devices (PLD), Field-Programmable Gate Arrays (FPGA), general-purpose processors, controllers, microcontrollers, microprocessors, other electronic units for executing the functions described herein, or combinations thereof.
For software implementation, the techniques described in the embodiments of the present invention may be implemented by modules (such as processes, functions, etc.) that perform the functions described in the embodiments of the present invention. The software code may be stored in a memory and executed by a processor. The memory may be implemented in the processor or external to the processor.
Optionally, the processor 1501 is further configured to: obtain one frame of facial image acquired by the camera, and record it as a reference facial image; perform facial feature detection on the reference facial image, extract the coordinates of the two eyes and the nose respectively, and record them respectively as a first datum mark, a second datum mark and a third datum mark; and extract the facial contour in the reference facial image based on the first datum mark, the second datum mark and the third datum mark, and record it as the standard face contour.
Optionally, the processor 1501 is further configured to: display the first datum mark, the second datum mark and the third datum mark in the photographing preview interface while the camera is acquiring a facial image; and generate prompt information, the prompt information being used to prompt the photographer to align the two eyes and the nose with the first datum mark, the second datum mark and the third datum mark respectively.
Optionally, the processor 1501 is further configured to: perform facial feature detection on the target facial image, and extract the first coordinate, the second coordinate and the third coordinate of the two eyes and the nose respectively; compare the first coordinate, the second coordinate and the third coordinate with the first datum mark, the second datum mark and the third datum mark respectively; and if the first coordinate, the second coordinate and the third coordinate are aligned with the first datum mark, the second datum mark and the third datum mark, extract the facial contour in the target facial image and record it as the target face contour.
Optionally, the processor 1501 is further configured to: if at least one of the first coordinate, the second coordinate and the third coordinate is misaligned with the first datum mark, the second datum mark and the third datum mark, perform image processing on the target facial image to generate an intermediate facial image; and extract the facial contour in the intermediate facial image and record it as the target face contour; wherein the first coordinate, the second coordinate and the third coordinate in the intermediate facial image are aligned with the first datum mark, the second datum mark and the third datum mark.
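One way the image processing that produces the intermediate facial image could work is a least-squares similarity transform mapping the detected coordinates onto the datum marks (an Umeyama-style estimate). This is a sketch under that assumption — the patent does not name a specific transform — and applying the result to the whole image would require an image-warping routine such as OpenCV's warpAffine:

```python
import numpy as np

def alignment_transform(src, dst):
    """Estimate scale s, rotation R and translation t such that
    s * R @ p + t maps the detected points `src` (two eyes, nose)
    onto the datum marks `dst` in the least-squares sense.

    Umeyama-style estimate; reflection handling is omitted for brevity.
    """
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    mu_src, mu_dst = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_src, dst - mu_dst

    cov = dst_c.T @ src_c / len(src)          # cross-covariance
    U, S, Vt = np.linalg.svd(cov)
    R = U @ Vt                                # best-fit rotation
    var_src = (src_c ** 2).sum() / len(src)   # source variance
    scale = S.sum() / var_src
    t = mu_dst - scale * R @ mu_src
    return scale, R, t
```

After warping with the recovered (scale, R, t), the three feature coordinates in the intermediate facial image coincide with the datum marks, which is the property the text requires of the intermediate image.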
Optionally, the processor 1501 is further configured to: compare the target face contour with the standard face contour to obtain facial contour change information; and determine the fat or thin variability index based on the facial contour change information.
Optionally, the processor 1501 is further configured to: extract the face profile curve from the chin to the earlobe on a side of the face in the target facial image; and perform a second differential operation on the face profile curve to obtain the curvature of the face profile curve.
Optionally, the processor 1501 is further configured to: determine the fat or thin variability index based on the curvature of the face profile curve.
The mobile terminal of the embodiment of the present invention performs fat or thin detection of the user according to the face characteristic information of the target facial image. Since the face characteristic information is able to reflect the degree of fatness or the fat or thin change of the user's face, fat or thin detection can be performed from a visual perspective to obtain the corresponding fat or thin variability index; that is, the user's need for fat or thin detection at the visual level can be satisfied.
The embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and the same or similar parts between the embodiments may refer to each other.
Those of ordinary skill in the art may realize that the units and algorithm steps of the examples described in conjunction with the embodiments disclosed herein can be implemented by electronic hardware, or by a combination of computer software and electronic hardware. Whether these functions are implemented in hardware or software depends on the specific application and design constraints of the technical solution. Skilled persons may use different methods to implement the described functions for each specific application, but such implementation should not be considered to be beyond the scope of the present invention.
Those skilled in the art can clearly understand that, for convenience and simplicity of description, the specific working processes of the systems, devices and units described above may refer to the corresponding processes in the foregoing method embodiments, and details are not described herein again.
In the embodiments provided in this application, it should be understood that the disclosed devices and methods may be implemented in other ways. For example, the device embodiments described above are merely illustrative; for instance, the division of the units is only a logical function division, and there may be other division manners in actual implementation; multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the mutual coupling, direct coupling or communication connection shown or discussed may be indirect coupling or communication connection through some interfaces, devices or units, and may be electrical, mechanical or in other forms.
The units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the various embodiments of the present invention may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit.
If the functions are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence, or the part contributing to the prior art, or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the methods described in the various embodiments of the present invention. The aforementioned storage medium includes various media that can store program code, such as a USB flash disk, a mobile hard disk, a ROM, a RAM, a magnetic disk or an optical disk.
The above are only specific embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any person familiar with the technical field can easily think of changes or replacements within the technical scope disclosed by the present invention, which should all be covered within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (12)

1. A fat or thin detection method, applied to a mobile terminal with a camera, characterized in that the method comprises:
obtaining the target facial image acquired by the camera;
extracting the face characteristic information in the target facial image;
determining a fat or thin variability index based on the face characteristic information;
wherein the step of extracting the face characteristic information in the target facial image comprises:
extracting the face profile curve from the chin to the earlobe on a side of the face in the target facial image;
performing a second differential operation on the face profile curve to obtain the curvature of the face profile curve;
and the step of determining the fat or thin variability index based on the face characteristic information comprises:
using the inverse of the curvature of the face profile curve as the fat or thin variability index, or normalizing the inverse of the curvature of the face profile curve into a preset range and using the normalized inverse as the fat or thin variability index.
2. The method according to claim 1, characterized in that before the step of extracting the face characteristic information in the target facial image, the method further comprises:
obtaining one frame of facial image acquired by the camera, and recording it as a reference facial image;
performing facial feature detection on the reference facial image, extracting the coordinates of the two eyes and the nose respectively, and recording them respectively as a first datum mark, a second datum mark and a third datum mark;
extracting the facial contour in the reference facial image based on the first datum mark, the second datum mark and the third datum mark, and recording it as the standard face contour.
3. The method according to claim 2, characterized in that before the step of obtaining the target facial image acquired by the camera, the method further comprises:
displaying the first datum mark, the second datum mark and the third datum mark in the photographing preview interface while the camera is acquiring a facial image;
generating prompt information, the prompt information being used to prompt the photographer to align the two eyes and the nose with the first datum mark, the second datum mark and the third datum mark respectively.
4. The method according to claim 2, characterized in that the step of extracting the face characteristic information in the target facial image comprises:
performing facial feature detection on the target facial image, and extracting the first coordinate, the second coordinate and the third coordinate of the two eyes and the nose respectively;
comparing the first coordinate, the second coordinate and the third coordinate with the first datum mark, the second datum mark and the third datum mark respectively;
if the first coordinate, the second coordinate and the third coordinate are aligned with the first datum mark, the second datum mark and the third datum mark, extracting the facial contour in the target facial image and recording it as the target face contour.
5. The method according to claim 4, characterized in that after the step of comparing the first coordinate, the second coordinate and the third coordinate with the first datum mark, the second datum mark and the third datum mark respectively, the method further comprises:
if at least one of the first coordinate, the second coordinate and the third coordinate is misaligned with the first datum mark, the second datum mark and the third datum mark, performing image processing on the target facial image to generate an intermediate facial image;
extracting the facial contour in the intermediate facial image and recording it as the target face contour;
wherein the first coordinate, the second coordinate and the third coordinate in the intermediate facial image are aligned with the first datum mark, the second datum mark and the third datum mark.
6. The method according to claim 4 or 5, characterized in that the step of determining the fat or thin variability index based on the face characteristic information comprises:
comparing the target face contour with the standard face contour to obtain facial contour change information;
determining the fat or thin variability index based on the facial contour change information.
7. A mobile terminal, comprising a camera, characterized in that the mobile terminal further comprises:
an image collection module, configured to obtain the target facial image acquired by the camera;
a feature extraction module, configured to extract the face characteristic information in the target facial image output by the image collection module; and
a fat or thin detection module, configured to determine a fat or thin variability index based on the face characteristic information output by the feature extraction module;
wherein the feature extraction module comprises:
a curve extraction sub-module, configured to extract the face profile curve from the chin to the earlobe on a side of the face in the target facial image;
a differentiation sub-module, configured to perform a second differential operation on the face profile curve output by the curve extraction sub-module to obtain the curvature of the face profile curve;
and the fat or thin detection module is configured to use the inverse of the curvature of the face profile curve as the fat or thin variability index, or to normalize the inverse of the curvature of the face profile curve into a preset range and use the normalized inverse as the fat or thin variability index.
8. The mobile terminal according to claim 7, characterized in that the mobile terminal further comprises:
a reference obtaining module, configured to obtain one frame of facial image acquired by the camera before the feature extraction module extracts the face characteristic information in the target facial image, and record it as a reference facial image;
a face detection module, configured to perform facial feature detection on the reference facial image output by the reference obtaining module, extract the coordinates of the two eyes and the nose respectively, and record them respectively as a first datum mark, a second datum mark and a third datum mark;
a profile extraction module, configured to extract the facial contour in the reference facial image based on the first datum mark, the second datum mark and the third datum mark output by the face detection module, and record it as the standard face contour.
9. The mobile terminal according to claim 8, characterized in that the mobile terminal further comprises:
a benchmark display module, configured to display the first datum mark, the second datum mark and the third datum mark in the photographing preview interface while the camera is acquiring a facial image, before the image collection module obtains the target facial image acquired by the camera;
a prompt module, configured to generate prompt information, the prompt information being used to prompt the photographer to align the two eyes and the nose with the first datum mark, the second datum mark and the third datum mark respectively.
10. The mobile terminal according to claim 8, characterized in that the feature extraction module comprises:
a feature detection sub-module, configured to perform facial feature detection on the target facial image, and extract the first coordinate, the second coordinate and the third coordinate of the two eyes and the nose respectively;
a benchmark comparison sub-module, configured to compare the first coordinate, the second coordinate and the third coordinate output by the feature detection sub-module with the first datum mark, the second datum mark and the third datum mark respectively;
a contour extraction sub-module, configured to extract the facial contour in the target facial image and record it as the target face contour when the benchmark comparison sub-module determines that the first coordinate, the second coordinate and the third coordinate are aligned with the first datum mark, the second datum mark and the third datum mark.
11. The mobile terminal according to claim 10, characterized in that the mobile terminal further comprises:
an image processing module, configured to perform image processing on the target facial image to generate an intermediate facial image when the benchmark comparison sub-module determines that at least one of the first coordinate, the second coordinate and the third coordinate is misaligned with the first datum mark, the second datum mark and the third datum mark;
a target contour extraction module, configured to extract the facial contour in the intermediate facial image output by the image processing module, and record it as the target face contour;
wherein the first coordinate, the second coordinate and the third coordinate in the intermediate facial image are aligned with the first datum mark, the second datum mark and the third datum mark.
12. The mobile terminal according to claim 10 or 11, characterized in that the fat or thin detection module comprises:
a contour comparison sub-module, configured to compare the target face contour with the standard face contour to obtain facial contour change information;
an index determination sub-module, configured to determine the fat or thin variability index based on the facial contour change information output by the contour comparison sub-module.
CN201610609563.XA 2016-07-28 2016-07-28 A kind of fat or thin detection method and mobile terminal Active CN106295520B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610609563.XA CN106295520B (en) 2016-07-28 2016-07-28 A kind of fat or thin detection method and mobile terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610609563.XA CN106295520B (en) 2016-07-28 2016-07-28 A kind of fat or thin detection method and mobile terminal

Publications (2)

Publication Number Publication Date
CN106295520A CN106295520A (en) 2017-01-04
CN106295520B true CN106295520B (en) 2019-10-18

Family

ID=57662761

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610609563.XA Active CN106295520B (en) 2016-07-28 2016-07-28 A kind of fat or thin detection method and mobile terminal

Country Status (1)

Country Link
CN (1) CN106295520B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107423716A (en) * 2017-07-31 2017-12-01 广东欧珀移动通信有限公司 Face method for monitoring state and device
CN114449312A (en) * 2020-11-04 2022-05-06 深圳Tcl新技术有限公司 Video playing control method and device, terminal equipment and storage medium

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101339612B (en) * 2008-08-19 2010-06-16 陈建峰 Face contour checking and classification method
CN102567716B (en) * 2011-12-19 2014-05-28 中山爱科数字科技股份有限公司 Face synthetic system and implementation method
CN102663413B (en) * 2012-03-09 2013-11-27 中盾信安科技(江苏)有限公司 Multi-gesture and cross-age oriented face image authentication method
CN102722246A (en) * 2012-05-30 2012-10-10 南京邮电大学 Human face information recognition-based virtual pet emotion expression method
CN102801646B (en) * 2012-07-25 2016-05-04 上海量明科技发展有限公司 By JICQ compare operation method, client and system

Similar Documents

Publication Publication Date Title
CN102830797B (en) A kind of man-machine interaction method based on sight line judgement and system
US11074436B1 (en) Method and apparatus for face recognition
CN102547123B (en) Self-adapting sightline tracking system and method based on face recognition technology
CN103718175B (en) Detect equipment, method and the medium of subject poses
CN102081503B (en) Electronic reader capable of automatically turning pages based on eye tracking and method thereof
CN105933607B (en) A kind of take pictures effect method of adjustment and the mobile terminal of mobile terminal
US9913578B2 (en) Eye gaze detecting device and eye gaze detection method
CA2782071C (en) Liveness detection
CN106603928B (en) A kind of image pickup method and mobile terminal
CN109375765B (en) Eyeball tracking interaction method and device
US9183431B2 (en) Apparatus and method for providing activity recognition based application service
CN110476141A (en) Sight tracing and user terminal for executing this method
RU2020101280A (en) COMPUTER IMPLEMENTED METHOD AND COMPUTER SOFTWARE PRODUCT TO CONTROL ACCESS TO TERMINAL DEVICE
US10248231B2 (en) Electronic device with fingerprint detection
CN103824072B (en) Method and device for detecting font structure of handwriting character
CN105740780A (en) Method and device for human face in-vivo detection
CN102236412A (en) Three-dimensional gesture recognition system and vision-based gesture recognition method
CN106055446A (en) Mobile terminal test method and apparatus
CN108027656A (en) Input equipment, input method and program
CN107422844B (en) Information processing method and electronic equipment
CN103946887B (en) Gaze position estimation system, gaze position estimation device and control method for gaze position estimation system and device
CN106295520B (en) A kind of fat or thin detection method and mobile terminal
CN106406708A (en) A display method and a mobile terminal
CN112101123A (en) Attention detection method and device
CN108829239A (en) Control method, device and the terminal of terminal

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20210913

Address after: 710077 Floor 9, Block G4, Huanpu Science Park, No. 211, Tiangu 8th Road, High-tech Zone, Xi'an, Shaanxi Province

Patentee after: Xi'an Weiwo Software Technology Co.,Ltd.

Address before: 523860 No. 283 BBK Avenue, Chang'an Town, Dongguan, Guangdong

Patentee before: VIVO MOBILE COMMUNICATION Co.,Ltd.