CN108416253A - Weight monitoring method, system and mobile terminal based on facial image - Google Patents

Weight monitoring method, system and mobile terminal based on facial image

Info

Publication number
CN108416253A
CN108416253A (application CN201810045686.4A)
Authority
CN
China
Prior art keywords
facial image
target
weight
preset
face
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810045686.4A
Other languages
Chinese (zh)
Inventor
刘晨晖
Current Assignee
Shenzhen Tinno Mobile Technology Co Ltd
Shenzhen Tinno Wireless Technology Co Ltd
Original Assignee
Shenzhen Tinno Mobile Technology Co Ltd
Shenzhen Tinno Wireless Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Tinno Mobile Technology Co Ltd and Shenzhen Tinno Wireless Technology Co Ltd
Priority to CN201810045686.4A
Publication of CN108416253A
Legal status: Pending


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G06V40/168 Feature extraction; Face representation
    • G06V40/171 Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
    • G06V40/172 Classification, e.g. identification

Abstract

The present invention provides a weight monitoring method, system and mobile terminal based on facial images, including the following steps: obtaining a target facial image; extracting facial feature data of the target facial image; analyzing the weight of the person depicted in the target facial image according to a similarity comparison result between the facial feature data of the target facial image and the facial feature data of preset facial image samples; and issuing a prompt message when the difference between the person's weight and preset standard human weight data is judged to exceed a set value. The method thereby judges the trend of the user's weight and estimates the user's current weight, reminding the user of their present condition. The measurement is simple and fully automatic, which improves the convenience of the measuring operation and the user experience of the mobile terminal.

Description

Weight monitoring method, system and mobile terminal based on facial image
Technical field
The present invention relates to the technical field of body weight measurement, and in particular to a weight monitoring method, system and mobile terminal based on facial images.
Background technology
With the improvement of people's quality of life, people pay increasing attention to their health and place ever higher demands on body shape. The development of electronic technology since the mid-1950s has driven the rapid development of measuring instruments such as weighing scales and body fat scales. However, traditional mechanical measuring equipment is not easily portable and its degree of intelligence is not high. Therefore, further research into methods of measuring basic human body data has very real significance, and intelligent measurement is the future trend.
Summary of the invention
In view of the above shortcomings of the prior art, the present invention provides a weight monitoring method, system and mobile terminal based on facial images. By mapping information about the user's weight to facial images and reading the facial feature information of the user's facial image, the method judges the trend of the user's weight and estimates the user's current weight, reminding the user of their present condition and thereby solving the prior-art problem of being unable to measure body weight intelligently.
The object of the present invention and the solution to its technical problem are achieved by the following technical scheme.
A weight monitoring method based on facial images proposed by the present invention includes the following steps: obtaining a target facial image; extracting facial feature data of the target facial image; analyzing the weight of the person depicted in the target facial image according to a similarity comparison result between the facial feature data of the target facial image and the facial feature data of preset facial image samples; and issuing a prompt message when the difference between the person's weight and preset standard human weight data is judged to exceed a set value.
Wherein, before analyzing the weight of the person depicted in the target facial image according to the similarity comparison result between the facial feature data of the target facial image and the facial feature data of the preset facial image samples, the method includes: judging whether the resolution of the target facial image exceeds a preset resolution; if the resolution of the target facial image is lower than the preset resolution, extracting the facial feature data of the whole target facial image; if the resolution of the target facial image exceeds the preset resolution, extracting the feature data of each key region in the target facial image.
Wherein, the key regions include: the forehead region, eyebrow region, eye region, nose region, mouth region, chin region and face contour region.
Wherein, the method further includes: extracting the facial feature data of the target facial image through a preset deep convolutional neural network.
Wherein, the method further includes: obtaining target facial images under a plurality of different weights, and establishing in the facial image samples the correspondence between the plurality of different weights and the target facial images; judging the cosine similarity values between the facial feature data of a facial image to be monitored and the facial feature data in the facial image samples; selecting, among the facial image samples, the target facial images whose cosine similarity value exceeds a preset critical value, and analyzing the weight of the person depicted in the target facial image according to the correspondence.
The object of the present invention and the solution to its technical problem may further be achieved by the following technical measures.
A weight monitoring system based on facial images proposed according to the present invention includes: an acquisition module, which obtains a target facial image; an extraction module, which extracts the facial feature data of the target facial image; an analysis module, which analyzes the weight of the person depicted in the target facial image according to the similarity comparison result between the facial feature data of the target facial image and the facial feature data of preset facial image samples; and a reminding module, which issues a prompt message when the difference between the person's weight and preset standard human weight data is judged to exceed a set value.
Wherein, the system further includes a judgment module, which judges whether the resolution of the target facial image exceeds a preset resolution. The extraction module is specifically configured to extract the facial feature data of the whole target facial image if the resolution of the target facial image is lower than the preset resolution, and to extract the feature data of each key region in the target facial image if the resolution exceeds the preset resolution.
Wherein, the key regions include: the forehead region, eyebrow region, eye region, nose region, mouth region, chin region and face contour region.
A mobile terminal includes: at least one processor; a memory; and a weight monitoring program based on facial images stored on the memory. When the weight monitoring program is run by the processor, the following operations are realized: obtaining a target facial image; extracting the facial feature data of the target facial image; analyzing the weight of the person depicted in the target facial image according to the similarity comparison result between the facial feature data of the target facial image and the facial feature data of preset facial image samples; and issuing a prompt message when the difference between the person's weight and preset standard human weight data is judged to exceed a set value.
A computer-readable storage medium stores a weight monitoring program based on facial images. When the program is run by a processor, the steps of any one of the methods described above are realized.
Through the above technical scheme and measures, the present invention maps information about the user's weight to facial images and, by reading the facial feature information of the user's facial image, judges the trend of the user's weight and estimates the user's current weight, reminding the user of their present condition. The measurement is simple and fully automatic, requiring no assistance from others and no physical measuring equipment; it is more intelligent, improves the convenience of the measuring operation and improves the user experience of the mobile terminal.
Description of the drawings
Figure 1A to Figure 1B are module diagrams of a weight monitoring system based on facial images proposed by the present invention.
Fig. 2 is a flowchart of the weight monitoring method based on facial images in the first embodiment of the present invention.
Fig. 3 is a flowchart of the weight monitoring method based on facial images in the second embodiment of the present invention.
Fig. 4 is a schematic diagram of an embodiment of a weight monitoring method based on facial images proposed by the present invention.
Fig. 5 is a structural diagram of a mobile terminal proposed by the present invention.
Detailed description of the embodiments
To further illustrate the technical means taken by the present invention to achieve its intended objects and their effects, the specific implementation, structure, features and effects of the weight monitoring method, system and mobile terminal based on facial images proposed by the present invention are described in detail below in conjunction with the accompanying drawings and preferred embodiments.
Please refer to Figure 1A to Figure 1B, which are module diagrams of a weight monitoring system based on facial images proposed by the present invention.
As shown in Figure 1A to Figure 1B, the system is suitable for a mobile terminal and includes: an acquisition module 110, which obtains a target facial image; an extraction module 120, which extracts the facial feature data of the target facial image; an analysis module 130, which analyzes the weight of the person depicted in the target facial image according to the similarity comparison result between the facial feature data of the target facial image and the facial feature data of preset facial image samples; and a reminding module 140, which issues a prompt message when the difference between the person's weight and preset standard human weight data is judged to exceed a set value.
In one embodiment, the preset facial image samples may be stored in a storage module built into the mobile terminal, or stored on a cloud server.
In one embodiment, the user first selects the weight measuring mode on the mobile terminal and takes a selfie as instructed, so that the user's facial feature information is completely recorded in the selfie; at the same time, the user stands on a scale to measure their current weight, at which point a one-to-one correspondence is established between the weight and the user's photo.
When the user later enters the weight measuring mode again and takes a selfie as instructed, the user's facial feature information is obtained. This facial feature information is compared with the recorded reference information, and the underlying database provides an inferred weight. The user is informed of how much weight has been gained or lost and what the current weight is.
If at this point the user chooses to calibrate the weight, the value read from a scale is input. Self-learning is then carried out through artificial intelligence: the measured actual weight is compared with the inferred weight and corrections are made, adjusting the database so that the inferred weight converges toward the actual weight.
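As an editorial illustration only (the disclosure does not specify the self-learning rule), the calibration step described above might be sketched as a simple correction that nudges the stored estimate toward the scale reading; the function name and the fixed learning rate are assumptions:

```python
def calibrate(predicted_weight, actual_weight, learning_rate=0.5):
    """Move the model's inferred weight toward the measured actual weight.

    A minimal stand-in for the unspecified 'self-learning' adjustment;
    learning_rate controls how fast the database's estimate converges on
    scale measurements.
    """
    return predicted_weight + learning_rate * (actual_weight - predicted_weight)

# Repeated calibrations converge toward the scale reading.
estimate = 70.0  # kg, model's current inference
for measured in (74.0, 74.0, 74.0):
    estimate = calibrate(estimate, measured)
print(round(estimate, 2))  # -> 73.5
```

In a real system the learning rate would likely be tuned, or replaced by retraining on the accumulated (image, weight) pairs.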
In one embodiment, the prompt message includes but is not limited to: voice prompt information, picture prompt information, animation prompt information, and the like. For example, when the analyzed weight of the tester exceeds the standard weight, the tester is judged to be overweight, and a voice reminder states that the tester is overweight and by how much; if the weight is below the standard weight, the tester is judged to be underweight, and a voice reminder states that the tester is underweight, suggests additional nutrition, and gives a suggested amount of weight gain.
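A minimal sketch of this reminder logic follows; the 5 kg set value and the message wording are illustrative assumptions, not values from the disclosure:

```python
def weight_prompt(estimated_weight, standard_weight, set_value=5.0):
    """Return a prompt string when the estimate deviates from the standard
    weight by more than the set value, otherwise None."""
    diff = estimated_weight - standard_weight
    if diff > set_value:
        return f"Overweight by about {diff:.1f} kg; current weight ~{estimated_weight:.1f} kg."
    if diff < -set_value:
        return f"Underweight by about {-diff:.1f} kg; additional nutrition is suggested."
    return None  # within the set value: no reminder issued

print(weight_prompt(72.0, 65.0))  # overweight reminder
print(weight_prompt(64.0, 65.0))  # None: deviation within the set value
```

A deployed reminding module would route the returned string to voice, picture or animation output as the embodiment describes.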
In one embodiment, the system further includes a judgment module 150, which judges whether the resolution of the target facial image exceeds a preset resolution. The extraction module 120 is specifically configured to extract the facial feature data of the whole target facial image if the resolution of the target facial image is lower than the preset resolution, and to extract the feature data of each key region in the target facial image if the resolution exceeds the preset resolution.
The foregoing is illustrated with a specific example, as follows:
Suppose the key regions in the target facial image are the eye region and the mouth region, and the preset number is 20. The eye region then corresponds to 20 target facial image samples, and the mouth region corresponds to 20 target facial image samples. For ease of explanation, let the 20 samples corresponding to the eye region be the first sample group and the 20 samples corresponding to the mouth region be the second sample group. First, count the weights of the persons in the samples of each group. If, in the first sample group, the samples whose person weighs 100 jin (50 kg) are the most numerous, the target weight of the eye region is 100 jin; if, in the second sample group, the samples whose person weighs 120 jin are the most numerous, the target weight of the mouth region is 120 jin. Then choose from the first sample group the 11 similar samples A whose person weighs 100 jin, and from the second sample group the 12 similar samples B whose person weighs 120 jin. Next, extract the cosine similarity value between each of the 11 similar samples A and the eye region as a target cosine similarity value, giving 11 target cosine similarity values for the eye region (each similar sample A corresponds to one value); likewise, extract the cosine similarity value between each of the 12 similar samples B and the mouth region, giving 12 target cosine similarity values for the mouth region. Finally, calculate the average of the 11 target cosine similarity values of the eye region, denoted value 1, and the average of the 12 target cosine similarity values of the mouth region, denoted value 2.
With reference to the example above: the target weight corresponding to the eye region together with value 1, and the target weight corresponding to the mouth region together with value 2, serve as the prior probabilities in the Bayes classifier operation.
In the embodiment of the present invention, the target weight corresponding to each key region and the average value corresponding to each key region are used as prior probabilities; the posterior probability of each candidate weight for the facial image is calculated through the Bayes formula, and the weight with the maximum posterior probability is chosen as the weight of the person depicted in the facial image. A Bayes classifier is an algorithm that calculates, through the Bayes formula, the probability that an object belongs to each class: starting from the object's prior probability, the Bayes formula yields the posterior probability that the object belongs to a given class, and the class with the maximum posterior probability is selected as the class of the object.
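The Bayes operation is underspecified in the text; the sketch below is one editorial reading of it, selecting the candidate weight whose prior times likelihood is largest. The function name is invented, and the priors and likelihoods are illustrative stand-ins for the sample-count proportions and the average cosine similarity values (value 1, value 2) of the worked example:

```python
def max_posterior_weight(candidates):
    """candidates maps a candidate weight to (prior, likelihood).

    By Bayes' rule the posterior is proportional to prior * likelihood
    (the normalising constant is shared), so the argmax needs no division.
    """
    return max(candidates, key=lambda w: candidates[w][0] * candidates[w][1])

# Illustrative numbers: 100 jin favoured by the eye region, 120 jin by the
# mouth region; priors from sample counts, likelihoods from average cosine
# similarity values.
candidates = {
    100.0: (0.55, 0.92),
    120.0: (0.45, 0.88),
}
print(max_posterior_weight(candidates))  # -> 100.0
```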
The numerical size of the preset resolution depends on whether an image at that resolution can clearly reflect the local features of the facial image, such as the face contour, eyes and mouth. In the embodiment of the present invention, a resolution of 200*200 is preferred as the preset resolution. Each key region yields a corresponding comparison result. The key regions may be the regions of the facial organs and the face contour region; the key regions in the target facial image are not limited here.
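The resolution branch can be sketched as a small dispatch function; the 200*200 threshold is the document's preferred value, while the function and strategy names are hypothetical (the equal-resolution case, which the text leaves open, is treated here as not exceeding the preset):

```python
PRESET_RESOLUTION = (200, 200)  # preferred preset resolution from the text

def choose_strategy(width, height, preset=PRESET_RESOLUTION):
    """Low-resolution images fall back to whole-face features; images above
    the preset resolution allow per-key-region feature extraction."""
    if width > preset[0] and height > preset[1]:
        return "key_region_features"
    return "whole_face_features"

print(choose_strategy(640, 480))  # -> key_region_features
print(choose_strategy(160, 120))  # -> whole_face_features
```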
In this way, under complicated imaging conditions, different measurement methods are chosen according to the different resolutions of images, while the deep texture, edge and color features of the facial image assist recognition, increasing the accuracy of the weight measurement.
For example, a facial depth map of the target facial image is obtained through a depth sensor, and a face detection algorithm determines whether the facial depth map is a valid face depth map. A depth map containing a face may be obtained, yet the computer may not be able to automatically recognize whether the map contains a face or whether the pose of the face is reasonable; if the face is turned sideways to the camera, or the facial information is incomplete, the computer may fail to recognize it. Applying a face detection algorithm to the obtained depth map ensures that the collected depth map is valid and that effective facial feature data can be obtained from it. Face detection means searching any given image with a certain strategy to determine whether it contains a face, and if so, returning the position, size and pose of the face.
In one embodiment, the key regions include: the forehead region, eyebrow region, eye region, nose region, mouth region, chin region and face contour region.
In the field of image recognition, "image feature" is a term of art, and feature extraction is a concept in computer vision and image processing.
In one embodiment, facial feature extraction is carried out on the facial image through a preset deep convolutional neural network. If the resolution of the facial image is lower than the preset resolution, feature extraction is performed on the facial image through the preset deep convolutional neural network, where the facial image is an image of the whole face including the hair region.
A deep convolutional neural network (CNN, Convolutional Neural Network) is a deep neural network with a convolutional structure. It includes at least two nonlinear trainable convolutional layers, two nonlinear fixed convolutional layers and a fully connected layer, for a total of at least five hidden layers, and is mainly used in speech analysis and image recognition.
In practical applications, the features of the facial image can be extracted through the fully connected layer of the deep convolutional neural network, which effectively describes the edge, texture and color features of the image. It should be noted that because the resolution of the facial image is lower than the preset resolution, the image is not sharp enough to extract clear local features such as the features of the face contour or eyebrows; what is extracted here through the preset deep convolutional neural network is therefore the feature of the entire face image.
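As a purely illustrative aside (the patent's network is a trained deep CNN, not reproduced here), a single convolution plus flatten in plain Python shows mechanically what turning an image into a feature vector means; the toy image and kernel values are invented:

```python
def conv2d_valid(image, kernel):
    """2D 'valid' convolution (really cross-correlation, as in CNN layers)."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            row.append(sum(image[i + di][j + dj] * kernel[di][dj]
                           for di in range(kh) for dj in range(kw)))
        out.append(row)
    return out

def feature_vector(image, kernel):
    """Convolve, apply ReLU, flatten: a toy stand-in for reading features
    off a network's final layer."""
    fmap = conv2d_valid(image, kernel)
    return [max(0, v) for row in fmap for v in row]

img = [[1, 2, 0],
       [0, 1, 3],
       [2, 1, 1]]
diag = [[1, 0],
        [0, 1]]  # crude diagonal-pattern kernel
print(feature_vector(img, diag))  # -> [2, 5, 1, 2]
```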
Please refer to Fig. 2, which is a flowchart of the weight monitoring method based on facial images in the first embodiment of the present invention, including the following steps:
Step S210: obtain a target facial image;
Step S220: extract the facial feature data of the target facial image;
Step S230: analyze the weight of the person depicted in the target facial image according to the similarity comparison result between the facial feature data of the target facial image and the facial feature data in the preset facial image samples; and
Step S240: issue a prompt message when the difference between the person's weight and the preset standard human weight data is judged to exceed a set value.
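Wired together, steps S210 to S240 might look like the following sketch, in which every helper (camera capture, feature extractor, weight estimator) is a hypothetical stub rather than an API from the disclosure:

```python
def monitor_weight(capture, extract, estimate, standard_weight, set_value=5.0):
    face = capture()             # S210: obtain a target facial image
    features = extract(face)     # S220: extract facial feature data
    weight = estimate(features)  # S230: similarity comparison -> estimated weight
    if abs(weight - standard_weight) > set_value:  # S240: prompt on deviation
        return (f"Reminder: estimated weight {weight:.1f} deviates from "
                f"standard {standard_weight:.1f}.")
    return None

# Stubs standing in for the camera, the CNN and the sample database.
msg = monitor_weight(
    capture=lambda: "image",
    extract=lambda img: [0.1, 0.9],
    estimate=lambda feats: 73.0,
    standard_weight=65.0,
)
print(msg)
```

Passing the stages in as callables keeps the sketch testable without any real camera or model.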
In one embodiment, the prompt message includes but is not limited to: voice prompt information, picture prompt information, animation prompt information, and the like. For example, when the analyzed weight of the tester exceeds the standard weight, the tester is judged to be overweight, and a voice reminder states that the tester is overweight and by how much; if the weight is below the standard weight, the tester is judged to be underweight, and a voice reminder states that the tester is underweight, suggests additional nutrition, and gives a suggested amount of weight gain.
In one embodiment, the features of the facial image samples include the features of faces of various genders, various expressions and various weights.
In one embodiment, the facial feature data of the target facial image is extracted through a preset deep convolutional neural network. A deep convolutional neural network (CNN, Convolutional Neural Network) is a deep neural network with a convolutional structure; it includes at least two nonlinear trainable convolutional layers, two nonlinear fixed convolutional layers and a fully connected layer, for a total of at least five hidden layers, and is mainly used in speech analysis and image recognition.
In practical applications, the features of the facial image can be extracted through the fully connected layer of the deep convolutional neural network, which effectively describes the edge, texture and color features of the image. It should be noted that if the resolution of the facial image is lower than the preset resolution, the image is not sharp enough to extract clear local features such as the features of the face contour or eyebrows, so what is extracted here through the preset deep convolutional neural network is the feature of the entire face image.
In one embodiment, the method further includes: obtaining target facial images under a plurality of different weights, and establishing in the facial image samples the correspondence between the plurality of different weights and the target facial images; judging the cosine similarity values between the facial feature data of the facial image to be monitored and the facial feature data in the facial image samples; selecting among the facial image samples the target facial images whose cosine similarity value exceeds a preset critical value, and analyzing the weight of the person depicted in the target facial image according to the correspondence.
Specifically, a cosine similarity comparison is carried out between the facial features of the facial image to be monitored and the facial features of the facial image samples, calculating the cosine similarity value between the facial image to be monitored and each sample. The number of cosine similarity values calculated equals the number of facial image samples; in other words, one cosine similarity value is calculated between each facial image sample and the facial image to be monitored. Cosine similarity uses the cosine of the angle between two vectors in a vector space as a measure of the difference between two individuals.
The larger the calculated cosine similarity value, the higher the similarity. In practical applications, the facial image samples are first sorted from high similarity to low according to the calculated cosine similarity values; then, starting from the highest similarity, a preset number of target facial image samples is chosen from the facial image samples, and the corresponding weight is judged from those target facial image samples. The preset number may be chosen arbitrarily; of course, the more samples chosen, the higher the accuracy of the final judgment. The preset critical value may also be chosen arbitrarily; in practical applications, those skilled in the art obtain it through a large number of simulation experiments.
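The comparison and critical-value selection just described can be sketched directly; the feature vectors, sample weights and the 0.8 critical value below are illustrative, not values from the disclosure:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def similar_samples(query, samples, critical_value=0.8):
    """samples: list of (feature_vector, weight); returns the weights of the
    samples whose similarity to the query exceeds the critical value."""
    return [w for feats, w in samples
            if cosine_similarity(query, feats) > critical_value]

query = [1.0, 0.0, 1.0]
samples = [([1.0, 0.1, 0.9], 60.0),   # close to the query
           ([0.0, 1.0, 0.0], 80.0)]   # nearly orthogonal to the query
print(similar_samples(query, samples))  # -> [60.0]
```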
Please refer to Fig. 3, which is a flowchart of the weight monitoring method based on facial images in the second embodiment of the present invention. It further refines step S220 in Fig. 2 and includes the following steps:
Step S310: judge whether the resolution of the target facial image exceeds the preset resolution;
The numerical size of the preset resolution depends on whether an image at that resolution can clearly reflect local facial features, such as the face contour, eyes and mouth. In the embodiment of the present invention, a resolution of 200*200 is preferred as the preset resolution.
In one embodiment, after judging whether the resolution of the facial image exceeds the preset resolution, the method further includes: determining the facial image in the image to be recognized through face detection and face key point localization, and setting the facial image in the image to be recognized as the detection region. Face detection is carried out on the input image through a Haar classifier or the DLIB (C++ library) algorithms, and face key point localization is then carried out on the detected image through the supervised descent method (SDM, Supervised Descent Method), where the face key points located by the SDM algorithm include: the eyebrows, eyes, nose, mouth and face contour. Of course, face detection and face key point localization may also be realized through other algorithms. A Haar classifier, including the adaptive boosting (AdaBoost) algorithm, is an algorithm in the field of image recognition that classifies faces versus non-faces. DLIB is a C++ algorithm library that can be applied to face detection and face key point localization.
Step S320A: if the resolution of the target facial image is lower than the preset resolution, extract the facial feature data of the whole target facial image;
In one embodiment, facial feature extraction is carried out on the target facial image through a preset deep convolutional neural network, where the facial image is an image of the whole face including the hair region. It should be noted that because the resolution of the target facial image is lower than the preset resolution, the image is not sharp enough to extract clear local features such as the features of the face contour or eyes, so what is extracted here through the preset deep convolutional neural network is the feature of the image of the entire face.
Step S320B: if the resolution of the target facial image exceeds the preset resolution, extract the feature data of each key region in the target facial image.
In one embodiment, the features of each key region in the target facial image are extracted through the deep convolutional neural network, where the key regions include: the forehead region, eyebrow region, eye region, nose region, mouth region, chin region and face contour region, and the target facial image is composed of the key regions in the image. From the detected face key points, such as the eyes, nose, mouth, eyebrows and face contour, the key regions in the target facial image are extracted according to the positions of the face key points. The face image is then reconstructed from the extracted key regions to obtain the target facial image; the target facial image obtained after reconstruction can more accurately describe local facial areas, such as the facial skin, of persons of different weights.
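Cropping a key region from located key points can be sketched as a padded bounding box around that region's points; the landmark coordinates and the margin below are invented for illustration:

```python
def region_bbox(points, margin=4):
    """Axis-aligned bounding box (left, top, right, bottom) around a key
    region's landmark points, expanded by `margin` pixels on each side."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (min(xs) - margin, min(ys) - margin,
            max(xs) + margin, max(ys) + margin)

eye_landmarks = [(120, 88), (132, 84), (144, 90)]  # hypothetical eye key points
print(region_bbox(eye_landmarks))  # -> (116, 80, 148, 94)
```

A real implementation would clamp the box to the image bounds and crop the pixel data, but the box itself is the essential step.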
In one embodiment, the weight of the person depicted in the target facial image is analyzed according to the similarity comparison result between the features of each key region in the target facial image and the facial features of the preset facial image samples.
Specifically, by by the spy of the feature and the facial image sample in each key position region in the target facial image Cosine similarity comparison is carried out between sign, obtains the corresponding cosine similarity value in each key position region;It is each according to this The corresponding cosine similarity value in key position region, according to the sequence of similarity from high to low, in the facial image sample The middle target facial image sample for choosing the corresponding preset number in each key position region;
The weights of the persons in the target face image samples selected for each key region are then counted, and the weight with the largest sample count is taken as the target weight of that key region. Next, among the target face image samples of each key region, the samples whose person's weight equals the target weight (the "similar samples") are selected, and the cosine similarity values of these similar samples are extracted as target cosine similarity values.
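The majority vote over the selected samples' weights can be sketched as follows; the weight values are hypothetical.

```python
from collections import Counter

def target_weight(sample_weights):
    """Pick the weight that occurs most often among the persons of the
    selected samples for one key region (Counter breaks ties by first seen)."""
    return Counter(sample_weights).most_common(1)[0][0]

# Hypothetical weights (kg) of the top-5 eye-region samples:
print(target_weight([70, 72, 70, 68, 70]))   # -> 70
```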
The average of the target cosine similarity values of each key region is then calculated. Finally, a Bayesian operation is performed on the target weight of each key region together with the corresponding average similarity, yielding the weight of the person depicted in the face image.
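The patent does not spell out the Bayesian operation, so the following is only one plausible reading of it: a similarity-weighted average of the per-region target weights. All region names and numbers are illustrative.

```python
def combine_region_weights(region_results):
    """region_results: {region: (target_weight_kg, mean_target_similarity)}.
    Returns a similarity-weighted average of the per-region target weights -
    one plausible stand-in for the patent's unspecified Bayesian combination."""
    total_sim = sum(sim for _, sim in region_results.values())
    return sum(w * sim for w, sim in region_results.values()) / total_sim

results = {"eyes": (70.0, 0.95), "mouth": (75.0, 0.90), "contour": (70.0, 0.80)}
estimate = combine_region_weights(results)
print(round(estimate, 2))   # regions with higher similarity pull the estimate harder
```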
The following example illustrates how the cosine similarity values are obtained. Suppose the key regions of the target face image are the eye region and the mouth region, and face image sample A accordingly contains an eye-region sample and a mouth-region sample. During comparison, the feature of the eye region of the target face image is compared by cosine similarity with the feature of the eye-region sample of sample A, yielding cosine similarity value A; the feature of the mouth region of the target face image is then compared with the feature of the mouth-region sample of sample A, yielding cosine similarity value B. If there is a further face image sample B, the same comparison process as between the target face image and sample A is repeated with sample B.
Each key region thus corresponds to multiple cosine similarity values; the larger the value, the higher the similarity. In practice, the face image samples are first ranked by their computed cosine similarity values from high to low, and a preset number of target image samples is then selected in order of similarity. The preset number can be chosen freely; naturally, the more samples are selected, the more accurate the final judgment will be.
In addition, each face image sample may itself be an image composed of its key sample regions, reconstructed in the same way as the target face image is reconstructed from its key regions.
It should be noted that the key sample regions include the forehead region, eyebrow region, eye region, nose region, mouth region, chin region, and face-contour region. The key sample regions of a face image sample correspond to the key regions of the target face image: if the key regions of the target face image include the mouth region, then the key sample regions include the mouth region; if the key regions of the target face image include the eye region, then the key sample regions include the eye region.
Referring to Fig. 4, a schematic diagram of an embodiment of the weight monitoring method based on face images proposed by the present invention.
As shown in Fig. 4, the user first enters the weight-measurement mode of the mobile terminal and then takes a selfie 400. The terminal can extract facial feature data associated with the face image in the selfie 400; different face images carry different facial feature information, otherwise a person could not be identified by the face. In addition, an increase in the user's weight corresponds to an increase in facial fat: the face becomes relatively wider and, in general, rounder; a decrease in the user's weight corresponds to a reduction in facial fat: the face becomes relatively more pointed.
In Fig. 4, when the user's weight increases, the cheek typically changes from A to B; when the user's weight decreases, the cheek typically changes from B to A. The extraction module 120 of the mobile terminal extracts feature information from the face in the selfie 400 and compares it with the preset face image samples, finding in the preset samples the specific face image most similar to the face in the selfie 400. According to the correspondence between weight and that specific face image in the face image samples, the user's current weight and its change can be determined.
In one embodiment, the method further includes: as weight increases, facial fat increases and typically squeezes the eye sockets, making the eyes appear smaller; the change in weight can also be estimated from the distance between the two eyes, such as from D to C.
In one embodiment, the method further includes: during weight gain or loss, the nose E changes little. However, the change in body weight can be calculated from the ratio of the surface area of the nose E to the area of the entire face.
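A toy sketch of the nose-to-face area ratio just described; the pixel areas are invented for illustration.

```python
def nose_to_face_ratio(nose_area, face_area):
    """Ratio of nose area to whole-face area, e.g. from pixel counts."""
    return nose_area / face_area

# Hypothetical measurements (pixel counts): the nose area stays roughly
# constant while the whole-face area grows with weight gain, so the ratio drops.
before = nose_to_face_ratio(1200, 20000)   # 0.060
after = nose_to_face_ratio(1200, 23000)    # ~0.052
assert after < before   # a falling ratio suggests the face (and weight) grew
```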
Referring to Fig. 5, a structural schematic diagram of a mobile terminal proposed by the present invention; please also refer to Fig. 1A and Fig. 1B.
In Fig. 5, the mobile terminal 500 includes at least one processor 501, a memory 502, and at least one network interface 503. The components of the mobile terminal 500 are coupled together through a bus system 504. It will be understood that the bus system 504 implements the connections and communication among these components; besides a data bus, it further includes a power bus, a control bus, and a status signal bus. For clarity of explanation, however, all buses are labeled as the bus system 504 in Fig. 5.
In one embodiment, the terminal further includes a user interface 505, which may include a display and a keyboard or pointing device (for example, a mouse, a trackball, a touch pad, or a touch screen) for receiving the user's input operations.
It will be appreciated that the memory 502 in embodiments of the present invention may be volatile memory, non-volatile memory, or both. Non-volatile memory may be read-only memory, programmable read-only memory, erasable programmable read-only memory, electrically erasable programmable read-only memory, or flash memory. Volatile memory may be random access memory, used as an external cache. The memory 502 of the systems and methods described herein is intended to include, without limitation, these and any other suitable types of memory.
In addition, the memory 502 stores the following elements, executable modules or data structures, or subsets or supersets thereof: an operating system 5021 and application programs 5022. The operating system 5021 includes various system programs, such as a framework layer, a core library layer, and a driver layer, for implementing basic services and handling hardware-based tasks. The application programs 5022 include various applications, such as a media player (MediaPlayer) and a browser (Browser), for implementing application services. A program implementing the method of an embodiment of the present invention may be contained in the application programs 5022.
In one embodiment, by calling a program or instructions stored in the memory 502, specifically a program or instructions stored in the application programs 5022, the processor 501 is configured to: obtain a target face image; extract the facial feature data of the target face image; analyze the weight of the person depicted in the target face image according to the similarity comparison results between the facial feature data of the target face image and the facial feature data of preset face image samples; and issue a prompt message when the difference between the person's weight and preset standard body weight data is judged to exceed a set value.
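The alert step the processor performs can be sketched as follows, assuming the weight estimate has already been produced by the comparison steps; the threshold and weight values are illustrative.

```python
def monitor_weight(estimated_weight, standard_weight, threshold):
    """Return a prompt string when the estimated weight deviates from the
    preset standard weight by more than `threshold` kg, else None -
    mirroring the prompt-message step described above."""
    diff = abs(estimated_weight - standard_weight)
    if diff > threshold:
        direction = "above" if estimated_weight > standard_weight else "below"
        return f"Weight alert: {diff:.1f} kg {direction} the standard value."
    return None

print(monitor_weight(78.0, 70.0, 5.0))   # deviation 8.0 -> alert
print(monitor_weight(72.0, 70.0, 5.0))   # deviation 2.0 -> no alert (None)
```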
The methods disclosed in the embodiments of the present invention may be applied to, or implemented by, the processor 501. The processor 501 may be an integrated circuit chip with signal processing capability. During implementation, the steps of the above methods may be completed by integrated logic circuits in hardware within the processor 501 or by instructions in the form of software. The steps of the methods disclosed in connection with the embodiments of the present invention may be executed and completed directly by a hardware decoding processor, or by a combination of hardware and software modules in a decoding processor.
A software module may reside in a storage medium mature in the art, such as random access memory, flash memory, read-only memory, programmable read-only memory, electrically erasable programmable memory, or a register. The storage medium resides in the memory 502; the processor 501 reads the information in the memory 502 and completes the steps of the above methods in combination with its hardware.
In one embodiment, a computer-readable storage medium is provided, on which a weight monitoring program based on face images is stored; when run by the processor 501, the program implements the steps of any of the methods described above.
In one embodiment, the processor 501 is further configured to: judge whether the resolution of the target face image is greater than a preset resolution; if the resolution of the target face image is less than the preset resolution, extract the facial feature data of the target face image; and if the resolution of the target face image is greater than the preset resolution, extract the feature data of each key region of the target face image.
In one embodiment, the processor 501 is further configured to extract the facial feature data of the target face image through a preset deep convolutional neural network.
In one embodiment, the processor 501 is further configured to: obtain target face images under a plurality of different weights, and establish the correspondence between the plurality of different weights and the target face images in the face image samples; compute the cosine similarity values between the facial feature data of the face image to be monitored and the facial feature data in the face image samples; select, from the face image samples, the target face images whose cosine similarity value exceeds a preset critical value; and analyze the weight of the person depicted in the target face image according to the correspondence.
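A minimal sketch of the threshold-based selection just described, assuming the sample store maps each recorded weight to a stored feature vector; the vectors and the critical value 0.9 are invented for illustration.

```python
import math

def cos_sim(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def weights_above_threshold(feature, sample_db, threshold):
    """sample_db maps weight (kg) -> stored feature vector for that weight.
    Returns the weights whose stored image exceeds the similarity threshold."""
    return [w for w, f in sample_db.items() if cos_sim(feature, f) > threshold]

db = {68.0: [1.0, 0.0], 72.0: [0.9, 0.1], 80.0: [0.0, 1.0]}  # toy correspondences
current = [1.0, 0.05]   # hypothetical feature of the image to be monitored
print(weights_above_threshold(current, db, 0.9))   # -> [68.0, 72.0]
```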
The mobile terminal 500 can implement each process implemented by the mobile terminal in the foregoing embodiments; to avoid repetition, the details are not repeated here.
In one embodiment, the terminal in embodiments of the present invention may be an electronic device with wireless network access, such as a mobile phone or a tablet computer, or a wearable device; the embodiments of the present invention are not limited in this respect.
The present invention maps the user's weight information to face images: according to the facial feature information read from the user's face image, it judges the trend of the user's weight and estimates the current weight, reminding the user of his or her present weight and physical condition. The measurement is simple and fully automatic, requiring no assistance from others and no physical measuring device; it is more intelligent, makes the measurement operation more convenient, and improves the user experience of the mobile terminal.
The above is only a preferred embodiment of the present invention and is not intended to limit the present invention in any form. Although the present invention has been disclosed by way of preferred embodiments, they are not intended to limit it. Any person skilled in the art may, without departing from the scope of the technical solution of the present invention, make minor changes or modifications using the technical content disclosed above to form equivalent embodiments of equivalent variation; any simple modification, equivalent variation, or modification of the above embodiments made according to the technical essence of the present invention still falls within the scope of the technical solution of the present invention.

Claims (10)

1. A weight monitoring method based on face images, characterized by comprising the following steps:
obtaining a target face image;
extracting facial feature data of the target face image;
analyzing the weight of the person depicted in the target face image according to similarity comparison results between the facial feature data of the target face image and facial feature data in preset face image samples; and
issuing a prompt message when the difference between the weight of the person and preset standard body weight data is judged to exceed a set value.
2. The weight monitoring method based on face images according to claim 1, characterized in that, before analyzing the weight of the person depicted in the target face image according to the similarity comparison results between the facial feature data of the target face image and the facial feature data in the preset face image samples, the method comprises:
judging whether the resolution of the target face image is greater than a preset resolution;
if the resolution of the target face image is less than the preset resolution, extracting the facial feature data of the target face image; and
if the resolution of the target face image is greater than the preset resolution, extracting feature data of each key region of the target face image.
3. The weight monitoring method based on face images according to claim 2, characterized in that each key region comprises: a forehead region, an eyebrow region, an eye region, a nose region, a mouth region, a chin region, and a face-contour region.
4. The weight monitoring method based on face images according to claim 1, characterized by further comprising:
extracting the facial feature data of the target face image through a preset deep convolutional neural network.
5. The weight monitoring method based on face images according to claim 1, characterized by further comprising:
obtaining target face images under a plurality of different weights, and establishing the correspondence between the plurality of different weights and the target face images in the face image samples;
computing cosine similarity values between the facial feature data of a face image to be monitored and the facial feature data in the face image samples;
selecting, from the face image samples, the target face images whose cosine similarity value exceeds a preset critical value; and
analyzing the weight of the person depicted in the target face images according to the correspondence.
6. A weight monitoring system based on face images, characterized by comprising:
an acquisition module for obtaining a target face image;
an extraction module for extracting facial feature data of the target face image;
an analysis module for analyzing the weight of the person depicted in the target face image according to similarity comparison results between the facial feature data of the target face image and facial feature data in preset face image samples; and
a prompting module for issuing a prompt message when the difference between the weight of the person and preset standard body weight data is judged to exceed a set value.
7. The weight monitoring system based on face images according to claim 6, characterized by further comprising:
a judgment module for judging whether the resolution of the target face image is greater than a preset resolution;
wherein the extraction module is specifically configured to: extract the facial feature data of the target face image if the resolution of the target face image is less than the preset resolution; and extract feature data of each key region of the target face image if the resolution of the target face image is greater than the preset resolution.
8. The weight monitoring system based on face images according to claim 7, characterized in that each key region comprises: a forehead region, an eyebrow region, an eye region, a nose region, a mouth region, a chin region, and a face-contour region.
9. A mobile terminal, characterized by comprising:
at least one processor;
a memory; and
a weight monitoring program based on face images stored on the memory, wherein the following operations are implemented when the program is run by the processor:
obtaining a target face image;
extracting facial feature data of the target face image;
analyzing the weight of the person depicted in the target face image according to similarity comparison results between the facial feature data of the target face image and facial feature data in preset face image samples; and
issuing a prompt message when the difference between the weight of the person and preset standard body weight data is judged to exceed a set value.
10. A computer-readable storage medium on which a weight monitoring program based on face images is stored, wherein the steps of the method according to any one of claims 1-5 are implemented when the weight monitoring program based on face images is run by a processor.
CN201810045686.4A 2018-01-17 2018-01-17 Avoirdupois monitoring method, system and mobile terminal based on facial image Pending CN108416253A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810045686.4A CN108416253A (en) 2018-01-17 2018-01-17 Avoirdupois monitoring method, system and mobile terminal based on facial image


Publications (1)

Publication Number Publication Date
CN108416253A true CN108416253A (en) 2018-08-17

Family

ID=63125980

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810045686.4A Pending CN108416253A (en) 2018-01-17 2018-01-17 Avoirdupois monitoring method, system and mobile terminal based on facial image

Country Status (1)

Country Link
CN (1) CN108416253A (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130070973A1 (en) * 2011-09-15 2013-03-21 Hiroo SAITO Face recognizing apparatus and face recognizing method
US20160048721A1 (en) * 2014-08-12 2016-02-18 Joseph Cole Harper System and method for accurately analyzing sensed data
CN106469298A (en) * 2016-08-31 2017-03-01 乐视控股(北京)有限公司 Age recognition methodss based on facial image and device
CN106529400A (en) * 2016-09-26 2017-03-22 深圳奥比中光科技有限公司 Mobile terminal and human body monitoring method and device


Non-Patent Citations (1)

Title
Lingyun Wen et al.: "A computational approach to body mass index prediction from face images", Image and Vision Computing

Cited By (8)

Publication number Priority date Publication date Assignee Title
CN109390056A (en) * 2018-11-05 2019-02-26 平安科技(深圳)有限公司 Health forecast method, apparatus, terminal device and computer readable storage medium
CN109745014A (en) * 2018-12-29 2019-05-14 江苏云天励飞技术有限公司 Thermometry and Related product
CN111523501A (en) * 2020-04-27 2020-08-11 阳光保险集团股份有限公司 Body mass index prediction method and device
CN111523501B (en) * 2020-04-27 2023-09-15 阳光保险集团股份有限公司 Body mass index prediction method and device
CN112418022A (en) * 2020-11-10 2021-02-26 广州富港万嘉智能科技有限公司 Human body data detection method and device
CN112418025A (en) * 2020-11-10 2021-02-26 广州富港万嘉智能科技有限公司 Weight detection method and device based on deep learning
CN112418022B (en) * 2020-11-10 2024-04-09 广州富港生活智能科技有限公司 Human body data detection method and device
CN114496263A (en) * 2022-04-13 2022-05-13 杭州研极微电子有限公司 Neural network model establishing method for weight estimation and readable storage medium


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20180817