WO2020103417A1 - BMI evaluation method and device, and computer readable storage medium - Google Patents

BMI evaluation method and device, and computer readable storage medium

Info

Publication number
WO2020103417A1
Authority
WO
WIPO (PCT)
Prior art keywords
face
bmi
face image
contour
evaluation
Prior art date
Application number
PCT/CN2019/088637
Other languages
French (fr)
Chinese (zh)
Inventor
石磊
马进
王健宗
肖京
Original Assignee
平安科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 平安科技(深圳)有限公司
Publication of WO2020103417A1

Links

Images

Classifications

    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/30 ICT specially adapted for calculating health indices; for individual health risk assessment
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification

Definitions

  • the present application relates to the field of intelligent decision-making technology, and in particular, to a BMI evaluation method, device, and computer-readable storage medium.
  • The BMI index (Body Mass Index, BMI), also known as the body mass index, is a parameter related to height and weight that reflects body mass and obesity. It is calculated as weight in kilograms divided by the square of height in meters. It is mainly used for statistics: when comparing and analyzing the health impact of a person's weight across people of different heights, the BMI value is a neutral and reliable indicator, and it is an internationally common standard for measuring the degree of body fatness and health. BMI is an indicator closely related to total body fat, taking into account the two factors of weight and height. BMI is simple, practical, and can reflect systemic overweight and obesity.
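The formula above can be illustrated in a few lines. The category thresholds shown are the conventional WHO-style cutoffs, not values stated in this application, and are included only for demonstration:

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body Mass Index: weight in kilograms divided by the square of height in meters."""
    return weight_kg / height_m ** 2

def bmi_category(value: float) -> str:
    # Illustrative WHO-style thresholds; the application itself only distinguishes
    # "extremely low", "normal", and "extremely high" categories.
    if value < 18.5:
        return "low"
    if value < 25.0:
        return "normal"
    return "high"

print(round(bmi(70.0, 1.75), 2))        # 22.86
print(bmi_category(bmi(70.0, 1.75)))    # normal
```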
  • The traditional BMI acquisition method must first measure the height and weight of the person to be measured. This information is obtained using a height-and-weight tester or other measurement sensors, and the final BMI index is then calculated numerically. Such instruments are usually not easy to carry, and the procedure is cumbersome, time-consuming, and slow, so BMI cannot be measured effectively in real time, quickly and conveniently.
  • the present application provides a BMI evaluation method, device, and computer-readable storage medium. Its main purpose is to quickly detect BMI values in real time and reduce the difficulty of measuring BMI by a surveyor.
  • the present application also provides a BMI evaluation method, which includes:
  • sample face image data includes the BMI value
  • An embodiment of the present application further includes a BMI evaluation device.
  • the device includes a memory and a processor.
  • the memory stores a BMI evaluation program that can run on the processor.
  • When the BMI evaluation program is executed by the processor, the following steps are implemented:
  • sample face image data includes the BMI value
  • An embodiment of the present application also provides a computer-readable storage medium, which stores a BMI evaluation program that can be executed by one or more processors to implement the steps of the method described above.
  • the BMI evaluation method, device and computer-readable storage medium proposed in this application collect the face image and BMI value in advance, train the network model through a learning algorithm, and predict the BMI value of the face image to be detected according to the trained network model.
  • BMI detection can be completed automatically without complicated measurement equipment, which not only greatly reduces the measurement difficulty of BMI measurers, but also enables real-time measurement for the measurers to monitor their own health status.
  • FIG. 1 is a schematic flowchart of a BMI evaluation method provided by an embodiment of this application.
  • FIG. 2 is a schematic diagram of a human face labeled with facial feature points according to an embodiment of the present application
  • FIG. 3 is an effect diagram of edge contour extraction using a Sobel operator provided by an embodiment of the present application.
  • FIG. 4 is a schematic diagram of a bilinear interpolation model provided by an embodiment of this application.
  • FIG. 5 is a schematic diagram of an internal structure of a BMI evaluation device provided by an embodiment of the present application.
  • FIG. 6 is a schematic block diagram of a BMI evaluation program in a BMI evaluation device provided by an embodiment of the present application.
  • FIG. 1 is a schematic flowchart of a BMI evaluation method provided by an embodiment of the present application.
  • the method may be executed by an apparatus, and the apparatus may be implemented by software and / or hardware.
  • the BMI evaluation method includes:
  • Step S10 Collect sample face image data, the sample face image data includes a BMI value.
  • step S20 a convolutional neural network is used to train sample face image data to obtain a training model.
  • The online server can collect 10,000 sample face images with BMI values, and train the convolutional neural network under the Caffe deep learning framework.
  • The training of the convolutional neural network includes: first, uniformly cropping the face images to a size of 224×224; then converting them into a uniform LevelDB format; and finally training the VGG-16 convolutional neural network with these images.
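The cropping step can be sketched as follows. This is a simplified stand-in using a center crop and nearest-neighbour resizing; the actual pipeline uses Caffe tooling and the LevelDB format, which are not reproduced here:

```python
import numpy as np

def center_crop_square(img: np.ndarray) -> np.ndarray:
    """Crop the largest centered square from an H x W (x C) image."""
    h, w = img.shape[:2]
    s = min(h, w)
    top, left = (h - s) // 2, (w - s) // 2
    return img[top:top + s, left:left + s]

def resize_nearest(img: np.ndarray, size: int = 224) -> np.ndarray:
    """Nearest-neighbour resize of a square image to size x size."""
    s = img.shape[0]
    idx = (np.arange(size) * s / size).astype(int)
    return img[idx][:, idx]

face = np.zeros((480, 640, 3), dtype=np.uint8)  # dummy camera frame
prepared = resize_nearest(center_crop_square(face), 224)
print(prepared.shape)  # (224, 224, 3)
```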
  • the convolutional neural network used includes 1 data input layer, 13 convolutional layers, and 3 fully connected layers.
  • The numbers of convolution kernels in the 13 convolutional layers are 64, 64, 128, 128, 256, 256, 256, 512, 512, 512, 512, 512, and 512, respectively.
  • A pooling layer is connected between the second and third convolutional layers, between the fourth and fifth convolutional layers, between the seventh and eighth convolutional layers, between the tenth and eleventh convolutional layers, and between the thirteenth convolutional layer and the first fully connected layer. All 13 convolutional layers and 3 fully connected layers are processed with the ReLU nonlinear activation function.
  • The last layer of the VGG-16 network model is removed and retrained, and finally the softmax function is used to output the probability values of three categories: extremely low BMI, normal BMI, and extremely high BMI.
  • the output value is the category value corresponding to the maximum probability value.
  • The softmax function can "compress" a K-dimensional vector z of arbitrary real numbers into another K-dimensional real vector σ(z), so that each element lies in the range (0, 1) and all elements sum to 1. For example, if for the input vector [BMI extremely low, BMI normal, BMI extremely high] the corresponding softmax value is [0.2, 0.5, 0.3], then the item with the largest weight in the output vector corresponds to the maximum in the input vector, "BMI normal".
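The softmax behavior described above can be sketched as follows (the input scores are illustrative, not values from the application):

```python
import numpy as np

def softmax(z: np.ndarray) -> np.ndarray:
    """Compress a K-dimensional real vector into probabilities in (0, 1) summing to 1."""
    e = np.exp(z - z.max())  # subtract the max for numerical stability
    return e / e.sum()

# Raw scores for the three categories [extremely low, normal, extremely high].
scores = np.array([0.1, 1.0, 0.5])
probs = softmax(scores)
labels = ["BMI extremely low", "BMI normal", "BMI extremely high"]
print(labels[int(np.argmax(probs))])  # BMI normal
```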
  • Step S30 Acquire a face image to be detected, locate key feature points of the face in the face image to be detected, and obtain key points of the face.
  • The image collection unit of the offline terminal, such as the camera of a mobile phone, collects the face image of the user to be detected and sends the face image to the online server through the network transmission unit. The camera parameters of the image collection unit are sent to the online server at the same time, and the effect image of the collected face image is displayed synchronously on the mobile phone.
  • the method for locating the key feature points of the face in the face image is as follows.
  • An Active Shape Model (ASM) is used to locate the key feature points of the face.
  • the basic idea of the ASM algorithm is to combine the texture features of human faces with the position constraints between each feature point.
  • The ASM algorithm is divided into training and searching steps. During training, the position constraints of each feature point are established and the local features of each specific point are constructed; during searching, matching is performed iteratively.
  • The basic principle of the PCA processing is as follows, given m pieces of n-dimensional data: 1) arrange the original data into a matrix X with n rows and m columns; 2) zero-mean each row of X (each row represents an attribute field), i.e. subtract the mean of that row; 3) find the covariance matrix; 4) find the eigenvalues of the covariance matrix and the corresponding eigenvectors; 5) arrange the eigenvectors as rows, from top to bottom in descending order of the corresponding eigenvalues, and take the first k rows to form a matrix P; 6) Y = PX is the data reduced to k dimensions.
  • each feature point can find a new location during each iteration of the search process.
  • Gradient features are generally used as the local features, to be robust to illumination changes. Some methods extract them along the normal direction of the edge, and some extract them in a rectangular area near the feature point.
  • The ASM search steps are as follows: first, calculate the positions of the eyes (or eyes and mouth), apply simple scale and rotation changes, and align the face; then match each local feature point (often using the Mahalanobis distance) and calculate its new position; finally obtain the parameters of the affine transformation, and iterate until convergence.
  • Multi-scale methods are often used for acceleration; the search process eventually converges on the original high-resolution image.
  • Step S40 Perform face contour extraction based on key points of the face to obtain the face contour.
  • The Sobel operator is a discrete differential operator that combines Gaussian smoothing and differentiation to calculate an approximate gradient of the image grayscale function.
  • The basic principle is to convolve the incoming image pixels. The essence of the convolution is to find the gradient value, that is, a weighted average in which the weights form the so-called convolution kernel; a threshold is then applied to determine the edge information. At the edges of the image the pixel values change significantly, and one way to express this change is with derivatives.
  • FIG. 3 is an effect diagram of edge contour extraction using the Sobel operator.
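A minimal sketch of Sobel edge extraction as described. The hand-rolled "valid" filter (cross-correlation, no padding) is for clarity only; production code would use an image-processing library, and the threshold value is an assumption:

```python
import numpy as np

# Sobel kernels: a smoothing tap in one direction, a derivative in the other.
KX = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
KY = KX.T

def filter2d(img: np.ndarray, k: np.ndarray) -> np.ndarray:
    """Minimal 'valid' 3x3 cross-correlation (no padding), for demonstration."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = (img[i:i + 3, j:j + 3] * k).sum()
    return out

def sobel_edges(img: np.ndarray, threshold: float = 1.0) -> np.ndarray:
    """Approximate gradient magnitude, then threshold to keep edge pixels."""
    gx, gy = filter2d(img, KX), filter2d(img, KY)
    return np.hypot(gx, gy) > threshold

# A vertical step edge: the gradient fires along the boundary column.
img = np.zeros((8, 8))
img[:, 4:] = 1.0
print(sobel_edges(img).sum() > 0)  # True
```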
  • In step S50, the face contour is stretched proportionally according to the perspective angle to obtain the pre-processed face image.
  • A two-dimensional linear (bilinear) interpolation algorithm is used to proportionally stretch the face contour according to the perspective angle.
  • the source image size is m ⁇ n and the target image is a ⁇ b.
  • the side length ratios of the two images are: m / a and n / b. Note that usually this ratio is not an integer.
  • The (i, j)-th pixel of the target image (row i, column j) can be mapped back to the source image via the side-length ratios.
  • the corresponding coordinates are (i ⁇ m / a, j ⁇ n / b).
  • this corresponding coordinate is generally not an integer, and non-integer coordinates cannot be used on discrete data such as images.
  • Q11, Q12, Q21 and Q22 are known, but the point to be interpolated is the point P, which requires bilinear interpolation.
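The bilinear interpolation of P from the four known points can be sketched as follows, using the Q11/Q12/Q21/Q22 notation above (the function name and argument order are choices made here, not from the application):

```python
def bilerp(q11, q21, q12, q22, x1, x2, y1, y2, x, y):
    """Bilinear interpolation of P = (x, y) from four known grid values:
    Q11 at (x1, y1), Q21 at (x2, y1), Q12 at (x1, y2), Q22 at (x2, y2)."""
    fx1 = ((x2 - x) * q11 + (x - x1) * q21) / (x2 - x1)  # interpolate along x at y1
    fx2 = ((x2 - x) * q12 + (x - x1) * q22) / (x2 - x1)  # interpolate along x at y2
    return ((y2 - y) * fx1 + (y - y1) * fx2) / (y2 - y1)  # then along y

# The midpoint of a unit cell is the average of its four corner values.
print(bilerp(0.0, 1.0, 1.0, 2.0, 0, 1, 0, 1, 0.5, 0.5))  # 1.0
```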
  • While stretching the face contour, chromatic correction and brightness correction are also applied point by point to the RGB components of each pixel of the face contour according to the camera parameters, in order to reduce the effects of ambient light, camera parameters, and other aspects of the image acquisition environment, yielding the face image after contour extraction and distortion correction.
  • Step S60 Perform category prediction on the pre-processed face image according to the training model to obtain the evaluation BMI value of the face image to be detected.
  • the evaluation BMI value of the face image may be extremely low BMI, normal BMI or extremely high BMI, and the evaluation BMI value is returned to the offline terminal in real time, such as a smart phone, a tablet computer, or a portable computer.
  • Further, in another embodiment of the method of the present application, the method includes the following steps after step S60:
  • the user may choose to upload the actual measured BMI value of the user to perform fine-tuning training on the learning model to achieve rapid update iteration of the model.
  • For fine-tuning the learning model, taking the aforementioned 10,000 face samples as an example, the learning rate of the first 8,000 mini-batch samples is set to 0.001 and the learning rate of the last 2,000 mini-batch samples is set to 0.0001; the mini-batch size of each iteration is 300, the momentum value is set to 0.9, and the weight decay value is 0.0005.
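The stepwise learning-rate schedule described above can be sketched as follows. The helper name and iteration-based formulation are assumptions made here; in Caffe these values would normally live in the solver configuration along with the momentum (0.9) and weight decay (0.0005):

```python
def learning_rate(sample_index: int, total: int = 10000,
                  base_lr: float = 0.001, late_lr: float = 0.0001) -> float:
    """Stepwise schedule matching the text: 0.001 for the first 8,000
    samples, 0.0001 for the last 2,000."""
    return base_lr if sample_index < int(total * 0.8) else late_lr

print(learning_rate(100))   # 0.001
print(learning_rate(9000))  # 0.0001
```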
  • This application also provides a BMI evaluation device.
  • Referring to FIG. 5, it is a schematic diagram of the internal structure of the device provided by an embodiment of the present application.
  • the device 1 may be a personal computer (Personal Computer, PC), or may be a terminal device such as a smartphone, tablet computer, or portable computer.
  • the BMI evaluation device 1 includes at least a memory 11, a processor 12, a communication bus 13, and a network interface 14.
  • the memory 11 includes at least one type of readable storage medium, and the readable storage medium includes flash memory, hard disk, multimedia card, card-type memory (for example, SD or DX memory, etc.), magnetic memory, magnetic disk, optical disk, and the like.
  • the memory 11 may be an internal storage unit of the BMI evaluation device 1, such as a hard disk of the BMI evaluation device 1.
  • The memory 11 may also be an external storage device of the BMI evaluation device 1, for example, a plug-in hard disk equipped on the BMI evaluation device 1, a Smart Media Card (SMC), a Secure Digital (SD) card, a flash card (Flash Card), etc.
  • the memory 11 may also include both the internal storage unit of the BMI evaluation device 1 and the external storage device.
  • the memory 11 can be used not only to store application software and various types of data installed in the BMI evaluation device 1, such as codes of the BMI evaluation program 01, but also to temporarily store data that has been or will be output.
  • The processor 12 may be a central processing unit (CPU), controller, microcontroller, microprocessor, or other data-processing chip, and is used to run program code or process data stored in the memory 11, for example to execute the BMI evaluation program 01.
  • the communication bus 13 is used to realize connection and communication between these components.
  • the network interface 14 may optionally include a standard wired interface and a wireless interface (such as a WI-FI interface), and is generally used to establish a communication connection between the device 1 and other electronic devices.
  • the device 1 may further include a user interface.
  • the user interface may include a display and an input unit such as a keyboard.
  • the optional user interface may also include a standard wired interface and a wireless interface.
  • the display may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an organic light-emitting diode (OLED) touch device, or the like.
  • the display may also be appropriately referred to as a display screen or a display unit, for displaying information processed in the BMI evaluation device 1 and for displaying a visual user interface.
  • FIG. 5 only shows the BMI evaluation device 1 with the components 11-14 and the BMI evaluation program 01. The structure shown in FIG. 5 does not constitute a limitation on the BMI evaluation device 1, which may include fewer or more components than shown, combine some components, or arrange the components differently.
  • the BMI evaluation program 01 is stored in the memory 11; the processor 12 implements the following steps when executing the BMI evaluation program 01 stored in the memory 11:
  • Step S10 Collect sample face image data, the sample face image data includes a BMI value.
  • step S20 a convolutional neural network is used to train sample face image data to obtain a training model.
  • The online server can collect 10,000 sample face images with BMI values, and train the convolutional neural network under the Caffe deep learning framework.
  • The training of the convolutional neural network includes: first, uniformly cropping the face images to a size of 224×224; then converting them into a uniform LevelDB format; and finally training the VGG-16 convolutional neural network with these images.
  • the convolutional neural network used includes 1 data input layer, 13 convolutional layers, and 3 fully connected layers.
  • The numbers of convolution kernels in the 13 convolutional layers are 64, 64, 128, 128, 256, 256, 256, 512, 512, 512, 512, 512, and 512, respectively.
  • A pooling layer is connected between the second and third convolutional layers, between the fourth and fifth convolutional layers, between the seventh and eighth convolutional layers, between the tenth and eleventh convolutional layers, and between the thirteenth convolutional layer and the first fully connected layer. All 13 convolutional layers and 3 fully connected layers are processed with the ReLU nonlinear activation function.
  • The last layer of the VGG-16 network model is removed and retrained, and finally the softmax function is used to output the probability values of three categories: extremely low BMI, normal BMI, and extremely high BMI.
  • the output value is the category value corresponding to the maximum probability value.
  • The softmax function can "compress" a K-dimensional vector z of arbitrary real numbers into another K-dimensional real vector σ(z), so that each element lies in the range (0, 1) and all elements sum to 1. For example, if for the input vector [BMI extremely low, BMI normal, BMI extremely high] the corresponding softmax value is [0.2, 0.5, 0.3], then the item with the largest weight in the output vector corresponds to the maximum in the input vector, "BMI normal".
  • Step S30 Acquire a face image to be detected, locate key feature points of the face in the face image to be detected, and obtain key points of the face.
  • The image collection unit of the offline terminal, such as the camera of a mobile phone, collects the face image of the user to be detected and sends the face image to the online server through the network transmission unit. The camera parameters of the image collection unit are sent to the online server at the same time, and the effect image of the collected face image is displayed synchronously on the mobile phone.
  • the method for locating the key feature points of the face in the face image is as follows.
  • An Active Shape Model (ASM) is used to locate the key feature points of the face.
  • the basic idea of the ASM algorithm is to combine the texture features of human faces with the position constraints between each feature point.
  • The ASM algorithm is divided into training and searching steps. During training, the position constraints of each feature point are established and the local features of each specific point are constructed; during searching, matching is performed iteratively.
  • The basic principle of the PCA processing is as follows, given m pieces of n-dimensional data: 1) arrange the original data into a matrix X with n rows and m columns; 2) zero-mean each row of X (each row represents an attribute field), i.e. subtract the mean of that row; 3) find the covariance matrix; 4) find the eigenvalues of the covariance matrix and the corresponding eigenvectors; 5) arrange the eigenvectors as rows, from top to bottom in descending order of the corresponding eigenvalues, and take the first k rows to form a matrix P; 6) Y = PX is the data reduced to k dimensions.
  • each feature point can find a new location during each iteration of the search process.
  • Gradient features are generally used as the local features, to be robust to illumination changes. Some methods extract them along the normal direction of the edge, and some extract them in a rectangular area near the feature point.
  • The ASM search steps are as follows: first, calculate the positions of the eyes (or eyes and mouth), apply simple scale and rotation changes, and align the face; then match each local feature point (often using the Mahalanobis distance) and calculate its new position; finally obtain the parameters of the affine transformation, and iterate until convergence.
  • Multi-scale methods are often used for acceleration; the search process eventually converges on the original high-resolution image.
  • Step S40 Perform face contour extraction based on key points of the face to obtain the face contour.
  • The Sobel operator is a discrete differential operator that combines Gaussian smoothing and differentiation to calculate an approximate gradient of the image grayscale function.
  • The basic principle is to convolve the incoming image pixels. The essence of the convolution is to find the gradient value, that is, a weighted average in which the weights form the so-called convolution kernel; a threshold is then applied to determine the edge information. At the edges of the image the pixel values change significantly, and one way to express this change is with derivatives.
  • FIG. 3 is an effect diagram of edge contour extraction using the Sobel operator.
  • In step S50, the face contour is stretched proportionally according to the perspective angle to obtain the pre-processed face image.
  • A two-dimensional linear (bilinear) interpolation algorithm is used to proportionally stretch the face contour according to the perspective angle.
  • the source image size is m ⁇ n and the target image is a ⁇ b.
  • the side length ratios of the two images are: m / a and n / b. Note that usually this ratio is not an integer.
  • The (i, j)-th pixel of the target image (row i, column j) can be mapped back to the source image via the side-length ratios.
  • the corresponding coordinates are (i ⁇ m / a, j ⁇ n / b).
  • this corresponding coordinate is generally not an integer, and non-integer coordinates cannot be used on discrete data such as images.
  • Q11, Q12, Q21 and Q22 are known, but the point to be interpolated is the point P, which requires bilinear interpolation.
  • While stretching the face contour, chromatic correction and brightness correction are also applied point by point to the RGB components of each pixel of the face contour according to the camera parameters, in order to reduce the effects of ambient light, camera parameters, and other aspects of the image acquisition environment, yielding the face image after contour extraction and distortion correction.
  • Step S60 Perform category prediction on the pre-processed face image according to the training model to obtain the evaluation BMI value of the face image to be detected.
  • the evaluation BMI value of the face image may be extremely low BMI, normal BMI, or extremely high BMI, and the evaluation BMI value is returned to an offline terminal in real time, such as a smartphone, tablet computer, or portable computer.
  • Further, in another embodiment of the present application, the following steps are also implemented after step S60:
  • the user may choose to upload the actual measured BMI value of the user to perform fine-tuning training on the learning model to achieve rapid update iteration of the model.
  • For fine-tuning the learning model, taking the aforementioned 10,000 face samples as an example, the learning rate of the first 8,000 mini-batch samples is set to 0.001 and the learning rate of the last 2,000 mini-batch samples is set to 0.0001; the mini-batch size of each iteration is 300, the momentum value is set to 0.9, and the weight decay value is 0.0005.
  • The BMI evaluation program may also be divided into one or more modules, and the one or more modules are stored in the memory 11 and executed by one or more processors (in this embodiment, the processor 12) to complete this application.
  • the module referred to in this application refers to a series of computer program instruction segments capable of performing specific functions, and is used to describe the execution process of the BMI evaluation program in the BMI evaluation device.
  • Referring to FIG. 6, it is a schematic diagram of the program modules of the BMI evaluation program in an embodiment of the BMI evaluation device of the present application.
  • The BMI evaluation program may be divided into a sample data collection module 10, a sample data model training module 20, a face key point positioning module 30, a face contour extraction module 40, a face contour stretching module 50, and a face image BMI value prediction module 60.
  • the sample data collection module 10 is used to: collect sample face image data, the sample face image data includes a BMI value;
  • the sample data model training module 20 is used to: train the sample face image data using a convolutional neural network to obtain the training model;
  • the face key point positioning module 30 is used to: acquire a face image to be detected, locate key feature points of the face to be detected, and obtain face key points;
  • the face contour extraction module 40 is used to: extract the face contour according to the key points of the face to obtain the face contour;
  • The face contour stretching module 50 is used to: perform proportional stretching on the face contour according to the perspective angle to obtain a pre-processed face image;
  • The face image BMI value prediction module 60 is used to: perform category prediction on the pre-processed face image according to the training model to obtain the evaluation BMI value of the face image to be detected.
  • When the sample data collection module 10, the sample data model training module 20, the face key point positioning module 30, the face contour extraction module 40, the face contour stretching module 50, the face image BMI value prediction module 60, and other program modules are executed, the operations described above are implemented.
  • an embodiment of the present application further proposes a computer-readable storage medium on which a BMI evaluation program is stored.
  • the BMI evaluation program may be executed by one or more processors to implement the following operations:
  • Step S10 Collect sample face image data, the sample face image data includes a BMI value
  • Step S20 the convolutional neural network is used to train the sample face image data to obtain the training model
  • Step S30 Acquire a face image to be detected, locate key feature points of the face to be detected, and obtain key points of the face;
  • Step S40 Perform face contour extraction based on key points of the face to obtain the face contour
  • Step S50 Proportionally stretch the face contour according to the perspective angle to obtain a pre-processed face image;
  • Step S60 Perform category prediction on the pre-processed face image according to the training model to obtain the evaluation BMI value of the face image to be detected.
  • the BMI evaluation method, device and computer-readable storage medium proposed in this application collect the face image and BMI value in advance, train the network model through a learning algorithm, and predict the BMI value of the face image to be detected according to the trained network model.
  • BMI detection can be completed automatically without complicated measurement equipment, which not only greatly reduces the measurement difficulty of BMI measurers, but also enables real-time measurement for the measurers to monitor their own health status.
  • The methods in the above embodiments can be implemented by means of software plus a necessary general-purpose hardware platform, and of course can also be implemented by hardware, but in many cases the former is the better implementation.
  • The technical solution of the present application, in essence or in the part that contributes to the prior art, can be embodied in the form of a software product. The computer software product is stored in a storage medium (such as the ROM/RAM, magnetic disk, or optical disk described above) and includes several instructions to enable a terminal device (which may be a mobile phone, computer, server, or network device, etc.) to perform the method described in each embodiment of the present application.

Abstract

The present invention relates to the technical field of intelligent decision making, and provides a BMI evaluation method and device, and a computer readable storage medium. The method comprises: collecting sample face image data, the sample face image data comprising a BMI value (S10); training the sample face image data by adopting a convolutional neural network to obtain a training model (S20); obtaining a face image to be detected, positioning face key feature points of said face image, and obtaining the face key points (S30); performing face contour extraction according to the face key points to obtain a face contour (S40); performing equal-proportion stretching on the face contour according to a perspective view angle to obtain a preprocessed face image (S50); and performing category prediction on the preprocessed face image according to the training model to obtain an evaluation BMI value of said face image (S60). By means of the method, the BMI value can be rapidly detected in real time, and the BMI measurement difficulty of a measurer is reduced.

Description

BMI evaluation method, device, and computer-readable storage medium
This application claims priority to the Chinese patent application filed with the China Patent Office on November 20, 2018, with application No. 201811384493.8 and invention title "A BMI evaluation method, device, and computer-readable storage medium", the entire contents of which are incorporated herein by reference.
Technical Field
The present application relates to the field of intelligent decision-making technology, and in particular, to a BMI evaluation method, device, and computer-readable storage medium.
Background
The BMI (Body Mass Index) is a parameter related to height and weight that reflects body mass and obesity: it is the number obtained by dividing the weight in kilograms by the square of the height in meters. It is mainly used for statistics: when comparing and analyzing the health impact of a person's weight across people of different heights, the BMI value is a neutral and reliable indicator, and it is a standard commonly used internationally to measure how fat or thin a human body is and whether it is healthy. BMI is an indicator closely related to total body fat and takes both weight and height into account. BMI is simple and practical, and it reflects systemic overweight and obesity. When assessing the body's risk of heart disease, high blood pressure, and the like due to overweight, it is more accurate than weight alone. At present, two calculation methods are commonly used in BMI evaluation: one is, for adults, [height (cm) - 100] x 0.9 = standard weight (kg); the other is, for males, height (cm) - 105 = standard weight (kg), and for females, height (cm) - 100 = standard weight (kg).
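As a concrete illustration of the BMI index and the two standard-weight formulas quoted above, a minimal sketch (the function names are illustrative, not part of the application):

```python
def bmi(weight_kg, height_m):
    """BMI: weight in kilograms divided by the square of height in meters."""
    return weight_kg / height_m ** 2

def standard_weight_kg(height_cm, sex=None):
    """The two standard-weight formulas quoted above: the generic adult
    formula when no sex is given, otherwise the male/female variant."""
    if sex is None:
        return (height_cm - 100) * 0.9      # adult: [height(cm) - 100] x 0.9
    offset = 105 if sex == "male" else 100  # male: -105, female: -100
    return height_cm - offset

print(round(bmi(70, 1.75), 2))          # 22.86
print(standard_weight_kg(175))          # 67.5
print(standard_weight_kg(175, "male"))  # 70
```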
The traditional way of obtaining BMI requires first measuring the height and weight of the person to be tested, which must be done with a height-and-weight tester or manually with other measurement sensors, followed by numerical calculation to obtain the final BMI index. This not only requires specific instruments (usually such instruments are not portable) but also involves a cumbersome procedure that is time-consuming, labor-intensive, and slow. For rapidly developing children and adolescents who need timely height and weight measurements, and for people losing weight who need to lower their BMI, it cannot provide effective measurement in a real-time, quick, and convenient way.
Summary of the Invention
The present application provides a BMI evaluation method, device, and computer-readable storage medium, whose main purpose is to detect the BMI value quickly and in real time and to reduce the difficulty of measuring BMI.
To achieve the above purpose, the present application provides a BMI evaluation method, which includes:
collecting sample face image data, the sample face image data including BMI values;
training a convolutional neural network on the sample face image data to obtain a training model;
acquiring a face image to be detected, locating key facial feature points of the face image to be detected, and obtaining face key points;
extracting a face contour according to the face key points to obtain the face contour;
stretching the face contour proportionally according to a perspective viewing angle to obtain a pre-processed face image;
performing category prediction on the pre-processed face image according to the training model to obtain an evaluation BMI value of the face image to be detected.
An embodiment of the present application further provides a BMI evaluation device. The device includes a memory and a processor; the memory stores a BMI evaluation program that can run on the processor, and the BMI evaluation program, when executed by the processor, implements the following steps:
collecting sample face image data, the sample face image data including BMI values;
training a convolutional neural network on the sample face image data to obtain a training model;
acquiring a face image to be detected, locating key facial feature points of the face image to be detected, and obtaining face key points;
extracting a face contour according to the face key points to obtain the face contour;
stretching the face contour proportionally according to a perspective viewing angle to obtain a pre-processed face image;
performing category prediction on the pre-processed face image according to the training model to obtain an evaluation BMI value of the face image to be detected.
An embodiment of the present application further provides a computer-readable storage medium on which a BMI evaluation program is stored; the program can be executed by one or more processors to implement the steps of the method described above.
The BMI evaluation method, device, and computer-readable storage medium proposed in this application collect face images and BMI values in advance, train a network model with a learning algorithm, and predict the BMI value of a face image to be detected according to the trained network model. BMI detection is thus completed automatically without complicated measurement equipment, which not only greatly reduces the difficulty of measuring BMI but also enables real-time measurement so that users can monitor their own health status.
Brief Description of the Drawings
FIG. 1 is a schematic flowchart of a BMI evaluation method provided by an embodiment of this application;
FIG. 2 is a schematic diagram of a human face marked with facial feature points provided by an embodiment of this application;
FIG. 3 is an effect diagram of edge contour extraction using the Sobel operator provided by an embodiment of this application;
FIG. 4 is a schematic diagram of a bilinear interpolation model provided by an embodiment of this application;
FIG. 5 is a schematic diagram of the internal structure of a BMI evaluation device provided by an embodiment of this application;
FIG. 6 is a schematic block diagram of the BMI evaluation program in a BMI evaluation device provided by an embodiment of this application.
The implementation, functional characteristics, and advantages of the present application will be further described with reference to the drawings in conjunction with the embodiments.
Detailed Description
It should be understood that the specific embodiments described herein are only used to explain the present application and are not intended to limit it.
The present application provides a BMI evaluation method. Referring to FIG. 1, FIG. 1 is a schematic flowchart of a BMI evaluation method provided by an embodiment of the present application. The method may be executed by an apparatus, and the apparatus may be implemented by software and/or hardware.
In this embodiment, the BMI evaluation method includes:
Step S10: collect sample face image data, the sample face image data including BMI values.
Step S20: train a convolutional neural network on the sample face image data to obtain a training model.
Specifically, an online server may collect ten thousand sample face images carrying BMI values and train a convolutional neural network under the Caffe deep learning framework.
Training of the convolutional neural network includes: first uniformly cropping the face images to a size of 224×224, then converting them into a uniform LevelDB format, and finally using these images to train the convolutional neural network VGG-16. The convolutional neural network used includes 1 data input layer, 13 convolutional layers, and 3 fully connected layers. The numbers of convolution kernels of the 13 convolutional layers are 64, 64, 128, 128, 256, 256, 256, 512, 512, 512, 512, 512, and 512, respectively. A pooling layer is connected between the 2nd and 3rd convolutional layers, between the 4th and 5th convolutional layers, between the 7th and 8th convolutional layers, between the 10th and 11th convolutional layers, and between the 13th convolutional layer and the 1st fully connected layer. The 13 convolutional layers and the 3 fully connected layers are all processed with ReLU (a nonlinear activation function). The last layer of the VGG-16 network model is removed and the model is retrained; finally, a softmax function outputs the probability values of three category values, extremely low BMI, normal BMI, and extremely high BMI, and the output is the category value corresponding to the maximum probability. The softmax function "compresses" a K-dimensional vector z of arbitrary real numbers into another K-dimensional real vector σ(z), such that each element lies in the range (0, 1) and all elements sum to 1. For example, if the value of the softmax function corresponding to the input vector [BMI extremely low, BMI normal, BMI extremely high] is [0.2, 0.5, 0.3], then the item with the largest weight in the output vector corresponds to "BMI normal".
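A minimal sketch of the softmax classification step described above, with made-up logits standing in for the real output of the last fully connected layer:

```python
import numpy as np

def softmax(z):
    """Compress a K-dimensional real vector z into sigma(z), whose elements
    lie in (0, 1) and sum to 1; shifting by max(z) improves numerical stability."""
    e = np.exp(z - np.max(z))
    return e / e.sum()

labels = ["BMI extremely low", "BMI normal", "BMI extremely high"]
logits = np.array([0.1, 1.0, 0.5])          # hypothetical last-layer outputs
probs = softmax(logits)                     # roughly [0.2, 0.5, 0.3]
prediction = labels[int(np.argmax(probs))]  # category with maximum probability
print(prediction)                           # BMI normal
```

The output is the category with the maximum probability, matching the [0.2, 0.5, 0.3] example in the text.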
Step S30: acquire a face image to be detected, locate the key facial feature points of the face image to be detected, and obtain the face key points.
Specifically, the image acquisition unit of an offline terminal, such as the camera of a mobile phone, may collect the face image of the user to be detected and send the face image to the online server through a network transmission unit; the camera parameters of the image acquisition unit are sent to the online server at the same time, and a preview of the collected face image is displayed synchronously on the mobile phone.
Specifically, the method for locating the key facial feature points of the face image is as follows. Optionally, this embodiment uses the Active Shape Model (ASM) algorithm to locate the key facial feature points.
The basic idea of the ASM algorithm is to combine the texture features of the human face with the position constraints between the feature points. The ASM algorithm is divided into two steps, training and searching. During training, the position constraints of each feature point are established and the local features of each specific point are constructed. During searching, matching is performed iteratively.
The training steps of ASM are as follows. First, build a shape model: collect training samples of n faces (n = 400); manually mark the facial feature points, as shown in FIG. 2, which is a schematic diagram of a face marked with facial feature points; concatenate the coordinates of the feature points in the training set into feature vectors; normalize and align the shapes (alignment uses the Procrustes method); and perform PCA processing on the aligned shape features. The basic principle of the PCA processing, given m pieces of n-dimensional data, is: 1) arrange the original data by columns into a matrix X with n rows and m columns; 2) zero-center each row of X (each row representing one attribute field), that is, subtract the mean of that row; 3) compute the covariance matrix; 4) compute the eigenvalues of the covariance matrix and the corresponding eigenvectors; 5) arrange the eigenvectors into rows from top to bottom in order of decreasing eigenvalue and take the first k rows to form a matrix P; 6) Y = PX is then the data after dimensionality reduction to k dimensions.
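The six PCA steps just listed can be sketched in NumPy as follows (a sketch of the recipe, not the application's implementation; the data shape here, n = 6 attributes over m = 400 samples, is only an example):

```python
import numpy as np

def pca_reduce(X, k):
    """PCA following the six steps above: X has n rows (attributes) and
    m columns (samples); returns the k x m reduced data Y = P X."""
    Xc = X - X.mean(axis=1, keepdims=True)   # 2) zero-center each row
    C = Xc @ Xc.T / X.shape[1]               # 3) covariance matrix
    eigvals, eigvecs = np.linalg.eigh(C)     # 4) eigen-decomposition (ascending)
    order = np.argsort(eigvals)[::-1]        # 5) sort by decreasing eigenvalue
    P = eigvecs[:, order[:k]].T              #    first k eigenvectors as rows
    return P @ Xc                            # 6) Y = P X

rng = np.random.default_rng(0)
X = rng.normal(size=(6, 400))   # e.g. 3 (x, y) landmark pairs over 400 faces
Y = pca_reduce(X, 2)
print(Y.shape)                  # (2, 400)
```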
Next, local features are constructed for each feature point, so that during each iteration of the search each feature point can find a new position. Gradient features are generally used as local features to guard against illumination changes. Some methods extract them along the normal direction of the edge; others extract them in a rectangular region near the feature point.
The ASM search step then proceeds as follows. First, compute the position of the eyes (or the eyes and mouth), apply simple scale and rotation changes, and align the face. Next, match each local feature point (the Mahalanobis distance is often used) and compute its new position; obtain the parameters of the affine transformation and iterate until convergence. In addition, multi-scale methods are often used for acceleration. The search process finally converges on the original high-resolution image.
Step S40: extract the face contour according to the face key points to obtain the face contour.
After obtaining the face key points that identify the eyebrows and the jaw, the relative coordinates of the eyes, nose, and mouth are determined on this basis, and then the face contour is extracted. Optionally, the Sobel operator is used for face contour extraction, removing the background outside the face region. The Sobel operator is a discrete differential operator that combines Gaussian smoothing and differentiation to compute an approximate gradient of the image intensity function. Its basic principle is to convolve the incoming image pixels; the essence of the convolution is to compute a gradient value, or in other words a weighted average, where the weights form the so-called convolution kernel. A threshold operation is then applied to the newly generated pixel values to determine the edge information. At an image edge, the pixel values change significantly; one way to express this change is with derivatives, and a large change in the gradient value indicates a significant change in the image content. If Gx is the convolution of the original image in the x direction and Gy is the convolution of the original image in the y direction, the value at each point of the original image after the convolution is:
G = √(Gx² + Gy²)
After the new value of each pixel is obtained, given a threshold, the image edges computed by the Sobel operator are obtained. As shown in FIG. 3, FIG. 3 is an effect diagram of edge contour extraction using the Sobel operator.
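A minimal sketch of the Sobel edge computation just described (a naive loop for clarity; a real implementation would use an optimized convolution):

```python
import numpy as np

# Standard 3x3 Sobel kernels for the x and y directions.
KX = np.array([[-1, 0, 1],
               [-2, 0, 2],
               [-1, 0, 1]])
KY = KX.T

def sobel_edges(img, threshold):
    """Compute G = sqrt(Gx^2 + Gy^2) at every interior pixel, then
    apply the threshold to obtain a binary edge map."""
    h, w = img.shape
    g = np.zeros((h, w))
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            patch = img[y - 1:y + 2, x - 1:x + 2]
            gx = float(np.sum(KX * patch))   # gradient in the x direction
            gy = float(np.sum(KY * patch))   # gradient in the y direction
            g[y, x] = np.hypot(gx, gy)       # sqrt(gx**2 + gy**2)
    return g > threshold

# A vertical step edge: left half dark, right half bright.
img = np.zeros((8, 8))
img[:, 4:] = 255.0
edges = sobel_edges(img, threshold=100.0)
print(edges[4, 3], edges[4, 6])   # True False
```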
Step S50: stretch the face contour proportionally according to the perspective viewing angle to obtain the pre-processed face image.
Optionally, a bilinear (two-dimensional linear) interpolation algorithm is used to stretch the face contour proportionally according to the perspective viewing angle. Suppose the source image has size m×n and the target image has size a×b. The side-length ratios of the two images are then m/a and n/b. Note that these ratios are usually not integers, so a floating-point type should be used to store them in a program. The (i, j)-th pixel of the target image (row i, column j) can be mapped back to the source image through the side-length ratios; its corresponding coordinates are (i×m/a, j×n/b). Obviously, these corresponding coordinates are generally not integers, and non-integer coordinates cannot be used on discrete data such as images. Bilinear interpolation computes the value at that point (a gray value or an RGB value) from the four pixels closest to the corresponding coordinates. If the image is a grayscale image, the mathematical model for the gray value at point (i, j) is f(x, y) = b1 + b2x + b3y + b4xy, where b1, b2, b3, and b4 are the relevant coefficients. The calculation process is as follows. As shown in FIG. 4, which is a schematic diagram of the bilinear interpolation model, Q12, Q22, Q11, and Q21 are known, but the point to be interpolated is P, which requires bilinear interpolation: first, in the x-axis direction, interpolate to obtain the two points R1 and R2, and then interpolate P from R1 and R2. This is bilinear interpolation.
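A minimal sketch of the bilinear interpolation just described, interpolating along x to obtain R1 and R2 and then along y to obtain P (pure Python, grayscale values; the coordinate mapping shown is the common corner-aligned variant, and the names are illustrative):

```python
def bilinear(img, x, y):
    """Interpolate img (a 2-D list of gray values) at non-integer (x, y)."""
    x0, y0 = int(x), int(y)
    x1 = min(x0 + 1, len(img[0]) - 1)
    y1 = min(y0 + 1, len(img) - 1)
    dx, dy = x - x0, y - y0
    r1 = img[y0][x0] * (1 - dx) + img[y0][x1] * dx   # R1: along x, lower row
    r2 = img[y1][x0] * (1 - dx) + img[y1][x1] * dx   # R2: along x, upper row
    return r1 * (1 - dy) + r2 * dy                   # P: along y, from R1 and R2

def resize(img, a, b):
    """Stretch an m x n image to b rows x a columns by mapping each target
    pixel (i, j) back to source coordinates and interpolating there."""
    m, n = len(img), len(img[0])
    return [[bilinear(img, j * (n - 1) / max(a - 1, 1),
                      i * (m - 1) / max(b - 1, 1))
             for j in range(a)] for i in range(b)]

src = [[0, 100], [100, 200]]
out = resize(src, 3, 3)
print(out[1][1])   # 100.0 (the centre of the 2x2 patch)
```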
Optionally, while the face contour is being stretched, chroma correction and brightness correction are also performed point by point on the RGB chroma components of each pixel of the face contour according to the camera parameters, to reduce the influence of the light in the image acquisition environment, the camera parameters, and the like. The face image after contour extraction and distortion correction serves as the pre-processed face image.
Step S60: perform category prediction on the pre-processed face image according to the training model to obtain the evaluation BMI value of the face image to be detected.
Optionally, the evaluation BMI value of the face image may be extremely low BMI, normal BMI, or extremely high BMI, and the evaluation BMI value is returned in real time to an offline terminal, for example a smartphone, tablet computer, or portable computer.
Building on the BMI evaluation method proposed in this embodiment, in another embodiment of the method of the present application, the method further includes the following steps after step S60:
acquiring a measured BMI value uploaded by the user;
fine-tuning the convolutional network model with the evaluation BMI value and the measured BMI value;
updating and iterating the learning model.
Specifically, after obtaining the evaluation BMI value from the learning model, the user may choose to upload his or her actually measured BMI value so as to fine-tune the learning model and achieve rapid update iteration of the model. When fine-tuning the learning model, taking the aforementioned 10,000 face samples as an example, the learning rate for the first 8,000 mini-batch samples is set to 0.001 and that for the last 2,000 mini-batch samples to 0.0001; the mini-batch size of each iteration is 300, the momentum value is set to 0.9, and the weight decay value is 0.0005.
For each user, according to the measured BMI value that the user uploads, the weight of that user's face pictures in the training process is increased, so as to enhance the generalization of the learning model and better match the user's actual physical condition.
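Assuming the hyperparameters quoted above, the fine-tuning learning-rate schedule could be sketched as follows (the function and its interface are illustrative, not from the application):

```python
def fine_tune_schedule(num_samples=10_000, batch_size=300):
    """Yield (iteration, learning_rate) pairs for one pass over the samples:
    lr = 0.001 while the samples seen so far fall within the first 8,000,
    then 0.0001 for the remaining 2,000. Momentum 0.9 and weight decay
    0.0005 would be held fixed throughout."""
    seen, it = 0, 0
    while seen < num_samples:
        lr = 0.001 if seen < 8_000 else 0.0001
        yield it, lr
        seen += batch_size
        it += 1

schedule = list(fine_tune_schedule())
print(schedule[0], schedule[-1])   # (0, 0.001) (33, 0.0001)
```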
The present application also provides a BMI evaluation device. FIG. 5 is a schematic diagram of the internal structure of the device provided by an embodiment of the present application.
In this embodiment, the device 1 may be a personal computer (PC), or a terminal device such as a smartphone, tablet computer, or portable computer. The BMI evaluation device 1 includes at least a memory 11, a processor 12, a communication bus 13, and a network interface 14.
The memory 11 includes at least one type of readable storage medium, including a flash memory, hard disk, multimedia card, card-type memory (for example, SD or DX memory), magnetic memory, magnetic disk, optical disk, and the like. In some embodiments, the memory 11 may be an internal storage unit of the BMI evaluation device 1, for example, a hard disk of the BMI evaluation device 1. In other embodiments, the memory 11 may also be an external storage device of the BMI evaluation device 1, for example, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a flash card equipped on the BMI evaluation device 1. Further, the memory 11 may include both the internal storage unit and an external storage device of the BMI evaluation device 1. The memory 11 can be used not only to store the application software installed in the BMI evaluation device 1 and various types of data, such as the code of the BMI evaluation program 01, but also to temporarily store data that has been output or is to be output.
In some embodiments, the processor 12 may be a central processing unit (CPU), controller, microcontroller, microprocessor, or other data processing chip, used to run the program code stored in the memory 11 or to process data, for example, to execute the BMI evaluation program 01.
The communication bus 13 is used to implement connection and communication between these components.
The network interface 14 may optionally include a standard wired interface and a wireless interface (such as a Wi-Fi interface), and is generally used to establish a communication connection between the device 1 and other electronic devices.
Optionally, the device 1 may further include a user interface. The user interface may include a display and an input unit such as a keyboard, and the optional user interface may also include a standard wired interface and a wireless interface. Optionally, in some embodiments, the display may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an organic light-emitting diode (OLED) touch device, or the like. The display may also appropriately be referred to as a display screen or display unit, and is used to display the information processed in the BMI evaluation device 1 and to display a visual user interface.
FIG. 5 shows only the BMI evaluation device 1 with the components 11 to 14 and the BMI evaluation program 01. Those skilled in the art can understand that the structure shown in FIG. 5 does not constitute a limitation on the BMI evaluation device 1; it may include fewer or more components than shown, or combine certain components, or have a different arrangement of components.
In the embodiment of the device 1 shown in FIG. 5, the BMI evaluation program 01 is stored in the memory 11; when the processor 12 executes the BMI evaluation program 01 stored in the memory 11, the following steps are implemented:
Step S10: collect sample face image data, the sample face image data including BMI values.
Step S20: train a convolutional neural network on the sample face image data to obtain a training model.
Specifically, an online server may collect ten thousand sample face images carrying BMI values and train a convolutional neural network under the Caffe deep learning framework.
Training of the convolutional neural network includes: first uniformly cropping the face images to a size of 224×224, then converting them into a uniform LevelDB format, and finally using these images to train the convolutional neural network VGG-16. The convolutional neural network used includes 1 data input layer, 13 convolutional layers, and 3 fully connected layers. The numbers of convolution kernels of the 13 convolutional layers are 64, 64, 128, 128, 256, 256, 256, 512, 512, 512, 512, 512, and 512, respectively. A pooling layer is connected between the 2nd and 3rd convolutional layers, between the 4th and 5th convolutional layers, between the 7th and 8th convolutional layers, between the 10th and 11th convolutional layers, and between the 13th convolutional layer and the 1st fully connected layer. The 13 convolutional layers and the 3 fully connected layers are all processed with ReLU (a nonlinear activation function). The last layer of the VGG-16 network model is removed and the model is retrained; finally, a softmax function outputs the probability values of three category values, extremely low BMI, normal BMI, and extremely high BMI, and the output is the category value corresponding to the maximum probability. The softmax function "compresses" a K-dimensional vector z of arbitrary real numbers into another K-dimensional real vector σ(z), such that each element lies in the range (0, 1) and all elements sum to 1. For example, if the value of the softmax function corresponding to the input vector [BMI extremely low, BMI normal, BMI extremely high] is [0.2, 0.5, 0.3], then the item with the largest weight in the output vector corresponds to "BMI normal".
步骤S30,获取待检测的人脸图像,对所述待检测人脸图像的人脸关键特征点进行定位,获取人脸关键点。Step S30: Acquire a face image to be detected, locate key feature points of the face in the face image to be detected, and obtain key points of the face.
具体的,可以通过线下终端的图像采集单元,如手机的摄像头,采集待检测的用户人脸图像,并通过网络传送单元,将所述人脸图像发送到线上服务器,同时将所述图像采集单元的相机参数也一并发送给线上服务器,手机上同步显示采集的人脸图像的效果图。Specifically, the image collection unit of the offline terminal, such as the camera of the mobile phone, can collect the face image of the user to be detected, and send the face image to the online server through the network transmission unit, and simultaneously send the image The camera parameters of the collection unit are also sent to the online server, and the effect image of the collected face image is displayed synchronously on the mobile phone.
具体的,对人脸图像的人脸关键特征点进行定位的方法如下。可选的,本实施例采用主动形状模型(Active Shape Model,ASM)算法对人脸关键特征点进行定位。Specifically, the method for locating the key feature points of the face in the face image is as follows. Optionally, in this embodiment, an active shape model (Active Shape Model, ASM) algorithm is used to locate key feature points of the face.
该ASM算法的基本思路是:将人脸的纹理特征和各个特征点之间的位置约束相结合。ASM算法分为训练和搜索两步。训练时,建立各个特征点的位置约束,构建各个特定点的局部特征。搜索时,迭代地进行匹配。The basic idea of the ASM algorithm is to combine the texture features of human faces with the position constraints between each feature point. The ASM algorithm is divided into training and searching steps. During training, the position constraints of each feature point are established, and the local features of each specific point are constructed. When searching, match iteratively.
ASM的训练步骤具体如下:首先,构建形状模型:搜集n个人脸的训练样本(n=400);手动标记脸部特征点,如图2所示,图2为标记脸部特征点的人脸示意图;将训练集中特征点的坐标串成特征向量;对形状进行归一化和对齐(对齐采用Procrustes方法);对对齐后的形状特征做PCA处理。所述PCA处理的基本原理为:设有m条n维数据,1)将原始数据按列组成n行m列矩阵X;2)将X的每一行(代表一个属性字段)进行零均值化,即减去这一行的均值;3)求出协方差矩阵;4)求出协方差矩阵的特征值及对应的特征向量r;5)将特征向量按对应特征值大小从上到下按行排列成矩阵,取前k行组成矩阵P;6)即为降维到k维后的数据。The training steps of ASM are as follows: First, build a shape model: collect n training samples of face (n = 400); manually mark facial feature points, as shown in Figure 2, and Figure 2 is the face labeled with facial feature points Schematic diagram; the coordinate string of the feature points in the training set is transformed into a feature vector; the shape is normalized and aligned (alignment uses Procrustes method); PCA processing is performed on the aligned shape features. The basic principle of the PCA processing is as follows: m pieces of n-dimensional data are provided, 1) the original data is composed of n rows and m columns of matrix X; 2) each row of X (representing an attribute field) is zero-averaged, That is, subtract the mean of this row; 3) find the covariance matrix; 4) find the eigenvalues of the covariance matrix and the corresponding eigenvectors; 5) arrange the eigenvectors in rows from top to bottom according to the corresponding eigenvalues Form a matrix, take the first k rows to form a matrix P; 6) is the data after dimension reduction to k dimension.
接着,为每个特征点构建局部特征。目的是在每次迭代搜索过程中每个特征点可以寻找新的位置。局部特征一般用梯度特征,以防光照变化。有的方法沿着边缘的法线方向提取,有的方法在特征点附近的矩形区域提取。Next, a local feature is constructed for each feature point, so that during each iteration of the search each feature point can find a new position. Gradient features are generally used as local features because they are robust to illumination changes. Some methods extract them along the normal direction of the edge, and some extract them from a rectangular region around the feature point.
接着进行ASM的搜索步骤,具体如下:首先,计算眼睛(或者眼睛和嘴巴)的位置,做简单的尺度和旋转变化,对齐人脸;接着,匹配每个局部特征点(常采用马氏距离),计算新的位置;得到仿射变换的参数,迭代直到收敛。另外,常采用多尺度的方法加速。搜索的过程最终收敛到高分辨率的原图像上。The ASM search step then proceeds as follows: first, compute the position of the eyes (or the eyes and mouth), apply simple scale and rotation changes, and align the face; next, match each local feature point (the Mahalanobis distance is commonly used) and compute its new position; then obtain the parameters of the affine transformation and iterate until convergence. In addition, multi-scale methods are often used for acceleration. The search process finally converges on the high-resolution original image.
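As a small illustrative sketch (assuming NumPy; the function name is hypothetical and this is not asserted to be the embodiment's implementation), the Mahalanobis distance used when matching local feature points is:

```python
import numpy as np

def mahalanobis(x, mean, cov):
    """Mahalanobis distance between a candidate local feature x and a
    trained profile model (mean, cov); a smaller value is a better match."""
    d = np.asarray(x, dtype=float) - np.asarray(mean, dtype=float)
    return float(np.sqrt(d @ np.linalg.inv(cov) @ d))

# with an identity covariance the distance reduces to the Euclidean one
d = mahalanobis([1.0, 2.0], [0.0, 0.0], np.eye(2))
```

During the search, the candidate position whose local feature has the smallest distance to the trained profile model would be taken as the feature point's new position.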
步骤S40,根据人脸关键点,进行人脸轮廓提取,获取人脸轮廓。Step S40: Perform face contour extraction based on key points of the face to obtain the face contour.
在获取能够标识眉毛、下颌的人脸关键点后,以此为依据确定眼睛、鼻子和嘴巴的相对坐标,然后进行人脸轮廓提取。可选的,采用sobel算子进行人脸轮廓提取,剔除人脸区域之外的背景。Sobel算子是一个离散微分算子,它结合了高斯平滑和微分求导,用来计算图像灰度函数的近似梯度。其基本原理是对传进来的图像像素做卷积,卷积的实质是在求梯度值,或者说给了一个加权平均,其中权值就是所谓的卷积核;然后对生成的新像素灰度值做阈值运算,以此来确定边缘信息。在图像边缘处,像素值会发生显著的变化。表示这一改变的一个方法是使用导数。梯度值的较大变化预示着图像中内容的显著变化。若Gx是对原图x方向上的卷积,Gy是对原图y方向上的卷积,原图中的作用点像素值通过卷积之后为:
G = √(Gx² + Gy²)
得到像素点新的像素值之后,给定一个阈值就可以得到sobel算子计算出的图像边缘了。如图3所示,图3是采用sobel算子进行边缘轮廓提取的效果图。
After obtaining the key points of the face that identify the eyebrows and jaw, the relative coordinates of the eyes, nose and mouth are determined on that basis, and then the face contour is extracted. Optionally, the Sobel operator is used for face contour extraction, removing the background outside the face region. The Sobel operator is a discrete differential operator that combines Gaussian smoothing and differentiation to compute an approximate gradient of the image grey-level function. Its basic principle is to convolve the incoming image pixels; the essence of the convolution is to compute a gradient value, or in other words a weighted average, where the weights form the so-called convolution kernel. A threshold operation is then applied to the newly generated pixel grey values to determine the edge information. At an image edge the pixel values change significantly, and one way to express this change is to use derivatives: a large change in gradient value indicates a significant change in the image content. If Gx is the convolution of the original image in the x direction and Gy is the convolution of the original image in the y direction, the pixel value at the point of action after convolution is:
G = √(Gx² + Gy²)
After the new pixel values are obtained, the image edges computed by the Sobel operator are obtained by applying a given threshold. As shown in Fig. 3, Fig. 3 shows the result of edge contour extraction using the Sobel operator.
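As an illustrative sketch only (assuming NumPy; the function names are hypothetical and this is not asserted to match the embodiment's actual implementation), the convolution with the two Sobel kernels and the subsequent thresholding can be written as:

```python
import numpy as np

KX = np.array([[-1, 0, 1],
               [-2, 0, 2],
               [-1, 0, 1]], dtype=float)   # horizontal-gradient kernel (Gx)
KY = KX.T                                  # vertical-gradient kernel (Gy)

def sobel_edges(img, thresh):
    """Convolve each interior pixel with KX and KY and threshold the
    gradient magnitude G = sqrt(Gx^2 + Gy^2) to obtain an edge map."""
    h, w = img.shape
    edges = np.zeros((h, w), dtype=bool)
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            patch = img[i - 1:i + 2, j - 1:j + 2]
            gx = float((KX * patch).sum())
            gy = float((KY * patch).sum())
            edges[i, j] = (gx * gx + gy * gy) ** 0.5 > thresh
    return edges

# a vertical step edge: dark left half, bright right half
img = np.zeros((5, 6))
img[:, 3:] = 255.0
e = sobel_edges(img, 100.0)
```

Pixels straddling the dark/bright boundary get a large gradient magnitude and are marked as edges, while uniform regions on either side are not.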
步骤S50,对人脸轮廓根据透视视角进行等比例拉伸,获得预处理后人脸图像。In step S50, the face contour is proportionally stretched according to the perspective angle to obtain the pre-processed face image.
可选的,对人脸轮廓根据透视视角采用二维线性插值算法进行等比例拉伸。假设源图像大小为m×n,目标图像为a×b。那么两幅图像的边长比分别为:m/a和n/b。注意,通常这个比例不是整数,编程存储的时候要用浮点型。目标图像的第(i,j)个像素点(i行j列)可以通过边长比对应回源图像,其对应坐标为(i×m/a,j×n/b)。显然,这个对应坐标一般来说不是整数,而非整数的坐标是无法在图像这种离散数据上使用的。双线性插值通过寻找距离这个对应坐标最近的四个像素点,来计算该点的值(灰度值或者RGB值)。若图像为灰度图像,那么(i,j)点的灰度值的数学计算模型是:f(x,y)=b1+b2x+b3y+b4xy,其中b1、b2、b3、b4是相关的系数。其计算过程如下:如图4所示,图4为双线性插值的模型示意图。已知Q12、Q22、Q11、Q21,要插值的点为P点,这就要用双线性插值了:首先在x轴方向上插值得到R1和R2两个点,然后根据R1和R2对P点进行插值,这就是双线性插值。 Optionally, the face contour is proportionally stretched according to the perspective angle using a two-dimensional (bilinear) interpolation algorithm. Assume the source image size is m × n and the target image is a × b; the side-length ratios of the two images are then m/a and n/b. Note that these ratios are usually not integers and should be stored as floating-point values. The (i, j)-th pixel of the target image (row i, column j) can be mapped back to the source image through the side-length ratios, with corresponding coordinates (i × m/a, j × n/b). Obviously, these coordinates are generally not integers, and non-integer coordinates cannot be used directly on discrete data such as images. Bilinear interpolation computes the value at that point (a grey value or an RGB value) from the four pixels closest to the corresponding coordinates. If the image is a grey-scale image, the mathematical model of the grey value at point (i, j) is f(x, y) = b1 + b2x + b3y + b4xy, where b1, b2, b3 and b4 are the relevant coefficients. The calculation process is as follows (see Figure 4, a schematic diagram of the bilinear interpolation model): Q12, Q22, Q11 and Q21 are known, and the point to be interpolated is P, so bilinear interpolation is needed. First, interpolation is performed in the x-axis direction to obtain the two points R1 and R2, and then P is interpolated from R1 and R2; this is bilinear interpolation.
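The coordinate mapping and two-step interpolation described above can be sketched in plain Python as follows (the function names are illustrative assumptions, not part of the application):

```python
import math

def bilinear(src, x, y):
    """Value at the non-integer point (x, y): interpolate along the x axis
    to get R1 and R2 (as in Fig. 4), then along the y axis to get P."""
    x0, y0 = int(math.floor(x)), int(math.floor(y))
    x1 = min(x0 + 1, len(src[0]) - 1)
    y1 = min(y0 + 1, len(src) - 1)
    dx, dy = x - x0, y - y0
    r1 = src[y0][x0] * (1 - dx) + src[y0][x1] * dx  # R1: x-interpolation at y0
    r2 = src[y1][x0] * (1 - dx) + src[y1][x1] * dx  # R2: x-interpolation at y1
    return r1 * (1 - dy) + r2 * dy                  # P: y-interpolation

def resize(src, a, b):
    """Scale an m x n grey image to a x b; target pixel (i, j) maps back to
    source coordinates (i * m / a, j * n / b) as described in the text."""
    m, n = len(src), len(src[0])
    return [[bilinear(src, j * n / b, i * m / a) for j in range(b)]
            for i in range(a)]

src = [[0, 100],
       [100, 200]]
out = resize(src, 4, 4)   # a 2 x 2 grey image stretched to 4 x 4
```

The point midway between all four neighbours, for example, receives the average of their four values.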
可选的,对人脸轮廓进行拉伸处理的同时,还包括根据相机参数对人脸轮廓的每个像素的RGB色度分量,逐点进行色度校正和亮度校正,以减少图像采集环境中的光线、摄像头的参数等的影响。经过轮廓提取和畸变校正后的人脸图像即为预处理后人脸图像。Optionally, while the face contour is being stretched, chromaticity correction and brightness correction are also performed point by point on the RGB chrominance components of each pixel of the face contour according to the camera parameters, so as to reduce the influence of lighting in the image acquisition environment, camera parameters, and the like. The face image obtained after contour extraction and distortion correction serves as the pre-processed face image.
步骤S60,根据所述训练模型对所述预处理后人脸图像进行类别预测,得到待检测人脸图像的评测BMI值。Step S60: Perform category prediction on the pre-processed face image according to the training model to obtain the evaluation BMI value of the face image to be detected.
可选的,所述人脸图像的评测BMI值可以为BMI极低、BMI正常或者 BMI极高,将评测BMI值实时返回给线下终端,例如智能手机、平板电脑、便携计算机。Optionally, the evaluation BMI value of the face image may be extremely low BMI, normal BMI, or extremely high BMI, and the evaluation BMI value is returned to an offline terminal in real time, such as a smartphone, tablet computer, or portable computer.
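For illustration only (the English class labels and function names here are assumptions, not text from the application), the three-class softmax prediction referred to in this step and in the training step can be sketched as:

```python
import math

CLASSES = ["BMI extremely low", "BMI normal", "BMI extremely high"]

def softmax(logits):
    """Numerically stable softmax over the three class scores."""
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def predict(logits):
    """Output the category value corresponding to the maximum probability."""
    probs = softmax(logits)
    return CLASSES[probs.index(max(probs))]
```

For example, class scores of [0.1, 2.0, 0.3] would be reported as the "BMI normal" category.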
本实施例提出的BMI评测方法,进一步地,在本申请方法的另一实施例中,该方法在步骤S60后还包括如下步骤:Further, in another embodiment of the method of the present application, the BMI evaluation method proposed in this embodiment further includes the following steps after step S60:
获取用户上传的实测BMI值;Obtain the measured BMI value uploaded by the user;
将所述评测BMI值与所述实测BMI值采用卷积网络模型进行微调训练;Use the convolutional network model to fine-tune the evaluation BMI value and the measured BMI value;
更新迭代所述学习模型。Update and iterate the learning model.
具体的,用户在获得所述学习模型评测的评测BMI值以后,可以选择上传该用户的经过实际测量的实测BMI值,以对所述学习模型进行微调训练,实现模型的快速更新迭代。微调学习模型的时候,以前述10000个人脸样本为例,将前8000个小批量样本的学习率设置为0.001,后2000个小批量样本的学习率设置为0.0001,每次迭代的小批量大小为300,动量值设置为0.9,权重衰减值为0.0005。Specifically, after obtaining the evaluation BMI value produced by the learning model, the user may choose to upload his or her actually measured BMI value to perform fine-tuning training on the learning model, achieving rapid update and iteration of the model. When fine-tuning the learning model, taking the aforementioned 10,000 face samples as an example, the learning rate for the first 8,000 mini-batch samples is set to 0.001, the learning rate for the last 2,000 mini-batch samples is set to 0.0001, the mini-batch size of each iteration is 300, the momentum value is set to 0.9, and the weight decay value is 0.0005.
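The optimiser settings quoted above (learning-rate schedule 0.001 → 0.0001, momentum 0.9, weight decay 0.0005) can be sketched as a plain-Python SGD update for a single scalar weight; this is an illustrative sketch under stated assumptions, not the application's actual training code:

```python
def finetune_lr(sample_index):
    """Piecewise learning rate from the text: 0.001 for the first 8000
    samples of the 10000-sample fine-tuning set, 0.0001 for the rest."""
    return 0.001 if sample_index < 8000 else 0.0001

def sgd_momentum_step(w, grad, v, lr, momentum=0.9, weight_decay=0.0005):
    """One SGD update with momentum 0.9 and weight decay 0.0005
    (the quoted optimiser settings) applied to a scalar weight."""
    g = grad + weight_decay * w   # weight decay adds an L2 penalty gradient
    v = momentum * v - lr * g     # velocity accumulates past gradients
    return w + v, v

w, v = 1.0, 0.0
w, v = sgd_momentum_step(w, grad=0.2, v=v, lr=finetune_lr(0))
```

In an actual framework the same schedule would be supplied to the optimiser; a positive gradient pushes the weight down, as the small test step shows.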
对不同的用户,针对其上传的经过实测的实测BMI值,增大其人脸图片在训练过程中的权重,以增强学习模型的泛化性,更好匹配该用户的实际身体情况。For different users, the weight of a user's face pictures in the training process is increased according to the measured BMI value uploaded by that user, so as to enhance the generalization of the learning model and better match the user's actual physical condition.
可选地,在其他实施例中,BMI评测程序还可以被分割为一个或者多个模块,一个或者多个模块被存储于存储器11中,并由一个或多个处理器(本实施例为处理器12)所执行以完成本申请,本申请所称的模块是指能够完成特定功能的一系列计算机程序指令段,用于描述BMI评测程序在BMI评测装置中的执行过程。Optionally, in other embodiments, the BMI evaluation program may also be divided into one or more modules, and the one or more modules are stored in the memory 11 and executed by one or more processors (the processor 12 in this embodiment) to implement this application. The module referred to in this application means a series of computer program instruction segments capable of performing specific functions, used to describe the execution process of the BMI evaluation program in the BMI evaluation device.
例如,参照图3所示,为本申请BMI评测装置一实施例中的BMI评测程序的程序模块示意图,该实施例中,BMI评测程序可以被分割为样本数据采集模块10、样本数据模型训练模块20、人脸关键点定位模块30,人脸轮廓提取模块40,人脸轮廓拉伸模块50,人脸图像BMI值预测模块60。For example, referring to FIG. 3, which is a schematic diagram of the program modules of the BMI evaluation program in an embodiment of the BMI evaluation device of the present application, in this embodiment the BMI evaluation program may be divided into a sample data collection module 10, a sample data model training module 20, a face key point positioning module 30, a face contour extraction module 40, a face contour stretching module 50, and a face image BMI value prediction module 60.
示例性地:Exemplarily:
样本数据采集模块10用于:采集样本人脸图像数据,所述样本人脸图像数据包括BMI值;The sample data collection module 10 is used to: collect sample face image data, the sample face image data includes a BMI value;
样本数据模型训练模块20用于:采用卷积神经网络训练样本人脸图像数据,获得训练模型;The sample data model training module 20 is used to: train the sample face image data using a convolutional neural network to obtain the training model;
人脸关键点定位模块30用于:获取待检测的人脸图像,对所述待检测人脸图像的人脸关键特征点进行定位,获取人脸关键点;The face key point positioning module 30 is used to: acquire a face image to be detected, locate key feature points of the face to be detected, and obtain face key points;
人脸轮廓提取模块40用于:根据人脸关键点,进行人脸轮廓提取,获取人脸轮廓;The face contour extraction module 40 is used to: extract the face contour according to the key points of the face to obtain the face contour;
人脸轮廓拉伸模块50:用于对所述人脸轮廓根据透视视角进行等比例拉伸,获得预处理后人脸图像;Face contour stretching module 50: used to proportionally stretch the face contour according to the perspective angle to obtain the pre-processed face image;
人脸图像BMI值预测模块60:用于根据所述训练模型对所述预处理后人脸图像进行类别预测,得到待检测人脸图像的评测BMI值。Face image BMI value prediction module 60: used to perform category prediction on the pre-processed face image according to the training model to obtain the evaluation BMI value of the face image to be detected.
上述样本数据采集模块10、样本数据模型训练模块20、人脸关键点定位模块30,人脸轮廓提取模块40,人脸轮廓拉伸模块50,人脸图像BMI值预测模块60等程序模块被执行时所实现的功能或操作步骤与上述实施例大体相同,在此不再赘述。The functions or operation steps implemented when the above program modules (the sample data collection module 10, the sample data model training module 20, the face key point positioning module 30, the face contour extraction module 40, the face contour stretching module 50, the face image BMI value prediction module 60, and so on) are executed are substantially the same as those in the above embodiments, and will not be repeated here.
此外,本申请实施例还提出一种计算机可读存储介质,所述计算机可读存储介质上存储有BMI评测程序,所述BMI评测程序可被一个或多个处理器执行,以实现如下操作:In addition, an embodiment of the present application further proposes a computer-readable storage medium on which a BMI evaluation program is stored. The BMI evaluation program may be executed by one or more processors to implement the following operations:
步骤S10,采集样本人脸图像数据,所述样本人脸图像数据包括BMI值;Step S10: Collect sample face image data, the sample face image data includes a BMI value;
步骤S20,采用卷积神经网络训练样本人脸图像数据,获得训练模型;Step S20, the convolutional neural network is used to train the sample face image data to obtain the training model;
步骤S30,获取待检测的人脸图像,对所述待检测人脸图像的人脸关键特征点进行定位,获取人脸关键点;Step S30: Acquire a face image to be detected, locate key feature points of the face to be detected, and obtain key points of the face;
步骤S40,根据人脸关键点,进行人脸轮廓提取,获取人脸轮廓;Step S40: Perform face contour extraction based on key points of the face to obtain the face contour;
步骤S50,对所述人脸轮廓根据透视视角进行等比例拉伸,获得预处理后人脸图像;In step S50, the face contour is proportionally stretched according to the perspective angle to obtain a pre-processed face image;
步骤S60,根据所述训练模型对所述预处理后人脸图像进行类别预测,得到待检测人脸图像的评测BMI值。Step S60: Perform category prediction on the pre-processed face image according to the training model to obtain the evaluation BMI value of the face image to be detected.
本申请计算机可读存储介质具体实施方式与上述BMI评测装置和方法各实施例基本相同,在此不作累述。The specific implementation of the computer-readable storage medium of the present application is basically the same as the above embodiments of the BMI evaluation device and method, and will not be described here.
本申请提出的BMI评测方法、装置及计算机可读存储介质,通过预先采集人脸图像与BMI值,经过学习算法训练网络模型,根据训练好的网络模型 预测待检测的人脸图像的BMI值,无需复杂的测量设备,就能自动完成BMI检测,不仅大大降低了BMI测量者的测量难度,还能实时测量供测量者监控自身的健康状况。The BMI evaluation method, device and computer-readable storage medium proposed in this application collect face images and BMI values in advance, train a network model through a learning algorithm, and predict the BMI value of a face image to be detected according to the trained network model. BMI detection can thus be completed automatically without complicated measurement equipment, which not only greatly reduces the measurement difficulty for users but also enables real-time measurement so that users can monitor their own health status.
需要说明的是,上述本申请实施例序号仅仅为了描述,不代表实施例的优劣。并且本文中的术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的过程、装置、物品或者方法不仅包括那些要素,而且还包括没有明确列出的其他要素,或者是还包括为这种过程、装置、物品或者方法所固有的要素。在没有更多限制的情况下,由语句“包括一个……”限定的要素,并不排除在包括该要素的过程、装置、物品或者方法中还存在另外的相同要素。It should be noted that the serial numbers of the above embodiments of the present application are for description only and do not represent the relative merits of the embodiments. Moreover, the terms "comprise", "include", or any other variants thereof are intended to cover non-exclusive inclusion, so that a process, device, article or method that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, device, article or method. Without further limitation, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, device, article or method that includes that element.
通过以上的实施方式的描述,本领域的技术人员可以清楚地了解到上述实施例方法可借助软件加必需的通用硬件平台的方式来实现,当然也可以通过硬件,但很多情况下前者是更佳的实施方式。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分可以以软件产品的形式体现出来,该计算机软件产品存储在如上所述的一个存储介质(如ROM/RAM、磁碟、光盘)中,包括若干指令用以使得一台终端设备(可以是手机,计算机,服务器,或者网络设备等)执行本申请各个实施例所述的方法。Through the description of the above embodiments, those skilled in the art can clearly understand that the methods of the above embodiments can be implemented by means of software plus a necessary general-purpose hardware platform, and of course also by hardware, but in many cases the former is the better implementation. Based on this understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, can be embodied in the form of a software product. The computer software product is stored in a storage medium as described above (such as a ROM/RAM, a magnetic disk, or an optical disk) and includes several instructions to enable a terminal device (which may be a mobile phone, a computer, a server, a network device, or the like) to perform the method described in each embodiment of the present application.
以上仅为本申请的优选实施例,并非因此限制本申请的专利范围,凡是利用本申请说明书及附图内容所作的等效结构或等效流程变换,或直接或间接运用在其他相关的技术领域,均同理包括在本申请的专利保护范围内。The above are only the preferred embodiments of the present application and do not thereby limit the patent scope of the present application. Any equivalent structure or equivalent process transformation made using the contents of the description and drawings of this application, or any direct or indirect application in other related technical fields, is likewise included within the patent protection scope of this application.

Claims (20)

  1. 一种BMI评测方法,其特征在于,所述方法包括:A BMI evaluation method, characterized in that the method includes:
    采集样本人脸图像数据,所述样本人脸图像数据包括BMI值;Collect sample face image data, the sample face image data includes the BMI value;
    采用卷积神经网络训练样本人脸图像数据,获得训练模型;Use convolutional neural network to train the sample face image data to obtain the training model;
    获取待检测的人脸图像,对所述待检测人脸图像的人脸关键特征点进行定位,获取人脸关键点;Acquiring a face image to be detected, locating key feature points of the face in the face image to be detected, and acquiring key points of the face;
    根据人脸关键点,进行人脸轮廓提取,获取人脸轮廓;According to the key points of the face, extract the face contour to obtain the face contour;
    对所述人脸轮廓根据透视视角进行等比例拉伸,获得预处理后人脸图像;Stretching the face contour proportionally according to the perspective angle to obtain a pre-processed face image;
    根据所述训练模型对所述预处理后人脸图像进行类别预测,得到待检测人脸图像的评测BMI值。Perform category prediction on the pre-processed face image according to the training model to obtain the evaluation BMI value of the face image to be detected.
  2. 根据权利要求1所述的BMI评测方法,其特征在于,步骤对所述人脸轮廓根据透视视角进行等比例拉伸,获得预处理后人脸图像的同时,还包括步骤:The BMI evaluation method according to claim 1, wherein, while the step of proportionally stretching the face contour according to the perspective angle to obtain the pre-processed face image is performed, the method further comprises the step of:
    根据相机参数对人脸轮廓区域的每个像素的RGB色度分量,逐点进行色度校正和亮度校正。According to the camera parameters, the RGB chroma component of each pixel in the contour area of the human face is subjected to chroma correction and brightness correction point by point.
  3. 根据权利要求1所述的BMI评测方法,其特征在于,步骤采用卷积神经网络训练样本人脸图像数据,获得训练模型,还包括步骤:The BMI evaluation method according to claim 1, wherein the step uses a convolutional neural network to train sample face image data to obtain a training model, further comprising the steps of:
    将人脸图像剪裁成大小为224*224的图像;Cut the face image into an image of size 224 * 224;
    将所述剪裁后的图像转成leveldb格式;Convert the cropped image to leveldb format;
    用所述leveldb格式图像训练卷积神经网络VGG-16;Training the convolutional neural network VGG-16 with the image in the leveldb format;
    用softmax函数输出BMI极低、BMI正常、BMI极高三个类别值的概率值,输出值为最大概率值对应的类别值。The softmax function is used to output the probability values of three categories: BMI extremely low, BMI normal, and BMI extremely high. The output value is the category value corresponding to the maximum probability value.
  4. 根据权利要求1所述的BMI评测方法,其特征在于,步骤获取待检测的人脸图像,对所述待检测人脸图像的人脸关键特征点进行定位,获取人脸关键点;根据人脸关键点,进行人脸轮廓提取,获取人脸轮廓;对所述人脸轮廓根据透视视角进行等比例拉伸,获得预处理后人脸图像;进一步包括步骤:The BMI evaluation method according to claim 1, wherein the steps of acquiring a face image to be detected, locating the key facial feature points of the face image to be detected and obtaining the face key points; extracting the face contour according to the face key points to obtain the face contour; and proportionally stretching the face contour according to the perspective angle to obtain a pre-processed face image, further comprise the steps of:
    获取待检测的人脸图像,采用主动形状模型算法对所述待检测人脸图像的人脸关键特征点进行定位,获取人脸关键点;Acquiring a face image to be detected, and using an active shape model algorithm to locate key feature points of the face to be detected, to obtain key points of the face;
    根据人脸关键点,采用sobel算子进行人脸轮廓提取,剔除人脸区域之外的背景,获取人脸轮廓;According to the key points of the face, the sobel operator is used to extract the face contour, and the background outside the face area is removed to obtain the face contour;
    对所述人脸轮廓根据透视视角采用二维线性插值算法进行等比例拉伸,获得预处理后人脸图像。A two-dimensional linear interpolation algorithm is used to proportionally stretch the face contour according to the perspective angle to obtain a pre-processed face image.
  5. 根据权利要求1所述的BMI评测方法,其特征在于,所述根据所述训练模型对所述预处理后人脸图像进行类别预测,得到待检测人脸图像的评测BMI值的步骤之后,还包括步骤:The BMI evaluation method according to claim 1, wherein after the step of performing category prediction on the pre-processed face image according to the training model to obtain the evaluation BMI value of the face image to be detected, the method further comprises the steps of:
    获取用户上传的实测BMI值;Obtain the measured BMI value uploaded by the user;
    将所述评测BMI值与所述实测BMI值采用卷积网络模型进行微调训练;Use the convolutional network model to fine-tune the evaluation BMI value and the measured BMI value;
    更新迭代所述训练模型。Update and iterate the training model.
  6. 根据权利要求2所述的BMI评测方法,其特征在于,所述根据所述训练模型对所述预处理后人脸图像进行类别预测,得到待检测人脸图像的评测BMI值的步骤之后,还包括步骤:The BMI evaluation method according to claim 2, wherein after the step of performing category prediction on the pre-processed face image according to the training model to obtain the evaluation BMI value of the face image to be detected, the method further comprises the steps of:
    获取用户上传的实测BMI值;Obtain the measured BMI value uploaded by the user;
    将所述评测BMI值与所述实测BMI值采用卷积网络模型进行微调训练;Use the convolutional network model to fine-tune the evaluation BMI value and the measured BMI value;
    更新迭代所述训练模型。Update and iterate the training model.
  7. 根据权利要求3或4所述的BMI评测方法,其特征在于,所述根据所述训练模型对所述预处理后人脸图像进行类别预测,得到待检测人脸图像的评测BMI值的步骤之后,还包括步骤:The BMI evaluation method according to claim 3 or 4, wherein after the step of performing category prediction on the pre-processed face image according to the training model to obtain the evaluation BMI value of the face image to be detected, the method further comprises the steps of:
    获取用户上传的实测BMI值;Obtain the measured BMI value uploaded by the user;
    将所述评测BMI值与所述实测BMI值采用卷积网络模型进行微调训练;Use the convolutional network model to fine-tune the evaluation BMI value and the measured BMI value;
    更新迭代所述训练模型。Update and iterate the training model.
  8. 一种BMI评测装置,其特征在于,所述装置包括存储器和处理器,所述存储器上存储有可在所述处理器上运行的BMI评测程序,所述BMI评测程序被所述处理器执行时实现如下步骤:A BMI evaluation device, characterized in that the device includes a memory and a processor, the memory storing a BMI evaluation program that can be run on the processor; when the BMI evaluation program is executed by the processor, the following steps are implemented:
    采集样本人脸图像数据,所述样本人脸图像数据包括BMI值;Collect sample face image data, the sample face image data includes the BMI value;
    采用卷积神经网络训练样本人脸图像数据,获得训练模型;Use convolutional neural network to train the sample face image data to obtain the training model;
    获取待检测的人脸图像,对所述待检测人脸图像的人脸关键特征点进行定位,获取人脸关键点;Acquiring a face image to be detected, locating key feature points of the face in the face image to be detected, and acquiring key points of the face;
    根据人脸关键点,进行人脸轮廓提取,获取人脸轮廓;According to the key points of the face, extract the face contour to obtain the face contour;
    对所述人脸轮廓根据透视视角进行等比例拉伸,获得预处理后人脸图像;Stretching the face contour proportionally according to the perspective angle to obtain a pre-processed face image;
    根据所述训练模型对所述预处理后人脸图像进行类别预测,得到待检测人脸图像的评测BMI值。Perform category prediction on the pre-processed face image according to the training model to obtain the evaluation BMI value of the face image to be detected.
  9. 根据权利要求8所述的BMI评测装置,其特征在于,步骤对所述人脸轮廓根据透视视角进行等比例拉伸,获得预处理后人脸图像的同时,还包括步骤:The BMI evaluation device according to claim 8, wherein, while the step of proportionally stretching the face contour according to the perspective angle to obtain the pre-processed face image is performed, the following step is also included:
    根据相机参数对人脸轮廓区域的每个像素的RGB色度分量,逐点进行色度校正和亮度校正。According to the camera parameters, the RGB chroma component of each pixel in the contour area of the human face is subjected to chroma correction and brightness correction point by point.
  10. 根据权利要求8所述的BMI评测装置,其特征在于,步骤采用卷积神经网络训练样本人脸图像数据,获得训练模型,还包括步骤:The BMI evaluation device according to claim 8, characterized in that the step uses a convolutional neural network to train sample face image data to obtain a training model, further comprising the steps of:
    将人脸图像剪裁成大小为224*224的图像;Cut the face image into an image of size 224 * 224;
    将所述剪裁后的图像转成leveldb格式;Convert the cropped image to leveldb format;
    用所述leveldb格式图像训练卷积神经网络VGG-16;Training the convolutional neural network VGG-16 with the image in the leveldb format;
    用softmax函数输出BMI极低、BMI正常、BMI极高三个类别值的概率值,输出值为最大概率值对应的类别值。The softmax function is used to output the probability values of three categories: BMI extremely low, BMI normal, and BMI extremely high. The output value is the category value corresponding to the maximum probability value.
  11. 根据权利要求8所述的BMI评测装置,其特征在于,步骤获取待检测的人脸图像,对所述待检测人脸图像的人脸关键特征点进行定位,获取人脸关键点;根据人脸关键点,进行人脸轮廓提取,获取人脸轮廓;对所述人脸轮廓根据透视视角进行等比例拉伸,获得预处理后人脸图像;进一步包括步骤:The BMI evaluation device according to claim 8, wherein the steps of acquiring a face image to be detected, locating the key facial feature points of the face image to be detected and obtaining the face key points; extracting the face contour according to the face key points to obtain the face contour; and proportionally stretching the face contour according to the perspective angle to obtain a pre-processed face image, further comprise the steps of:
    获取待检测的人脸图像,采用主动形状模型算法对所述待检测人脸图像的人脸关键特征点进行定位,获取人脸关键点;Acquiring a face image to be detected, and using an active shape model algorithm to locate key feature points of the face to be detected, to obtain key points of the face;
    根据人脸关键点,采用sobel算子进行人脸轮廓提取,剔除人脸区域之外的背景,获取人脸轮廓;According to the key points of the face, the sobel operator is used to extract the face contour, and the background outside the face area is removed to obtain the face contour;
    对所述人脸轮廓根据透视视角采用二维线性插值算法进行等比例拉伸,获得预处理后人脸图像。A two-dimensional linear interpolation algorithm is used to proportionally stretch the face contour according to the perspective angle to obtain a pre-processed face image.
  12. 根据权利要求8所述的BMI评测装置,其特征在于,所述根据所述训练模型对所述预处理后人脸图像进行类别预测,得到待检测人脸图像的评测BMI值的步骤之后,还包括步骤:The BMI evaluation device according to claim 8, wherein after the step of performing category prediction on the pre-processed face image according to the training model to obtain the evaluation BMI value of the face image to be detected, the following steps are also included:
    获取用户上传的实测BMI值;Obtain the measured BMI value uploaded by the user;
    将所述评测BMI值与所述实测BMI值采用卷积网络模型进行微调训练;Use the convolutional network model to fine-tune the evaluation BMI value and the measured BMI value;
    更新迭代所述训练模型。Update and iterate the training model.
  13. 根据权利要求9所述的BMI评测装置,其特征在于,所述根据所述训练模型对所述预处理后人脸图像进行类别预测,得到待检测人脸图像的评测BMI值的步骤之后,还包括步骤:The BMI evaluation device according to claim 9, wherein after the step of performing category prediction on the pre-processed face image according to the training model to obtain the evaluation BMI value of the face image to be detected, the following steps are also included:
    获取用户上传的实测BMI值;Obtain the measured BMI value uploaded by the user;
    将所述评测BMI值与所述实测BMI值采用卷积网络模型进行微调训练;Use the convolutional network model to fine-tune the evaluation BMI value and the measured BMI value;
    更新迭代所述训练模型。Update and iterate the training model.
  14. 根据权利要求10或11所述的BMI评测装置,其特征在于,所述根据所述训练模型对所述预处理后人脸图像进行类别预测,得到待检测人脸图像的评测BMI值的步骤之后,还包括步骤:The BMI evaluation device according to claim 10 or 11, wherein after the step of performing category prediction on the pre-processed face image according to the training model to obtain the evaluation BMI value of the face image to be detected, the following steps are also included:
    获取用户上传的实测BMI值;Obtain the measured BMI value uploaded by the user;
    将所述评测BMI值与所述实测BMI值采用卷积网络模型进行微调训练;Use the convolutional network model to fine-tune the evaluation BMI value and the measured BMI value;
    更新迭代所述训练模型。Update and iterate the training model.
  15. 一种计算机可读存储介质,其特征在于,所述计算机可读存储介质上存储有BMI评测程序,所述程序可被一个或者多个处理器执行,以实现如下步骤:A computer-readable storage medium, characterized in that a BMI evaluation program is stored on the computer-readable storage medium, and the program can be executed by one or more processors to implement the following steps:
    采集样本人脸图像数据,所述样本人脸图像数据包括BMI值;Collect sample face image data, the sample face image data includes the BMI value;
    采用卷积神经网络训练样本人脸图像数据,获得训练模型;Use convolutional neural network to train the sample face image data to obtain the training model;
    获取待检测的人脸图像,对所述待检测人脸图像的人脸关键特征点进行定位,获取人脸关键点;Acquiring a face image to be detected, locating key feature points of the face in the face image to be detected, and acquiring key points of the face;
    根据人脸关键点,进行人脸轮廓提取,获取人脸轮廓;According to the key points of the face, extract the face contour to obtain the face contour;
    对所述人脸轮廓根据透视视角进行等比例拉伸,获得预处理后人脸图像;Stretching the face contour proportionally according to the perspective angle to obtain a pre-processed face image;
    根据所述训练模型对所述预处理后人脸图像进行类别预测,得到待检测人脸图像的评测BMI值。Perform category prediction on the pre-processed face image according to the training model to obtain the evaluation BMI value of the face image to be detected.
  16. 根据权利要求15所述的计算机可读存储介质,其特征在于,步骤对所述人脸轮廓根据透视视角进行等比例拉伸,获得预处理后人脸图像的同时,还包括步骤:The computer-readable storage medium according to claim 15, wherein, while the step of proportionally stretching the face contour according to the perspective angle to obtain the pre-processed face image is performed, the following step is also included:
    根据相机参数对人脸轮廓区域的每个像素的RGB色度分量,逐点进行色度校正和亮度校正。According to the camera parameters, the RGB chroma component of each pixel in the contour area of the human face is subjected to chroma correction and brightness correction point by point.
  17. 根据权利要求15所述的计算机可读存储介质,其特征在于,步骤采用卷积神经网络训练样本人脸图像数据,获得训练模型,还包括步骤:The computer-readable storage medium according to claim 15, wherein the step uses a convolutional neural network to train the sample face image data to obtain the training model, further comprising the steps of:
    将人脸图像剪裁成大小为224*224的图像;Cut the face image into an image of size 224 * 224;
    将所述剪裁后的图像转成leveldb格式;Convert the cropped image to leveldb format;
    用所述leveldb格式图像训练卷积神经网络VGG-16;Training the convolutional neural network VGG-16 with the image in the leveldb format;
    用softmax函数输出BMI极低、BMI正常、BMI极高三个类别值的概率值,输出值为最大概率值对应的类别值。The softmax function is used to output the probability values of three categories: BMI extremely low, BMI normal, and BMI extremely high. The output value is the category value corresponding to the maximum probability value.
  18. 根据权利要求15所述的计算机可读存储介质,其特征在于,步骤获取待检测的人脸图像,对所述待检测人脸图像的人脸关键特征点进行定位,获取人脸关键点;根据人脸关键点,进行人脸轮廓提取,获取人脸轮廓;对所述人脸轮廓根据透视视角进行等比例拉伸,获得预处理后人脸图像;进一步包括步骤:The computer-readable storage medium according to claim 15, wherein the steps of acquiring a face image to be detected, locating the key facial feature points of the face image to be detected and obtaining the face key points; extracting the face contour according to the face key points to obtain the face contour; and proportionally stretching the face contour according to the perspective angle to obtain a pre-processed face image, further comprise the steps of:
    获取待检测的人脸图像,采用主动形状模型算法对所述待检测人脸图像的人脸关键特征点进行定位,获取人脸关键点;Acquiring a face image to be detected, and using an active shape model algorithm to locate key feature points of the face to be detected, to obtain key points of the face;
    根据人脸关键点,采用sobel算子进行人脸轮廓提取,剔除人脸区域之外的背景,获取人脸轮廓;According to the key points of the face, the sobel operator is used to extract the face contour, and the background outside the face area is removed to obtain the face contour;
    对所述人脸轮廓根据透视视角采用二维线性插值算法进行等比例拉伸,获得预处理后人脸图像。A two-dimensional linear interpolation algorithm is used to proportionally stretch the face contour according to the perspective angle to obtain a pre-processed face image.
  19. 根据权利要求15或16所述的计算机可读存储介质,其特征在于,所述根据所述训练模型对所述预处理后人脸图像进行类别预测,得到待检测人脸图像的评测BMI值的步骤之后,还包括步骤:The computer-readable storage medium according to claim 15 or 16, wherein after the step of performing category prediction on the pre-processed face image according to the training model to obtain the evaluation BMI value of the face image to be detected, the following steps are also included:
    获取用户上传的实测BMI值;Obtain the measured BMI value uploaded by the user;
    将所述评测BMI值与所述实测BMI值采用卷积网络模型进行微调训练;Use the convolutional network model to fine-tune the evaluation BMI value and the measured BMI value;
    更新迭代所述训练模型。Update and iterate the training model.
  20. 根据权利要求17或18所述的计算机可读存储介质,其特征在于,所述根据所述训练模型对所述预处理后人脸图像进行类别预测,得到待检测人脸图像的评测BMI值的步骤之后,还包括步骤:The computer-readable storage medium according to claim 17 or 18, wherein after the step of performing category prediction on the pre-processed face image according to the training model to obtain the evaluation BMI value of the face image to be detected, the following steps are also included:
    获取用户上传的实测BMI值;Obtain the measured BMI value uploaded by the user;
    将所述评测BMI值与所述实测BMI值采用卷积网络模型进行微调训练;Use the convolutional network model to fine-tune the evaluation BMI value and the measured BMI value;
    更新迭代所述训练模型。Update and iterate the training model.
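Claims 19 and 20 both describe the same feedback loop: compare the evaluated (predicted) BMI value against a user-uploaded measured BMI value and fine-tune the model on that pair. A minimal sketch of one such update, under a deliberate simplification: the convolutional network is reduced to a single linear regression head, as is common when only the last layer is fine-tuned. The function name `finetune_step` and its signature are illustrative, not from the patent.

```python
import numpy as np

def finetune_step(weights, features, measured_bmi, lr=1e-2):
    """One SGD update of a linear regression head on (features, measured BMI).

    Minimizes the squared error between the evaluated BMI (the head's
    prediction on the face features) and the user-supplied measured BMI.
    Returns the updated weights and the pre-update prediction.
    """
    pred = float(features @ weights)               # evaluated BMI value
    grad = 2.0 * (pred - measured_bmi) * features  # d/dw of (pred - y)^2
    return weights - lr * grad, pred
```

Iterating this step over newly uploaded (image, measured BMI) pairs is what "updating and iterating the training model" amounts to in this simplified picture; a real fine-tuning pass would backpropagate through the convolutional layers as well.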
PCT/CN2019/088637 2018-11-20 2019-05-27 Bmi evaluation method and device, and computer readable storage medium WO2020103417A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811384493.8 2018-11-20
CN201811384493.8A CN109637664A (en) 2018-11-20 2018-11-20 BMI evaluation method and device, and computer-readable storage medium

Publications (1)

Publication Number Publication Date
WO2020103417A1 true WO2020103417A1 (en) 2020-05-28

Family

ID=66068616

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/088637 WO2020103417A1 (en) 2018-11-20 2019-05-27 Bmi evaluation method and device, and computer readable storage medium

Country Status (2)

Country Link
CN (1) CN109637664A (en)
WO (1) WO2020103417A1 (en)


Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109637664A (en) * 2018-11-20 2019-04-16 平安科技(深圳)有限公司 A kind of BMI evaluating method, device and computer readable storage medium
CN110082283B (en) * 2019-05-23 2021-12-14 山东科技大学 Atmospheric particulate SEM image recognition method and system
CN110570442A (en) * 2019-09-19 2019-12-13 厦门市美亚柏科信息股份有限公司 Contour detection method under complex background, terminal device and storage medium
CN112582063A (en) * 2019-09-30 2021-03-30 长沙昱旻信息科技有限公司 BMI prediction method, device, system, computer storage medium, and electronic apparatus
CN113436735A (en) * 2020-03-23 2021-09-24 北京好啦科技有限公司 Body weight index prediction method, device and storage medium based on face structure measurement
CN111861875A (en) * 2020-07-30 2020-10-30 北京金山云网络技术有限公司 Face beautifying method, device, equipment and medium
CN112067054A (en) * 2020-09-15 2020-12-11 中山大学 Intelligent dressing mirror based on BMI detection
CN113191189A (en) * 2021-03-22 2021-07-30 深圳市百富智能新技术有限公司 Face living body detection method, terminal device and computer readable storage medium
CN113591704B (en) * 2021-07-30 2023-08-08 四川大学 Body mass index estimation model training method and device and terminal equipment
CN114496263B (en) * 2022-04-13 2022-07-12 杭州研极微电子有限公司 Neural network model establishing method and storage medium for body mass index estimation

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180289334A1 (en) * 2017-04-05 2018-10-11 doc.ai incorporated Image-based system and method for predicting physiological parameters
CN108875590A (en) * 2018-05-25 2018-11-23 平安科技(深圳)有限公司 BMI prediction technique, device, computer equipment and storage medium
CN109637664A (en) * 2018-11-20 2019-04-16 平安科技(深圳)有限公司 BMI evaluation method and device, and computer-readable storage medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104851123B * 2014-02-13 2018-02-06 北京师范大学 Three-dimensional face change modeling method
CN104504376A (en) * 2014-12-22 2015-04-08 厦门美图之家科技有限公司 Age classification method and system for face images
CN108182384B (en) * 2017-12-07 2020-09-29 浙江大华技术股份有限公司 Face feature point positioning method and device
CN108629303A (en) * 2018-04-24 2018-10-09 杭州数为科技有限公司 Face shape defect identification method and system


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
KOCABEY, E. ET AL.: "Face-to-BMI: Using Computer Vision to Infer Body Mass Index on Social Media", PROCEEDINGS OF THE ELEVENTH INTERNATIONAL AAAI CONFERENCE ON WEB AND SOCIAL MEDIA (ICWSM 2017), 9 March 2017 (2017-03-09), pages 572 - 575, XP055709668 *
WEN, LINGYUN ET AL.: "A Computational Approach to Body Mass Index Prediction from Face Images", IMAGE AND VISION COMPUTING, vol. 31, no. 5, 2 April 2013 (2013-04-02), pages 392 - 400, XP055476402, DOI: 10.1016/j.imavis.2013.03.001 *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111916219A (en) * 2020-07-17 2020-11-10 深圳中集智能科技有限公司 Intelligent safety early warning method, device and electronic system for inspection and quarantine
CN112529888A (en) * 2020-12-18 2021-03-19 平安科技(深圳)有限公司 Face image evaluation method, device, equipment and medium based on deep learning
CN112529888B (en) * 2020-12-18 2024-04-30 平安科技(深圳)有限公司 Face image evaluation method, device, equipment and medium based on deep learning
CN116433700A (en) * 2023-06-13 2023-07-14 山东金润源法兰机械有限公司 Visual positioning method for flange part contour
CN116433700B (en) * 2023-06-13 2023-08-18 山东金润源法兰机械有限公司 Visual positioning method for flange part contour

Also Published As

Publication number Publication date
CN109637664A (en) 2019-04-16

Similar Documents

Publication Publication Date Title
WO2020103417A1 (en) Bmi evaluation method and device, and computer readable storage medium
Yani et al. Application of transfer learning using convolutional neural network method for early detection of terry’s nail
CN110662484B (en) System and method for whole body measurement extraction
US11017547B2 (en) Method and system for postural analysis and measuring anatomical dimensions from a digital image using machine learning
US10679046B1 (en) Machine learning systems and methods of estimating body shape from images
JP2022521844A (en) Systems and methods for measuring weight from user photos using deep learning networks
JP6624794B2 (en) Image processing apparatus, image processing method, and program
WO2016150240A1 (en) Identity authentication method and apparatus
CN104008375B Integrated face recognition method based on feature fusion
JP5762730B2 (en) Human detection device and human detection method
CN113362382A (en) Three-dimensional reconstruction method and three-dimensional reconstruction device
CN104318243B Hyperspectral data dimensionality reduction method based on sparse representation and spatial-spectral Laplacian graph
WO2021012494A1 (en) Deep learning-based face recognition method and apparatus, and computer-readable storage medium
JP2018055470A (en) Facial expression recognition method, facial expression recognition apparatus, computer program, and advertisement management system
CN107194361A Two-dimensional pose detection method and device
CN112669348B (en) Fish body posture estimation and fish body surface type data measurement method and device
CN108875459A Face recognition method and system based on sparse-coefficient-similarity weighted sparse representation
CN113435282B (en) Unmanned aerial vehicle image ear recognition method based on deep learning
WO2021087425A1 (en) Methods and systems for generating 3d datasets to train deep learning networks for measurements estimation
CN106886754B Object recognition method and system in three-dimensional scenes based on triangular patches
CN109993116B (en) Pedestrian re-identification method based on mutual learning of human bones
CN107832695A Optic disc recognition method and device in retinal images based on texture features
US11495049B2 (en) Biometric feature reconstruction method, storage medium and neural network
CN113011506A (en) Texture image classification method based on depth re-fractal spectrum network
CN107316025B (en) Hand gesture recognition method and system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19887660

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 10/09/2021); ADDRESS OF THE ADDRESSEE COULD NOT BE ESTABLISHED

122 Ep: pct application non-entry in european phase

Ref document number: 19887660

Country of ref document: EP

Kind code of ref document: A1