CN110472509B - Fat-lean recognition method and device based on face image and electronic equipment - Google Patents


Publication number
CN110472509B
CN110472509B (application CN201910636941.7A)
Authority
CN
China
Prior art keywords
face image
fat
thin
applicant
training set
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910636941.7A
Other languages
Chinese (zh)
Other versions
CN110472509A (en)
Inventor
李宝林
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Life Insurance Company of China Ltd
Original Assignee
Ping An Life Insurance Company of China Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Life Insurance Company of China Ltd filed Critical Ping An Life Insurance Company of China Ltd
Priority to CN201910636941.7A priority Critical patent/CN110472509B/en
Publication of CN110472509A publication Critical patent/CN110472509A/en
Application granted granted Critical
Publication of CN110472509B publication Critical patent/CN110472509B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/30Noise filtering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16HHEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/30ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for calculating health indices; for individual health risk assessment


Abstract

The invention relates to the technical field of intelligent decision making, and in particular to a fat-thin recognition method based on face images. The method comprises the following steps: marking each acquired face image sample with its fat-thin type; forming a training set from a plurality of face image samples marked with fat-thin types; training a convolutional neural network with the training set to obtain a classification model; acquiring a face image of an applicant; denoising the face image to obtain a face image to be detected; and inputting the face image to be detected into the classification model to obtain a classification result of the face image to be detected, where the classification result describes the fat-thin type of the applicant. By constructing the classification model, the fat-thin type of an applicant can be identified automatically from the applicant's face image, which assists the underwriter in checking, saves labor cost, and improves the efficiency of application auditing.

Description

Fat-lean recognition method and device based on face image and electronic equipment
Technical Field
The invention relates to the technical field of intelligent decision making, and in particular to a face image-based fat-thin recognition method and device and electronic equipment.
Background
In the current insurance industry, after an applicant submits an insurance application, the insurer generally has an underwriter audit the applicant's physical health, medical history, body fat-thin condition, and the like, to prevent applicants who already suffer from diseases from maliciously defrauding the insurer.
The existing way of auditing the body fat-thin condition is to manually review the related information provided by the applicant (such as height, weight, or a self-description of being fat or thin) together with the applicant's face image, which consumes considerable manpower and makes auditing inefficient.
In addition, since an excessively obese person is more likely to be ill than a person of normal build, an applicant who is already ill may misreport the fat-thin information in order to pass the examination, so manual auditing is prone to inaccuracy.
Disclosure of Invention
In order to solve the problem in the related art that application auditing efficiency is too low, the invention provides a face image-based fat-thin recognition method and device and electronic equipment.
The first aspect of the embodiments of the invention discloses a fat-thin recognition method based on a face image, which comprises the following steps:
marking each acquired face image sample with its fat-thin type;
forming a training set from a plurality of face image samples marked with fat-thin types;
training a convolutional neural network with the training set to obtain a classification model;
acquiring a face image of an applicant;
denoising the face image to obtain a face image to be detected;
and inputting the face image to be detected into the classification model to obtain a classification result of the face image to be detected, wherein the classification result describes the fat-thin type of the applicant.
As an optional implementation manner, in the first aspect of the embodiments of the present invention, after inputting the face image to be detected into the classification model and obtaining the classification result of the face image to be detected, the method further includes:
judging whether the classification result meets a preset target application condition matched with the body fat-thin condition;
if the target application condition is met, outputting first prompt information describing that the applicant's body fat-thin condition has passed the check;
and if the target application condition is not met, outputting second prompt information describing that the applicant's body fat-thin condition has failed the check.
As an optional implementation manner, in the first aspect of the embodiments of the present invention, the fat-thin type is over-fat, normal, or over-thin; judging whether the classification result meets a preset target application condition matched with the body fat-thin condition comprises:
judging whether the classification result describes the applicant as normal;
if the classification result describes the applicant as normal, judging that the classification result meets the preset target application condition matched with the body fat-thin condition;
if the classification result does not describe the applicant as normal, judging that the classification result does not meet the preset target application condition matched with the body fat-thin condition.
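The decision rule above can be sketched as a small function. The function name, return strings, and class labels below are illustrative assumptions, not names taken from the patent:

```python
# Minimal sketch of the target-application check described above.
# Label strings and the function name are illustrative assumptions.

FAT_THIN_TYPES = ("over-thin", "normal", "over-fat")

def check_application(classification_result: str) -> str:
    """Return a prompt message based on the classifier's fat-thin label."""
    if classification_result not in FAT_THIN_TYPES:
        raise ValueError(f"unknown fat-thin type: {classification_result}")
    if classification_result == "normal":
        # First prompt information: the body fat-thin check passes.
        return "check passed"
    # Second prompt information: the body fat-thin check fails.
    return "check failed"
```
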
As an optional implementation manner, in the first aspect of the embodiments of the present invention, before training the convolutional neural network with the training set to obtain the classification model, the method further includes:
judging whether rare samples exist in the training set;
if rare samples exist, increasing the number of samples of the category corresponding to the rare samples to obtain a uniform training set;
and training the convolutional neural network with the training set to obtain the classification model then comprises: training the convolutional neural network with the uniform training set to obtain the classification model.
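One simple way to even out such a class imbalance is random oversampling of the rare category. The sketch below (pure Python; function and variable names are assumptions for illustration) duplicates samples of underrepresented categories until every category has as many samples as the largest one:

```python
import random
from collections import Counter

def balance_training_set(samples, labels, seed=0):
    """Oversample rare classes until every class has as many samples
    as the largest class. Returns new (samples, labels) lists."""
    rng = random.Random(seed)
    counts = Counter(labels)
    target = max(counts.values())
    out_samples, out_labels = list(samples), list(labels)
    for cls, n in counts.items():
        pool = [s for s, y in zip(samples, labels) if y == cls]
        for _ in range(target - n):
            out_samples.append(rng.choice(pool))  # duplicate a random rare sample
            out_labels.append(cls)
    return out_samples, out_labels
```

In practice the duplicated images would typically also be augmented (flips, crops, noise) rather than copied verbatim, but plain duplication already yields the "uniform training set" of the claim.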
As an optional implementation manner, in the first aspect of the embodiments of the present invention, training the convolutional neural network with the training set to obtain a classification model includes:
inputting a face image sample from the training set into the convolutional neural network, and outputting an actual output result for the face image sample;
obtaining an ideal output result according to the fat-thin type corresponding to the face image sample;
calculating the difference between the actual output result and the ideal output result;
back-propagating according to the difference so as to correct the parameter weights of the convolutional neural network;
repeating the above steps until the difference meets a preset condition, to obtain the classification model.
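The loop above (forward pass, compare with the ideal output, back-propagate, repeat until the error is small) can be illustrated with a deliberately simplified stand-in: a single softmax layer trained on random features instead of a full convolutional network. The synthetic data and all names here are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for face-image features: 90 samples, 8 features,
# 3 fat-thin classes (0 = over-thin, 1 = normal, 2 = over-fat).
X = rng.normal(size=(90, 8))
true_W = rng.normal(size=(8, 3))
y = np.argmax(X @ true_W, axis=1)          # labels for the ideal output
Y = np.eye(3)[y]                           # one-hot "ideal output result"

W = np.zeros((8, 3))                       # parameter weights to learn
lr = 0.5

def forward(X, W):
    """Softmax over linear scores (a one-layer stand-in for the CNN)."""
    z = X @ W
    z -= z.max(axis=1, keepdims=True)      # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

losses = []
for step in range(200):
    P = forward(X, W)                      # "actual output result"
    diff = P - Y                           # difference vs the ideal output
    loss = -np.mean(np.sum(Y * np.log(P + 1e-12), axis=1))
    losses.append(loss)
    W -= lr * (X.T @ diff) / len(X)        # back-propagate, correct weights

accuracy = np.mean(np.argmax(forward(X, W), axis=1) == y)
```

The stopping rule in the patent ("until the difference meets a preset condition") corresponds to breaking out of the loop once `loss` drops below a chosen threshold.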
As an optional implementation manner, in the first aspect of the embodiments of the present invention, the classification model includes a convolution layer, an excitation layer, and a fully connected layer; inputting the face image to be detected into the classification model to obtain a classification result of the face image to be detected comprises:
inputting the face image to be detected into the convolution layer, and applying a plurality of convolution kernels in the convolution layer to extract features of the face image to be detected, so as to obtain a feature image;
inputting the feature image into the excitation layer, and carrying out nonlinear space mapping on the feature image in the excitation layer to obtain a feature vector;
and inputting the feature vector into the fully connected layer, and carrying out feature integration on the feature vector in the fully connected layer to obtain the classification result of the face image to be detected.
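A minimal numpy sketch of this convolution → excitation → fully-connected pipeline follows. The image size, kernel count, and class count are illustrative assumptions, and the naive loop-based convolution stands in for an optimized implementation:

```python
import numpy as np

rng = np.random.default_rng(1)

def conv2d_valid(img, kernel):
    """Naive 2-D valid cross-correlation of one channel with one kernel."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i+kh, j:j+kw] * kernel)
    return out

image = rng.normal(size=(16, 16))            # stand-in face image to be detected
kernels = rng.normal(size=(4, 3, 3))         # a plurality of convolution kernels

# Convolution layer: one feature image per kernel.
feature_maps = np.stack([conv2d_valid(image, k) for k in kernels])

# Excitation layer: nonlinear mapping (ReLU), flattened to a feature vector.
feature_vector = np.maximum(feature_maps, 0.0).ravel()

# Fully connected layer: integrate features into 3 fat-thin class scores.
W_fc = rng.normal(size=(feature_vector.size, 3))
scores = feature_vector @ W_fc
predicted_class = int(np.argmax(scores))
```
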
In an optional implementation manner, in the first aspect of the embodiments of the present invention, denoising the face image to obtain the face image to be detected includes:
acquiring an image signal to be processed from the face image;
performing low-pass filtering on the image signal to be processed to obtain a low-frequency image signal;
calculating the difference between the image signal to be processed and the low-frequency image signal to obtain a high-frequency image signal;
denoising the low-frequency image signal and the high-frequency image signal with a non-local means method to obtain a denoised low-frequency image signal and a denoised high-frequency image signal;
summing the denoised low-frequency image signal and the denoised high-frequency image signal to obtain a denoised image signal;
and obtaining the face image to be detected from the denoised image signal.
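The low-/high-frequency split can be sketched in numpy as follows. For brevity the non-local means step is replaced with a trivial placeholder denoiser; a real implementation would apply a proper NLM filter to each band (that substitution, and all names below, are assumptions of this sketch, not the patent's method):

```python
import numpy as np

def box_lowpass(img, k=3):
    """Simple k-by-k mean filter (low-pass) with edge padding."""
    p = k // 2
    padded = np.pad(img, p, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def placeholder_denoise(signal):
    """Stand-in for non-local means denoising; identity here for brevity."""
    return signal

rng = np.random.default_rng(2)
image = rng.uniform(0, 255, size=(32, 32))   # image signal to be processed

low = box_lowpass(image)                     # low-frequency image signal
high = image - low                           # high-frequency image signal

# Denoise each band, then sum to recover the denoised image signal.
denoised = placeholder_denoise(low) + placeholder_denoise(high)
```

With identity denoisers the decomposition is exactly invertible, which is what makes the band-wise scheme safe: any energy the denoiser leaves untouched is reassembled without loss.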
The second aspect of the embodiments of the invention discloses a fat-thin recognition device based on a face image, comprising:
a marking unit for marking each acquired face image sample with its fat-thin type;
a training unit for forming a training set from a plurality of face image samples marked with fat-thin types, and training a convolutional neural network with the training set to obtain a classification model;
an acquisition unit for acquiring a face image of an applicant;
a denoising unit for denoising the face image to obtain a face image to be detected;
and an identification unit for inputting the face image to be detected into the classification model to obtain a classification result of the face image to be detected, the classification result describing the fat-thin type of the applicant.
A third aspect of the embodiment of the present invention discloses an electronic device, including:
A processor;
and a memory storing computer-readable instructions which, when executed by the processor, implement the face image-based fat-thin recognition method disclosed in the first aspect of the embodiments of the invention.
A fourth aspect of the embodiment of the present invention discloses a computer-readable storage medium storing a computer program, where the computer program causes a computer to execute the fat-thin recognition method based on a face image disclosed in the first aspect of the embodiment of the present invention.
The technical scheme provided by the embodiment of the invention can comprise the following beneficial effects:
The fat-thin recognition method based on a face image provided by the invention comprises the following steps: marking each acquired face image sample with its fat-thin type; forming a training set from a plurality of marked face image samples; training a convolutional neural network with the training set to obtain a classification model; acquiring a face image of an applicant; denoising the face image to obtain a face image to be detected; and inputting the face image to be detected into the classification model to obtain a classification result describing the applicant's fat-thin type.
According to the method, by constructing the classification model, the fat-thin type of the applicant can be identified automatically from the applicant's face image, which assists the underwriter in checking, saves labor cost, and improves the efficiency of application auditing.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention as claimed.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
Fig. 1 is a schematic structural diagram of a fat-thin recognition device based on a face image according to an embodiment of the present invention;
fig. 2 is a schematic flow chart of a fat-thin recognition method based on a face image according to an embodiment of the invention;
FIG. 3 is a flow chart of another face image-based fat-thin recognition method according to an embodiment of the present invention;
FIG. 4 is a flow chart of another fat-lean recognition method based on face images according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of another fat-thin recognition device based on a face image according to an embodiment of the present invention;
Fig. 6 is a schematic structural diagram of another fat-thin recognition device based on a face image according to an embodiment of the present invention;
fig. 7 is a schematic structural diagram of another fat-thin recognition device based on a face image according to an embodiment of the present invention.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary examples do not represent all implementations consistent with the invention. Rather, they are merely examples of apparatus and methods consistent with aspects of the invention as detailed in the accompanying claims.
Example 1
The implementation environment of the invention can be an electronic device such as a smart phone, a tablet computer, or a desktop computer. The electronic device may receive the face image of the applicant uploaded by the underwriter, or may receive a face image uploaded by the applicant; this is not specifically limited herein.
Fig. 1 is a schematic structural diagram of a fat-thin recognition device based on a face image according to an embodiment of the present invention. The apparatus 100 may be the electronic device described above. As shown in fig. 1, the apparatus 100 may include one or more of the following components: a processing component 102, a memory 104, a power supply component 106, a multimedia component 108, an audio component 110, a sensor component 114, and a communication component 116.
The processing component 102 generally controls overall operation of the device 100, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations, among others. The processing component 102 may include one or more processors 118 to execute instructions to perform all or part of the steps of the methods described below. Further, the processing component 102 can include one or more modules to facilitate interactions between the processing component 102 and other components. For example, the processing component 102 may include a multimedia module for facilitating interaction between the multimedia component 108 and the processing component 102.
The memory 104 is configured to store various types of data to support operations at the apparatus 100. Examples of such data include instructions for any application or method operating on the device 100. The memory 104 may be implemented by any type of volatile or non-volatile memory device or combination thereof, such as Static Random Access Memory (SRAM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Erasable Programmable Read-Only Memory (EPROM), Programmable Read-Only Memory (PROM), Read-Only Memory (ROM), magnetic memory, flash memory, a magnetic disk, or an optical disk. Also stored in the memory 104 are one or more modules configured to be executed by the one or more processors 118 to perform all or part of the steps in the methods shown below.
The power supply assembly 106 provides power to the various components of the device 100. The power components 106 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for the device 100.
The multimedia component 108 includes a screen that provides an output interface between the device 100 and the user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a touch panel. If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensor may sense not only the boundary of a touch or slide action, but also the duration and pressure associated with the touch or slide operation. The screen may also include an Organic Light-Emitting Diode (OLED) display.
The audio component 110 is configured to output and/or input audio signals. For example, the audio component 110 includes a Microphone (MIC) configured to receive external audio signals when the device 100 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may be further stored in the memory 104 or transmitted via the communication component 116. In some embodiments, the audio component 110 further comprises a speaker for outputting audio signals.
The sensor assembly 114 includes one or more sensors for providing status assessment of various aspects of the device 100. For example, the sensor assembly 114 may detect the on/off state of the device 100 and the relative positioning of its components; it may also detect a change in position of the device 100 or one of its components, and a change in the temperature of the device 100. In some embodiments, the sensor assembly 114 may also include a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 116 is configured to facilitate wired or wireless communication between the apparatus 100 and other devices. The device 100 may access a wireless network based on a communication standard, such as WiFi (Wireless Fidelity). In an embodiment of the present invention, the communication component 116 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In an embodiment of the present invention, the communication component 116 further includes a Near Field Communication (NFC) module for facilitating short-range communication. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID), Infrared Data Association (IrDA), Ultra Wideband (UWB), Bluetooth, and other technologies.
In an exemplary embodiment, the apparatus 100 may be implemented by one or more Application Specific Integrated Circuits (ASICs), digital signal processors, digital signal processing devices, programmable logic devices, field programmable gate arrays, controllers, microcontrollers, microprocessors, or other electronic components for performing the methods described below.
Example two
Referring to fig. 2, fig. 2 is a flow chart of a fat-lean recognition method based on a face image according to an embodiment of the invention. The fat-lean recognition method based on the face image as shown in fig. 2 may include the steps of:
201. Mark each acquired face image sample with its fat-thin type.
In the embodiment of the invention, the fat-thin type can be over-fat, normal, or over-thin. The fat-thin type marking then comprises: marking each individual face image sample as over-fat, normal, or over-thin.
202. A training set is composed of a plurality of face image samples of marked fat-lean type.
203. And training the convolutional neural network by using the training set to obtain a classification model.
In the embodiment of the present invention, after step 203 is performed, the process may jump directly to step 206, or step 204 may be performed first. In this embodiment, step 204 follows step 203; in other possible embodiments, step 206 may follow step 203 directly. The embodiments of the present invention are not specifically limited in this respect.
As an alternative embodiment, the following steps may also be performed before performing step 203: a convolutional neural network is constructed that may include, in order, an input layer, a plurality of convolutional layers, an excitation layer, at least one fully-connected layer, and an output layer.
Based on this, training the convolutional neural network with the training set in step 203 may include: training the convolutional neural network with the training set so as to enlarge the fat-thin feature differences between face image samples bearing fat-thin labels of different categories, and to reduce the fat-thin feature differences between face image samples bearing fat-thin labels of the same category.
By implementing the embodiment, the generalization capability and the recognition accuracy of the classification model can be improved based on the strong computing capability of the convolutional neural network.
204. And acquiring a face image of the applicant.
As an optional implementation manner, after the face image of the applicant is acquired, the face image may be compared with pre-stored face images to identify whether it contains a face; if so, step 205 is performed; otherwise, the process ends.
205. And denoising the face image to obtain the face image to be detected.
As an alternative embodiment, step 205 may comprise the following steps: acquiring an image signal to be processed from the face image; performing low-pass filtering on the image signal to be processed to obtain a low-frequency image signal; calculating the difference between the image signal to be processed and the low-frequency image signal to obtain a high-frequency image signal; denoising the low-frequency image signal and the high-frequency image signal with a non-local means method to obtain a denoised low-frequency image signal and a denoised high-frequency image signal; summing the denoised low-frequency and high-frequency image signals to obtain a denoised image signal; and obtaining the face image to be detected from the denoised image signal.
By implementing this embodiment, the face image is split into a low-frequency image signal and a high-frequency image signal which are denoised separately, so the denoising process is simple, which improves the efficiency of insurance auditing.
206. And inputting the face image to be tested into a classification model to obtain a classification result of the face image to be tested, wherein the classification result is used for describing the fat and thin type of the applicant.
It should be noted that the classification model classifies the fat-thin type of the face image according to the features of the face image to be detected. A specific implementation can adopt a softmax classifier: the softmax classifier computes a probability distribution over the fat-thin types, and the fat-thin type to which the face image belongs is determined from that distribution.
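As a sketch, a softmax over three class scores produces exactly such a probability distribution. The scores below are arbitrary illustrative numbers, and the class ordering is an assumption:

```python
import numpy as np

def softmax(scores):
    """Convert raw class scores into a probability distribution."""
    z = np.asarray(scores, dtype=float)
    z -= z.max()                 # numerical stability
    e = np.exp(z)
    return e / e.sum()

# Illustrative scores for (over-thin, normal, over-fat).
probs = softmax([0.5, 2.0, -1.0])
predicted = int(np.argmax(probs))   # index of the most likely fat-thin type
```
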
By implementing the method described in fig. 2 — marking each acquired face image sample with its fat-thin type; forming a training set from a plurality of marked face image samples; training a convolutional neural network with the training set to obtain a classification model; acquiring a face image of the applicant; denoising the face image to obtain a face image to be detected; and inputting the face image to be detected into the classification model to obtain a classification result describing the applicant's fat-thin type — the fat-thin type of the applicant can be identified automatically from the applicant's face image. This assists the underwriter in verifying the applicant's body fat-thin condition, saves labor cost, and improves the efficiency of application auditing.
Example III
Referring to fig. 3, fig. 3 is a flow chart of another fat-lean recognition method based on face images according to an embodiment of the present invention. As shown in fig. 3, the fat-thin recognition method based on the face image may include the following steps:
301 to 302. Steps 301 to 302 are the same as steps 201 to 202 described in the second embodiment, and are not described again in the embodiments of the present invention.
303. Judge whether rare samples exist in the training set. If yes, go to step 304; otherwise, the process ends. A rare sample can be of any fat-thin type; "rare" means that the number of samples of that fat-thin type is smaller than the number of samples of the other types.
304. And increasing the number of samples of the category corresponding to the rare samples to obtain a uniform training set.
In the embodiment of the invention, whether rare samples exist in the training set is judged; if so, the number of rare samples is increased according to their types, so that the distribution between the rare sample set and the normal sample set is evened out, yielding a uniform training set.
305. And training the convolutional neural network by using the uniform training set to obtain a classification model.
By implementing steps 303 to 305, the sample data of the training set can be made uniform, and training the classification model on the uniform training set improves its recognition accuracy.
306-308. Steps 306 to 308 are the same as steps 204 to 206 described in the second embodiment, and the embodiments of the present invention will not be repeated.
309. Judge whether the classification result meets the preset target application condition matched with the body fat-thin condition. If yes, go to step 310; otherwise, go to step 311.
In the embodiment of the invention, the preset target application condition matched with the body fat-thin condition may be the classification result corresponding to any fat-thin type, set by developers or business personnel according to actual needs. For example, the classification result describing the applicant as normal fat-thin may be set as the target application condition.
Thus, as an alternative embodiment, step 309 may comprise: judging whether the classification result describes the applicant as normal fat-thin; if yes, determining that the classification result meets the preset target application condition matched with the body fat-thin condition; otherwise, determining that the classification result does not meet the preset target application condition.
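The judgment in this alternative embodiment reduces to a simple rule, sketched below. It assumes the "normal" fat-thin class is the preset target application condition; the class label and prompt strings are illustrative, not taken from the patent.

```python
def review_fat_thin(classification_result: str) -> str:
    """Steps 309-311: if the classification result describes normal
    fat-thin, output the first prompt (review passed); otherwise output
    the second prompt (review not passed)."""
    if classification_result == "normal":
        return "first prompt: body fat-thin condition passed the review"
    return "second prompt: body fat-thin condition did not pass the review"
```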
310. Output first prompt information, the first prompt information describing that the body fat-thin condition of the applicant has passed the review.
311. Output second prompt information, the second prompt information describing that the body fat-thin condition of the applicant has not passed the review.
By implementing steps 309 to 311, whether the fat-thin type of the applicant meets the target application condition can be identified automatically by judging whether the classification result meets the preset target application condition matched with the body fat-thin condition, and whether the review passes can be prompted, which improves the efficiency of insurance-application review.
Therefore, by implementing the method described in fig. 3, the underwriter can be assisted in reviewing the body fat-thin condition of the applicant, labor cost can be saved, and insurance-application review efficiency is improved.
In addition, when rare samples exist in the training set, the number of rare samples is increased according to their fat-thin types so that the rare sample set and the normal sample set are evenly distributed, and the resulting uniform training set is used to train the classification model, which improves the recognition accuracy of the classification model.
In addition, by judging whether the classification result meets the preset target application condition matched with the body fat-thin condition, whether the fat-thin type of the applicant meets the target application condition is recognized automatically and whether the review passes is prompted, which improves the efficiency of insurance-application review.
Example four
Referring to fig. 4, fig. 4 is a flow chart of another fat-thin recognition method based on a face image according to an embodiment of the invention. As shown in fig. 4, the fat-thin recognition method based on the face image may include the following steps:
401 to 402. Steps 401 to 402 are the same as steps 201 to 202 described in the second embodiment and are not repeated here.
As an alternative embodiment, the collected face images may also be preprocessed before step 401 is performed. Specifically, each collected face image is scaled to a face image of a preset size and converted into a grayscale image. The preset size may be set in advance by developers according to actual situations or user requirements; for example, it may be any size between 48×48 pixels and 256×256 pixels, and is preferably 90×90 pixels.
By implementing this embodiment, the amount of data to be processed can be reduced, and model overfitting can be prevented.
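This preprocessing (scaling to 90×90 and grayscale conversion) might be sketched as follows. Nearest-neighbour sampling and channel averaging are used as dependency-free stand-ins for a library resize and colour conversion; the normalisation to [0, 1] is an added assumption common in CNN pipelines.

```python
import numpy as np

def preprocess_face(image: np.ndarray, size: int = 90) -> np.ndarray:
    """Scale an H x W x 3 face image to size x size and convert it to
    grayscale (90 x 90 assumed as the preferred preset size)."""
    h, w, _ = image.shape
    rows = np.arange(size) * h // size          # nearest-neighbour row indices
    cols = np.arange(size) * w // size          # nearest-neighbour column indices
    resized = image[rows][:, cols]              # size x size x 3
    gray = resized.mean(axis=2)                 # simple luminance average
    return gray.astype(np.float32) / 255.0      # normalised grayscale image
```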
403. Input the face image samples in the training set into the convolutional neural network and output the actual output result of the face image samples.
In an embodiment of the present invention, the training process of the classification model may include two phases: a forward propagation phase and a back propagation phase. The forward propagation phase is described in step 403, and a specific embodiment may be: sending the face image samples with known fat-thin type marks in the training set into the convolutional neural network, and performing convolution calculation on the face image samples with a plurality of convolution kernels in the convolutional neural network to obtain feature image samples; and performing feature mapping on the feature image samples to obtain the actual output result of the face image samples. The back propagation phase comprises steps 404 to 407.
The actual output result is the predicted fat-thin type distribution probability of the face image sample.
404. Obtain the ideal output result according to the fat-thin type corresponding to the face image sample.
The ideal output result is the true fat-thin type distribution probability of the face image sample.
405. Calculate the difference between the actual output result and the ideal output result.
It should be noted that the difference between the actual output result and the ideal output result is used to reversely adjust all weights in the convolutional neural network, and both results may be represented by probability values. Thus, the function used to calculate the difference between the actual output result and the ideal output result may be the cross-entropy loss:
Loss(p, q) = -∑_x p(x) · log q(x);
where x is a feature value of the feature image sample, p(x) is the ideal output result, and q(x) is the actual output result.
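This difference function is the standard cross-entropy between the ideal distribution p and the actual distribution q, and can be computed directly; the small `eps` guard against log(0) is a common convention assumed here, not stated in the patent.

```python
import math

def cross_entropy(p, q, eps=1e-12):
    """Loss(p, q) = -sum_x p(x) * log q(x), where p is the ideal output
    (true fat-thin distribution) and q the actual output (prediction)."""
    return -sum(px * math.log(qx + eps) for px, qx in zip(p, q))
```

A confident correct prediction yields a small loss, and a confident wrong prediction a large one, which is exactly the difference signal back-propagated in step 406.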
406. Perform back propagation according to the difference value to correct the parameter weights of the convolutional neural network.
407. Judge whether the difference value meets a preset condition. If yes, go to step 408; otherwise, return to step 403.
By implementing steps 403 to 407, the model is trained with a back propagation algorithm, and all weights in the convolutional neural network are continuously adjusted in the reverse direction, which improves the recognition accuracy of the model.
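The loop of steps 403 to 407 (forward pass, difference calculation, back propagation, repeat until the difference meets a preset condition) can be sketched with a single softmax layer standing in for the full convolutional neural network, an assumption made only to keep the example dependency-free; all names and hyperparameters are illustrative.

```python
import numpy as np

def train_until_converged(X, y, num_classes, lr=1.0, tol=0.05, max_iter=20000):
    """Iterate forward pass -> cross-entropy difference -> weight
    correction by the gradient, until the difference is below tol."""
    rng = np.random.default_rng(0)
    W = rng.normal(scale=0.01, size=(X.shape[1], num_classes))
    ideal = np.eye(num_classes)[y]                  # ideal output p
    loss = float("inf")
    for _ in range(max_iter):
        logits = X @ W                              # forward propagation
        logits = logits - logits.max(axis=1, keepdims=True)
        q = np.exp(logits)
        q /= q.sum(axis=1, keepdims=True)           # actual output q
        loss = -np.mean(np.sum(ideal * np.log(q + 1e-12), axis=1))
        if loss < tol:                              # preset condition met
            break
        W -= lr * (X.T @ (q - ideal)) / len(X)      # back-propagate the difference
    return W, loss
```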
408 to 409. Steps 408 to 409 are the same as steps 204 to 205 described in the second embodiment and are not repeated here.
410. Input the face image to be detected into the convolution layer, and perform feature extraction on the face image to be detected with a plurality of convolution kernels in the convolution layer to obtain a feature image.
It is understood that the convolution layer, the excitation layer, and the fully-connected layer all belong to network layers of the convolutional neural network. The embodiment of the invention utilizes the strong feature extraction capability of the convolutional neural network to extract the fat-thin features in the face image to be detected, which ensures the accuracy of the final classification result.
411. Input the feature image into the excitation layer, and perform nonlinear spatial mapping on the feature image in the excitation layer to obtain a feature vector.
412. Input the feature vector into the fully-connected layer, and perform feature integration on the feature vector in the fully-connected layer to obtain the classification result of the face image to be detected.
In the embodiment of the present invention, a sigmoid activation function is preferably used in the fully-connected layer; other activation functions may also be used, and the present invention does not limit which activation function is adopted.
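Steps 410 to 412 can be sketched in miniature as follows. ReLU stands in for the excitation layer's nonlinear mapping and softmax for the output activation of the 3-class case (the patent prefers sigmoid); kernel and weight shapes are illustrative assumptions.

```python
import numpy as np

def classify_face(img, kernels, fc_weights):
    """Convolution layer (feature extraction with several kernels) ->
    excitation layer (nonlinear mapping) -> fully-connected layer
    (feature integration into a fat-thin class distribution)."""
    k = kernels.shape[-1]                           # square kernel size
    h, w = img.shape[0] - k + 1, img.shape[1] - k + 1
    maps = np.empty((len(kernels), h, w))
    for i, K in enumerate(kernels):                 # convolution layer
        for r in range(h):
            for c in range(w):
                maps[i, r, c] = np.sum(img[r:r + k, c:c + k] * K)
    features = np.maximum(maps, 0.0).ravel()        # excitation layer (ReLU)
    logits = features @ fc_weights                  # fully-connected layer
    probs = np.exp(logits - logits.max())
    return probs / probs.sum()                      # class distribution
```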
Therefore, by implementing the method described in fig. 4, the underwriter can be assisted in reviewing the body fat-thin condition of the applicant, labor cost can be saved, and insurance-application review efficiency is improved.
In addition, the model is trained with a back propagation algorithm during training, and all weights in the convolutional neural network are continuously adjusted in the reverse direction, which improves the recognition accuracy of the model.
In addition, the strong feature extraction capability of the convolutional neural network can be utilized to extract the fat-thin features in the face image to be detected, which ensures the accuracy of the final classification result.
Example five
Referring to fig. 5, fig. 5 is a schematic structural diagram of another fat-thin recognition device based on a face image according to an embodiment of the present invention. As shown in fig. 5, the fat-thin recognition device based on the face image may include: a marking unit 501, a training unit 502, an acquisition unit 503, a denoising unit 504, and an identification unit 505, wherein,
The marking unit 501 is configured to mark the fat-thin type of each acquired face image sample.
The training unit 502 is configured to form a training set from a plurality of face image samples of marked fat-thin type, and to train the convolutional neural network with the training set to obtain a classification model.
The acquiring unit 503 is configured to acquire a face image of the applicant.
The denoising unit 504 is configured to denoise the face image to obtain a face image to be detected.
The recognition unit 505 is configured to input the face image to be detected into the classification model to obtain a classification result of the face image to be detected, the classification result being used for describing the fat-thin type of the applicant.
As an optional implementation manner, the fat-thin recognition device based on a face image shown in fig. 5 may further include a network construction unit, not shown, configured to construct a convolutional neural network before the training unit 502 trains the convolutional neural network with the training set to obtain the classification model, where the convolutional neural network may sequentially include an input layer, a plurality of convolutional layers, an excitation layer, at least one fully-connected layer, and an output layer.
Accordingly, the training unit 502 is configured to train the convolutional neural network by using the training set to obtain the classification model specifically may be:
The training unit 502 is configured to train the convolutional neural network with the training set so as to enlarge the fat-thin feature differences between face image samples carrying fat-thin type labels of different categories, and to reduce the fat-thin feature differences between face image samples carrying fat-thin type labels of the same category.
By implementing the embodiment, the generalization capability and the recognition accuracy of the classification model can be improved based on the strong computing capability of the convolutional neural network.
As an optional implementation manner, the marking unit 501 may also preprocess each collected face image before marking the fat-thin type of each collected face image sample.
Further optionally, the manner in which the marking unit 501 preprocesses the collected face images may specifically be: the marking unit 501 scales each acquired face image to a face image of a preset size and converts it into a grayscale image.
The preset size may be set in advance by developers according to actual situations or user requirements; for example, it may be any size between 48×48 pixels and 256×256 pixels, and is preferably 90×90 pixels.
By implementing this embodiment, the amount of data to be processed can be reduced, and model overfitting can be prevented.
By implementing the device shown in fig. 5, the fat-thin type of each acquired face image sample is marked; a training set is formed from a plurality of face image samples of marked fat-thin type; the convolutional neural network is trained with the training set to obtain a classification model; a face image of the applicant is acquired and denoised to obtain a face image to be detected; and the face image to be detected is input into the classification model to obtain a classification result describing the fat-thin type of the applicant. Thus, by constructing the classification model, the fat-thin type of the applicant can be recognized automatically from the face image of the applicant, the underwriter can be assisted in reviewing the body fat-thin condition of the applicant, labor cost can be saved, and insurance-application review efficiency can be improved.
Example six
Referring to fig. 6, fig. 6 is a schematic structural diagram of another fat-thin recognition device based on a face image according to an embodiment of the present invention. The fat-lean recognition device based on the face image shown in fig. 6 is obtained by optimizing the fat-lean recognition device based on the face image shown in fig. 5. Compared with the fat-thin recognition device based on the face image shown in fig. 5, the fat-thin recognition device based on the face image shown in fig. 6 may further include: a judging unit 506 and a prompting unit 507, wherein,
And a judging unit 506, configured to judge whether the classification result meets a preset target application condition that matches with the fat and thin condition of the body.
The prompting unit 507 is configured to output first prompt information when the judging unit 506 determines that the classification result meets the target application condition, the first prompt information describing that the body fat-thin condition of the applicant has passed the review.
The prompting unit 507 is further configured to output second prompt information when the judging unit 506 determines that the classification result does not meet the target application condition, the second prompt information describing that the body fat-thin condition of the applicant has not passed the review.
As an alternative embodiment, the fat-thin type may specifically be over-fat, normal fat-thin, or over-thin. Then, the manner in which the judging unit 506 judges whether the classification result meets the preset target application condition matched with the body fat-thin condition may specifically be:
the judging unit 506 is configured to judge whether the classification result describes the applicant as normal fat-thin; when the classification result describes the applicant as normal fat-thin, determine that the classification result meets the preset target application condition matched with the body fat-thin condition; and when the classification result does not describe the applicant as normal fat-thin, determine that the classification result does not meet the preset target application condition.
By implementing this embodiment, whether the fat-thin type of the applicant meets the target application condition can be recognized automatically by judging whether the classification result meets the preset target application condition matched with the body fat-thin condition, and whether the review passes can be prompted, which improves the efficiency of insurance-application review.
As an optional implementation manner, before training the convolutional neural network with the training set to obtain the classification model, the training unit 502 further judges whether rare samples exist in the training set, and, when rare samples exist, increases the number of samples of the category corresponding to the rare samples to obtain a uniform training set.
Further, the training unit 502 trains the convolutional neural network to obtain the classification model specifically by training the convolutional neural network with the uniform training set.
According to this embodiment, the sample data of the training set are balanced, and the classification model is trained on the balanced training set, which improves the recognition accuracy of the classification model.
Therefore, the device shown in fig. 6 can assist the underwriter in reviewing the body fat-thin condition of the applicant, save labor cost, and improve insurance-application review efficiency.
In addition, when rare samples exist in the training set, the number of rare samples is increased according to their fat-thin types so that the rare sample set and the normal sample set are evenly distributed, and the resulting uniform training set is used to train the classification model, which improves the recognition accuracy of the classification model.
In addition, by judging whether the classification result meets the preset target application condition matched with the body fat-thin condition, whether the fat-thin type of the applicant meets the target application condition is recognized automatically and whether the review passes is prompted, which improves the efficiency of insurance-application review.
Example seven
Referring to fig. 7, fig. 7 is a schematic structural diagram of another fat-thin recognition device based on a face image according to an embodiment of the present invention. The fat-lean recognition device based on the face image shown in fig. 7 is obtained by optimizing the fat-lean recognition device based on the face image shown in fig. 6. In comparison with the fat-thin recognition device based on a face image shown in fig. 6, in the fat-thin recognition device based on a face image shown in fig. 7, the classification model includes a convolution layer, an excitation layer, and a full connection layer, and the recognition unit 505 may include: a convolution subunit 5051, an excitation subunit 5052, and an identification subunit 5053, wherein,
The convolution subunit 5051 is configured to input the face image to be detected into the convolution layer, and perform feature extraction on the face image to be detected with a plurality of convolution kernels in the convolution layer to obtain a feature image.
The excitation subunit 5052 is configured to input the feature image into an excitation layer, and perform nonlinear spatial mapping on the feature image in the excitation layer to obtain a feature vector.
The recognition subunit 5053 is configured to input the feature vector into the full-connection layer, and perform feature integration on the feature vector in the full-connection layer to obtain a classification result of the face image to be detected.
As an optional implementation manner, the training unit 502 is configured to train the convolutional neural network by using the training set to obtain the classification model specifically may be:
The training unit 502 is configured to input the face image samples in the training set into a convolutional neural network, and output an actual output result of the face image samples; obtaining an ideal output result according to the fat-thin type corresponding to the face image sample; calculating the difference between the actual output result and the ideal output result, and carrying out back propagation according to the difference so as to correct the parameter weight of the convolutional neural network; repeating the steps until the difference value meets the preset condition, and obtaining the classification model.
Further, as an optional implementation manner, the manner in which the training unit 502 inputs the face image samples in the training set into the convolutional neural network and outputs the actual output result of the face image samples may specifically be: the training unit 502 sends the face image samples with known fat-thin type marks in the training set into the convolutional neural network, performs convolution calculation on the face image samples with a plurality of convolution kernels in the convolutional neural network to obtain feature image samples, and performs feature mapping on the feature image samples to obtain the actual output result of the face image samples.
By implementing this embodiment, the model is trained with a back propagation algorithm, and all weights in the convolutional neural network are continuously adjusted in the reverse direction, which improves the recognition accuracy of the model.
As an optional implementation manner, the manner in which the denoising unit 504 denoises the face image to obtain the face image to be detected may specifically be:
the denoising unit 504 is configured to acquire an image signal to be processed of the face image; perform low-pass filtering on the image signal to be processed to obtain a low-frequency image signal; calculate the difference between the image signal to be processed and the low-frequency image signal to obtain a high-frequency image signal; perform denoising filtering on the low-frequency image signal and the high-frequency image signal with a non-local means method to obtain a denoised low-frequency image signal and a denoised high-frequency image signal; sum the denoised low-frequency image signal and the denoised high-frequency image signal to obtain a denoised image signal; and obtain the face image to be detected from the denoised image signal.
According to this embodiment, the face image is split into a low-frequency image signal and a high-frequency image signal that are denoised separately, so the denoising process is simple, which improves insurance-application review efficiency.
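The split-denoise-recombine pipeline of the denoising unit 504 can be sketched as follows. A simple mean filter stands in both for the low-pass step and for the non-local means step, which the patent specifies but which is too long to reproduce here; the structure of the pipeline is preserved even under these stand-ins.

```python
import numpy as np

def box_blur(img, r=2):
    """Simple mean filter used as the low-pass step (and, in this sketch,
    as a stand-in for the non-local means denoiser)."""
    pad = np.pad(img, r, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            out += pad[r + dy:r + dy + img.shape[0], r + dx:r + dx + img.shape[1]]
    return out / (2 * r + 1) ** 2

def denoise_face(img):
    """Split the image signal into low- and high-frequency parts,
    denoise each, and sum them back (claim 4 pipeline)."""
    low = box_blur(img)                 # low-pass filtering
    high = img - low                    # high-frequency residual
    low_dn = box_blur(low, r=1)         # stand-in for NLM on the low band
    high_dn = box_blur(high, r=1)       # stand-in for NLM on the high band
    return low_dn + high_dn             # recombined denoised image signal
```

In practice the two band-denoising calls would use a non-local means implementation rather than a mean filter.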
Therefore, the device shown in fig. 7 can assist the underwriter in reviewing the body fat-thin condition of the applicant, save labor cost, and improve insurance-application review efficiency.
In addition, the face image can be split into a low-frequency image signal and a high-frequency image signal that are denoised separately, so the denoising process is simple, which improves insurance-application review efficiency; and the model can be trained with a back propagation algorithm, with all weights in the convolutional neural network continuously adjusted in the reverse direction, which improves the recognition accuracy of the model.
In addition, the strong feature extraction capability of the convolutional neural network can be utilized to extract the fat-thin features in the face image to be detected, which ensures the accuracy of the final classification result.
The invention also provides an electronic device, comprising:
A processor;
and a memory having stored thereon computer readable instructions which, when executed by the processor, implement the face image-based fat-thin recognition method as described above.
The electronic device may be the apparatus 100 shown in fig. 1.
In an exemplary embodiment, the present invention also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements a face image-based fat-thin recognition method as previously shown.
It is to be understood that the invention is not limited to the precise arrangements and instrumentalities shown in the drawings, which have been described above, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the invention is limited only by the appended claims.

Claims (7)

1. The fat-thin recognition method based on the face image is characterized by comprising the following steps of:
marking the fat-thin type of each acquired face image sample, wherein the fat-thin type is over-fat, normal fat-thin or over-thin;
a training set is formed by a plurality of face image samples with marked fat and thin types;
training the convolutional neural network by using the training set to obtain a classification model; acquiring a face image of an applicant;
Denoising the face image to obtain a face image to be detected;
Inputting the face image to be detected into the classification model to obtain a classification result of the face image to be detected, wherein the classification result is used for describing the fat-thin type of the applicant;
judging whether the classification result is used for describing the normal fat and thin of the applicant;
if the classification result is used for describing that the applicant is normal fat and thin, judging that the classification result meets a preset target application condition matched with the body fat and thin condition;
If the classification result is not used for describing that the applicant is normal fat or thin, judging that the classification result does not meet a preset target application condition matched with the fat or thin condition of the body;
if the target application condition is met, outputting first prompt information, wherein the first prompt information is used for describing that the body fat and thin condition of the applicant is checked and passed;
If the target application condition is not met, outputting second prompt information, wherein the second prompt information is used for describing that the body fat and thin condition of the applicant is checked and not passed;
the training the convolutional neural network by using the training set to obtain a classification model comprises the following steps:
Inputting the face image sample in the training set into a convolutional neural network, and outputting an actual output result of the face image sample;
Obtaining an ideal output result according to the fat-thin type corresponding to the face image sample;
calculating a difference value between the actual output result and the ideal output result;
and carrying out back propagation according to the difference value to correct the parameter weight of the convolutional neural network until the difference value meets a preset condition, and obtaining a classification model.
2. The method of claim 1, wherein the training of the convolutional neural network using the training set further comprises, prior to obtaining the classification model:
judging whether rare samples exist in the training set or not;
If the sparse samples exist, increasing the number of the samples of the category corresponding to the sparse samples to obtain a uniform training set;
And training the convolutional neural network by using the training set to obtain a classification model, wherein the training set comprises the following steps: and training the convolutional neural network by using the uniform training set to obtain a classification model.
3. The method of claim 1, wherein the classification model comprises a convolution layer, an excitation layer, and a full connection layer; inputting the face image to be detected into the classification model to obtain a classification result of the face image to be detected, wherein the method comprises the following steps:
inputting the face image to be detected into the convolution layer, and adopting a plurality of convolution cores in the convolution layer to extract the characteristics of the face image to be detected so as to obtain a characteristic image;
inputting the characteristic image into the excitation layer, and carrying out nonlinear space mapping on the characteristic image in the excitation layer to obtain a characteristic vector;
And inputting the feature vector into the full-connection layer, and carrying out feature integration on the feature vector in the full-connection layer to obtain a classification result of the face image to be detected.
4. A method according to any one of claims 1 to 3, wherein denoising the face image to obtain a face image to be detected comprises:
Acquiring an image signal to be processed of the face image;
performing low-pass filtering processing on the image signal to be processed to obtain a low-frequency image signal;
Calculating the difference value between the image signal to be processed and the low-frequency image signal to obtain a high-frequency image signal;
Carrying out denoising filtering treatment on the low-frequency image signal and the high-frequency image signal by adopting a non-local mean value method to obtain a denoising low-frequency image signal and a denoising high-frequency image signal;
summing the denoising low-frequency image signal and the denoising high-frequency image signal to obtain a denoising image signal;
And obtaining a face image to be detected according to the denoising image signal.
5. A facial image-based fat-lean recognition device, the device comprising:
The marking unit is used for marking the fat-thin type of each acquired face image sample, wherein the fat-thin type is over-fat, normal fat-thin or over-thin;
The training unit is used for forming a training set by a plurality of face image samples of marked fat-thin type; training the convolutional neural network by using the training set to obtain a classification model;
The acquisition unit is used for acquiring the face image of the applicant;
the denoising unit is used for denoising the face image to obtain a face image to be detected;
the identification unit is used for inputting the face image to be detected into the classification model to obtain a classification result of the face image to be detected, wherein the classification result is used for describing the fat-thin type of the applicant, judging whether the classification result is used for describing the normal fat-thin of the applicant, if the classification result is used for describing the normal fat-thin of the applicant, judging that the classification result meets a preset target application condition matched with the body fat-thin condition, if the classification result is not used for describing the normal fat-thin of the applicant, judging that the classification result does not meet the preset target application condition matched with the body fat-thin condition, if the classification result meets the target application condition, outputting first prompt information, wherein the first prompt information is used for describing that the body fat-thin condition of the applicant is checked and passed, and if the classification result is not used for describing that the body fat-thin condition of the applicant is checked and not passed;
the training the convolutional neural network by using the training set to obtain a classification model comprises the following steps:
Inputting the face image sample in the training set into a convolutional neural network, and outputting an actual output result of the face image sample;
Obtaining an ideal output result according to the fat-thin type corresponding to the face image sample;
calculating a difference value between the actual output result and the ideal output result;
and carrying out back propagation according to the difference value to correct the parameter weight of the convolutional neural network until the difference value meets a preset condition, and obtaining a classification model.
6. An electronic device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of the facial image-based fat-thin recognition method according to any one of claims 1-4.
7. A computer-readable storage medium storing a computer program for causing a computer to execute the face image-based fat-thin recognition method according to any one of claims 1 to 4.
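Stripped of claim language, the identification unit in the claims above reduces to a short decision flow: classify the face image, check whether the result describes a normal fat-thin type, and emit a pass or fail prompt. A hedged Python sketch, where the model, label, and prompt strings are illustrative stand-ins rather than anything specified by the patent:

```python
def review_application(classify, face_image):
    """Decision flow of the identification unit (hypothetical names).

    `classify` stands in for the trained classification model; it returns
    the fat-thin type of the applicant as a label string.
    """
    result = classify(face_image)
    # target application condition: the fat-thin type is "normal"
    meets_condition = (result == "normal")
    if meets_condition:
        # first prompt information: check passed
        return "body fat-thin condition check passed"
    # second prompt information: check not passed
    return "body fat-thin condition check not passed"

# usage with a stub model that always predicts "normal"
print(review_application(lambda image: "normal", face_image=None))
# prints "body fat-thin condition check passed"
```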
CN201910636941.7A 2019-07-15 2019-07-15 Fat-lean recognition method and device based on face image and electronic equipment Active CN110472509B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910636941.7A CN110472509B (en) 2019-07-15 2019-07-15 Fat-lean recognition method and device based on face image and electronic equipment

Publications (2)

Publication Number Publication Date
CN110472509A CN110472509A (en) 2019-11-19
CN110472509B true CN110472509B (en) 2024-04-26

Family

ID=68508659

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910636941.7A Active CN110472509B (en) 2019-07-15 2019-07-15 Fat-lean recognition method and device based on face image and electronic equipment

Country Status (1)

Country Link
CN (1) CN110472509B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114449312A (en) * 2020-11-04 2022-05-06 深圳Tcl新技术有限公司 Video playing control method and device, terminal equipment and storage medium
CN116342968B (en) * 2023-01-18 2024-03-19 北京六律科技有限责任公司 Dual-channel face recognition method and device

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105512624A (en) * 2015-12-01 2016-04-20 天津中科智能识别产业技术研究院有限公司 Smile face recognition method and device for human face image
CN106682734A (en) * 2016-12-30 2017-05-17 中国科学院深圳先进技术研究院 Method and apparatus for increasing generalization capability of convolutional neural network
CN107871102A (en) * 2016-09-23 2018-04-03 北京眼神科技有限公司 A kind of method for detecting human face and device
CN108875590A (en) * 2018-05-25 2018-11-23 平安科技(深圳)有限公司 BMI prediction technique, device, computer equipment and storage medium
CN109376674A (en) * 2018-10-31 2019-02-22 北京小米移动软件有限公司 Method for detecting human face, device and storage medium
CN109376717A (en) * 2018-12-14 2019-02-22 中科软科技股份有限公司 Personal identification method, device, electronic equipment and the storage medium of face comparison

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2014068567A1 (en) * 2012-11-02 2014-05-08 Itzhak Wilf Method and system for predicting personality traits, capabilities and suggested interactions from images of a person

Similar Documents

Publication Publication Date Title
CN110807495B (en) Multi-label classification method, device, electronic equipment and storage medium
CN113159147B (en) Image recognition method and device based on neural network and electronic equipment
WO2019085329A1 (en) Recurrent neural network-based personal character analysis method, device, and storage medium
CN108399665A (en) Method for safety monitoring, device based on recognition of face and storage medium
CN105654952A (en) Electronic device, server, and method for outputting voice
CN107251033A (en) System and method for carrying out active user checking in online education
CN107209855A (en) Pass through fingerprint recognition certification user
CN105631406B (en) Image recognition processing method and device
CN108388878A (en) The method and apparatus of face for identification
CN110472509B (en) Fat-lean recognition method and device based on face image and electronic equipment
CN110363084A (en) A kind of class state detection method, device, storage medium and electronics
CN104346503A (en) Human face image based emotional health monitoring method and mobile phone
CN110991249A (en) Face detection method, face detection device, electronic equipment and medium
CN107633164A (en) Pay control method, device, computer installation and computer-readable recording medium
WO2019109530A1 (en) Emotion identification method, device, and a storage medium
CN110610125A (en) Ox face identification method, device, equipment and storage medium based on neural network
CN109522858A (en) Plant disease detection method, device and terminal device
CN112329586A (en) Client return visit method and device based on emotion recognition and computer equipment
CN110276405B (en) Method and apparatus for outputting information
CN111784665A (en) OCT image quality assessment method, system and device based on Fourier transform
CN112651333B (en) Silence living body detection method, silence living body detection device, terminal equipment and storage medium
WO2021047376A1 (en) Data processing method, data processing apparatus and related devices
CN116860964A (en) User portrait analysis method, device and server based on medical management label
CN116313127A (en) Decision support system based on pre-hospital first-aid big data
TWM586599U (en) System for analyzing skin texture and skin lesion using artificial intelligence cloud based platform

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant