CN112395979A - Image-based health state identification method, device, equipment and storage medium

Info

Publication number
CN112395979A
CN112395979A
Authority
CN
China
Prior art keywords
recognition model
image
training
model
initial
Prior art date
Legal status
Granted
Application number
CN202011286362.3A
Other languages
Chinese (zh)
Other versions
CN112395979B (en)
Inventor
陈昊 (Chen Hao)
Current Assignee
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Ping An Technology Shenzhen Co Ltd
Priority to CN202011286362.3A
Publication of CN112395979A
Priority to PCT/CN2021/090417 (WO2022105118A1)
Application granted
Publication of CN112395979B
Legal status: Active

Classifications

    • G06V 40/168: Recognition of human faces in image or video data; feature extraction, face representation
    • G06F 18/214: Pattern recognition; design or setup of recognition systems; generating training patterns, bootstrap methods (e.g. bagging or boosting)
    • G06N 3/045: Computing arrangements based on biological models; neural networks; combinations of networks
    • G06N 3/084: Neural network learning methods; backpropagation (e.g. using gradient descent)
    • G06V 40/161: Recognition of human faces in image or video data; detection, localisation, normalisation


Abstract

The application discloses an image-based health state recognition method, apparatus, device and storage medium in the field of artificial intelligence. The method comprises: training an initial feature recognition model on a first training sample set to obtain a face feature recognition model; collecting the training results of the face feature recognition model, training an initial relationship recognition model on those results to obtain a feature relationship recognition model, and generating a correction matrix from the feature relationship recognition model; acquiring an image to be recognized and recognizing it with the face feature recognition model to obtain the user's facial features; and correcting the facial features of the image to be recognized with the correction matrix and recognizing the user's health state from the correction result. The application also relates to blockchain technology: the image to be recognized may be stored in a blockchain. The method markedly improves the accuracy of face health state recognition, is highly adaptable, and is easy to deploy on a mobile terminal.

Description

Image-based health state identification method, device, equipment and storage medium
Technical Field
The application belongs to the technical field of artificial intelligence, and particularly relates to an image-based health state identification method and apparatus, a computer device and a storage medium.
Background
With the wide use of artificial intelligence in the financial field, conducting financial transactions on mobile terminals has become increasingly common. Such transactions generally involve a fairly strict approval process, in which determining whether the user is healthy and clear-minded is a precondition for ensuring that the operations the user performs have legal effect.
Current face health status recognition technology focuses on judging whether a client is healthy or awake based on the client's physiological signals, whole-body images, face images and video footage. For example, body parameters of the user, such as blood pressure, may be obtained from a portable device (such as a bracelet), and the user's health state judged comprehensively from those parameters; however, judging health state from body parameters requires extra equipment, often gives a poor user experience, hinders system integration, and imposes demanding usage conditions. Alternatively, the client's health state may be judged by shooting whole-body video or images of the user and analyzing them. Current image- or video-based approaches require footage of the user performing specific actions or postures; requiring the user to do so often gives a poor experience and cannot be adapted to mobile terminal services.
Disclosure of Invention
An embodiment of the application aims to provide an image-based health state identification method and apparatus, a computer device and a storage medium, so as to solve the technical problems that existing face health state identification technology offers a poor customer experience and cannot be adapted to mobile terminal services.
In order to solve the above technical problem, an embodiment of the present application provides an image-based health status identification method, which adopts the following technical solutions:
an image-based health status recognition method, comprising:
acquiring a first training sample set from a preset database, and training a preset initial feature recognition model through the first training sample set to obtain a face feature recognition model;
acquiring a training result of the face feature recognition model, training the initial relationship recognition model based on the training result to obtain a feature relationship recognition model, and generating a correction matrix based on the feature relationship recognition model;
acquiring an image to be recognized, and recognizing the image to be recognized through a face feature recognition model to obtain the face feature of a user in the image to be recognized;
and correcting the face characteristics of the image to be recognized based on the correction matrix, and recognizing the health state of the user according to the correction result.
Further, a first training sample set is obtained from a preset database, a preset initial feature recognition model is trained through the first training sample set, and a face feature recognition model is obtained, specifically including:
acquiring sample data from a preset database, and marking the sample data;
randomly combining the labeled sample data to obtain a first training sample set and a first verification data set;
importing the first training sample set into an initial feature recognition model for model training to obtain an initial face feature recognition model;
and verifying the initial face feature recognition model through the first verification data set, and outputting the face feature recognition model passing the verification.
Further, the steps of verifying the initial face feature recognition model through the first verification data set and outputting the face feature recognition model passing the verification specifically include:
importing the first verification data set into an initial human face feature recognition model, and outputting a recognition result;
based on the recognition result and the first preset standard result, performing fitting calculation by using a back propagation algorithm to obtain a recognition error;
and comparing the identification error with a first preset error threshold, if the identification error is greater than the first preset error threshold, iteratively updating the initial face feature identification model until the identification error is less than or equal to the first preset error threshold, and acquiring the face feature identification model passing verification.
Further, the method comprises the steps of collecting a training result of the face feature recognition model, training the initial relationship recognition model based on the training result to obtain a feature relationship recognition model, and generating a correction matrix based on the feature relationship recognition model, and specifically comprises the following steps:
acquiring a training result of the face feature recognition model, and labeling the training result to obtain a health state label;
counting the number of the health state labels, and calculating the joint distribution probability of each health state label based on the number of the health state labels;
combining the health state labels based on the joint distribution probability to obtain a second training sample set and a second verification data set;
training a preset initial relationship recognition model according to a second training sample set to obtain a characteristic relationship recognition model;
and acquiring an output result of the characteristic relation recognition model, and generating a correction matrix based on the output result.
Further, training a preset initial relationship recognition model according to a second training sample set to obtain a characteristic relationship recognition model, specifically comprising:
importing a second training sample set into a preset initial relationship recognition model for training to obtain an initial characteristic relationship recognition model;
verifying the initial characteristic relation recognition model through a second verification data set, and outputting a prediction result;
based on the prediction result and a second preset standard result, performing fitting calculation by using a back propagation algorithm to obtain a prediction error;
and comparing the prediction error with a second preset error threshold, if the prediction error is larger than the second preset error threshold, iteratively updating the initial characteristic relationship identification model until the prediction error is smaller than or equal to the second preset error threshold, and acquiring the characteristic relationship identification model passing the verification.
Further, iteratively updating the initial feature relationship identification model until the prediction error is less than or equal to a second preset error threshold, and obtaining the feature relationship identification model passing the verification, specifically including:
setting a loss function of the initial characteristic relation recognition model;
iteratively updating the initial characteristic relation recognition model based on the loss function until the output of the loss function reaches the minimum value;
and acquiring a characteristic relation identification model of which the output of the loss function reaches the minimum value.
Further, the step of obtaining an image to be recognized and recognizing the image to be recognized through the face feature recognition model to obtain the face feature of the user in the image to be recognized specifically includes:
acquiring an image to be recognized, scanning the image to be recognized, and recognizing a face area in the image to be recognized;
and carrying out feature recognition on the face region in the image to be recognized through the face feature recognition model to obtain the face features of the user in the image to be recognized.
In order to solve the above technical problem, an embodiment of the present application further provides an image-based health status identification apparatus, which adopts the following technical solutions:
an image-based health status recognition apparatus comprising:
the first model training module is used for acquiring a first training sample set from a preset database, and training a preset initial feature recognition model through the first training sample set to obtain a face feature recognition model;
the second model training module is used for acquiring a training result of the face feature recognition model, training the initial relationship recognition model based on the training result to obtain a feature relationship recognition model, and generating a correction matrix based on the feature relationship recognition model;
the face feature recognition module is used for acquiring an image to be recognized and recognizing the image to be recognized through the face feature recognition model to obtain the face feature of the user in the image to be recognized;
and the health state identification module is used for correcting the face characteristics of the image to be identified based on the correction matrix and identifying the health state of the user according to the correction result.
In order to solve the above technical problem, an embodiment of the present application further provides a computer device, which adopts the following technical solutions:
a computer device comprising a memory and a processor, the memory having computer readable instructions stored therein which, when executed by the processor, implement the steps of the image-based health status recognition method according to any one of the preceding claims.
In order to solve the above technical problem, an embodiment of the present application further provides a computer-readable storage medium, which adopts the following technical solutions:
a computer readable storage medium having computer readable instructions stored thereon which, when executed by a processor, implement the steps of the image based health status identification method according to any one of the preceding claims.
Compared with the prior art, the embodiment of the application mainly has the following beneficial effects:
the application discloses a health state identification method, a health state identification device, health state identification equipment and a storage medium based on images, which belong to the field of artificial intelligence. When the health state of the face is identified, extracting features through a pre-constructed face feature identification model to obtain the face features of the user, then correcting the face feature parameters of the user through a correction matrix, and finally identifying the health state of the face according to a correction result. The face health state recognition accuracy is improved, meanwhile, a large amount of computing resources and system resources are not occupied when the face characteristic parameters of the user are corrected through the correction matrix, the system operation pressure and the system integration pressure are favorably reduced, the adaptability is strong, and the face health state recognition method is convenient to deploy in a mobile terminal.
Drawings
In order to more clearly illustrate the solution of the present application, the drawings needed for describing the embodiments of the present application will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present application, and that other drawings can be obtained by those skilled in the art without inventive effort.
FIG. 1 illustrates an exemplary system architecture diagram in which the present application may be applied;
FIG. 2 illustrates a flow diagram of one embodiment of an image-based health status identification method in accordance with the present application;
FIG. 3 is a flow chart of one embodiment of step S201 of FIG. 2;
FIG. 4 is a flow diagram illustrating one embodiment of step S202 of FIG. 2;
FIG. 5 illustrates a schematic structural diagram of one embodiment of an image-based health status identification apparatus in accordance with the present application;
FIG. 6 shows a schematic block diagram of one embodiment of a computer device according to the present application.
Detailed Description
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs; the terminology used in the description of the application herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application; the terms "including" and "having," and any variations thereof, in the description and claims of this application and the description of the above figures are intended to cover non-exclusive inclusions. The terms "first," "second," and the like in the description and claims of this application or in the above-described drawings are used for distinguishing between different objects and not for describing a particular order.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings.
As shown in fig. 1, the system architecture 100 may include terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 serves as a medium for providing communication links between the terminal devices 101, 102, 103 and the server 105. Network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
The user may use the terminal devices 101, 102, 103 to interact with the server 105 via the network 104 to receive or send messages or the like. The terminal devices 101, 102, 103 may have various communication client applications installed thereon, such as a web browser application, a shopping application, a search application, an instant messaging tool, a mailbox client, social platform software, and the like.
The terminal devices 101, 102, 103 may be various electronic devices having a display screen and supporting web browsing, including but not limited to smart phones, tablet computers, e-book readers, MP3 players (Moving Picture Experts Group Audio Layer III), MP4 players (Moving Picture Experts Group Audio Layer IV), laptop portable computers, desktop computers, and the like.
The server 105 may be a server providing various services, such as a background server providing support for pages displayed on the terminal devices 101, 102, 103.
It should be noted that the image-based health status identification method provided in the embodiment of the present application is generally executed by a server/terminal device, and accordingly, the image-based health status identification apparatus is generally disposed in the server/terminal device.
It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
With continuing reference to FIG. 2, a flow diagram of one embodiment of a method of image-based health status identification in accordance with the present application is shown. The image-based health state identification method comprises the following steps:
s201, a first training sample set is obtained from a preset database, and a preset initial feature recognition model is trained through the first training sample set to obtain a face feature recognition model.
With the wide use of artificial intelligence in the financial field, conducting financial transactions on mobile terminals has become increasingly common. Such transactions generally involve a fairly strict approval process, in which determining whether the user is healthy and clear-minded is a precondition for ensuring that the operations the user performs have legal effect. The key factor in determining whether a user's operation has legal effect is whether the client is conscious while performing it. In practical application scenarios, analysis of the business scenario shows that the main causes of a client being unconscious fall into three cases: 1. drunkenness; 2. extreme lack of rest; 3. certain acute or chronic diseases. These causes are reflected in corresponding features on the user's face, which can be summarized as follows: eye conditions (e.g., edema, sunkenness), cheek conditions (e.g., edema, sunkenness), and skin conditions (e.g., flushing or redness from alcohol intoxication, skin dryness due to lack of rest).
Specifically, sample data is obtained from a preset database; in this application the sample data is face images of different users. The sample data is labeled based on the facial features, the labeled sample data is combined to obtain a first training sample set, and a preset initial feature recognition model is trained on the first training sample set to obtain a face feature recognition model. The preset initial feature recognition model is a deep convolutional neural network model, such as a CNN model. A Convolutional Neural Network (CNN) is a class of feedforward neural networks that involve convolution computations and have a deep structure, and it is one of the representative algorithms of deep learning. Convolutional neural networks have a representation learning capability and can perform shift-invariant classification of input information according to their hierarchical structure, for which reason they are also called "shift-invariant artificial neural networks". The convolutional neural network is built by imitating the biological visual perception mechanism and can perform both supervised and unsupervised learning; the parameter sharing of convolution kernels within a convolutional layer and the sparsity of inter-layer connections enable it to learn grid-like topology features (such as pixels and audio) with a small amount of computation, with stable effect and no additional feature engineering requirements on the data.
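The patent does not disclose a concrete network architecture for the initial feature recognition model. As a purely illustrative sketch, a minimal CNN feature extractor of this kind could be written in PyTorch as follows; the layer sizes, the number of facial-feature outputs, and the choice of PyTorch are assumptions, not part of the disclosure:

```python
import torch
import torch.nn as nn

class FaceFeatureNet(nn.Module):
    """Minimal CNN sketch: maps a face image to one logit per facial feature."""
    def __init__(self, num_features: int = 8):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),  # global pooling keeps the head input-size agnostic
        )
        # One logit per facial feature, e.g. eye edema, cheek sunkenness, skin flushing.
        self.head = nn.Linear(64, num_features)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = self.backbone(x).flatten(1)
        return self.head(z)  # facial-feature logits, one per labeled feature
```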
S202, collecting a training result of the face feature recognition model, training the initial relationship recognition model based on the training result to obtain a feature relationship recognition model, and generating a correction matrix based on the feature relationship recognition model.
Specifically, the training results output while training the face feature recognition model are collected and labeled; the labeled training results are combined to obtain a second training sample set; an initial relationship recognition model is trained on the second training sample set to obtain a trained feature relationship recognition model; the trained feature relationship recognition model is verified until the model fits; and a correction matrix is generated from the output of the feature relationship recognition model, which is in matrix form.
Here, the initial relationship recognition model is a graph convolutional neural network model, such as a GCN model. The Graph Convolutional Network (GCN) is a neural network structure that has become popular in recent years. Unlike traditional network models such as LSTM and CNN, which can only be used on grid-structured data, graph convolutional networks can process data with a generalized topological graph structure and deeply mine its features and regularities, for example irregular data with a spatial topological structure such as citation networks (as used by PageRank), social networks, communication networks and protein molecular structures. The GCN provides a cleverly designed method for extracting features from graph data; it can learn node feature information and structural information end to end simultaneously, is currently the best choice for graph data learning tasks, and graph convolution has extremely wide applicability, suiting nodes and graphs of arbitrary topology.
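The disclosure names a GCN but gives no layer equations. A hedged sketch of the standard graph-convolution layer (the Kipf-Welling form H' = ReLU(A_hat @ H @ W)), with the adjacency matrix assumed to be built from the label relationships described below, might look like this:

```python
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    """One standard graph-convolution layer: H' = ReLU(A_hat @ H @ W)."""
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim, bias=False)  # the learnable W

    def forward(self, a_hat: torch.Tensor, h: torch.Tensor) -> torch.Tensor:
        # a_hat: normalized adjacency over health-state labels (assumed derived
        #        from the joint distribution probabilities described below)
        # h:     node features, e.g. word2vec vectors of the labels
        return torch.relu(a_hat @ self.linear(h))
```

Stacking such layers and reading the final node embeddings out as a matrix would produce an output in the matrix form the patent attributes to the feature relationship recognition model.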
S203, acquiring an image to be recognized, and recognizing the image to be recognized through the human face feature recognition model to obtain the facial features of the user in the image to be recognized.
Specifically, when a health state recognition instruction is received, the user's image to be recognized is acquired and recognized by the trained face feature recognition model to obtain the user's facial features. It should be noted that feature recognition of the image to be recognized through the face feature recognition model can yield a plurality of facial features of the user in the image.
In this embodiment, the electronic device (for example, the server/terminal device shown in fig. 1) on which the image-based health status identification method operates may receive the health status identification instruction through a wired or wireless connection. It should be noted that the wireless connection may include, but is not limited to, a 3G/4G connection, a WiFi connection, a Bluetooth connection, a WiMAX connection, a Zigbee connection, a UWB (Ultra Wideband) connection, and other wireless connection means now known or developed in the future.
And S204, correcting the face characteristics of the image to be recognized based on the correction matrix, and recognizing the health state of the user according to the correction result.
Specifically, the plurality of facial features output by the face feature recognition model are obtained; these outputs are in matrix form. The correction matrix is used to take inner products with the outputs of the face feature recognition model, thereby completing the correction of the plurality of facial features of the image to be recognized; the correction matrix can be regarded as a weight matrix, a tool for correcting the facial features. Sigmoid processing is then applied to the correction result, yielding a specific numerical value, and the user's health state is judged from that value.
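Read literally, this step is an inner product of the matrix-form feature output with the correction matrix, followed by sigmoid processing to a single value. A minimal sketch under that reading (the mean reduction and the 0.5 decision threshold are assumptions the patent does not state):

```python
import torch

def judge_health_state(features: torch.Tensor,
                       correction: torch.Tensor,
                       threshold: float = 0.5) -> bool:
    """Correct the facial-feature matrix and judge the health state from the result."""
    corrected = features @ correction        # inner product with the correction matrix
    score = torch.sigmoid(corrected).mean()  # sigmoid processing -> one specific value
    return bool(score >= threshold)          # True: healthy / clear-minded (assumed rule)
```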
In the specific embodiment of the application, a CNN deep convolutional network model and a GCN graph convolutional network model are combined for model training to obtain the face feature recognition model and the feature relationship recognition model, and the correction matrix of the face feature recognition model is generated from the feature relationship recognition model. The trained face feature recognition model and the correction matrix can be deployed to a mobile terminal; when recognizing the health state, correcting the user's facial feature parameters with the correction matrix does not occupy large amounts of computing and system resources, which helps reduce system load and integration pressure, gives strong adaptability, and makes deployment on a mobile terminal convenient.
Further, referring to fig. 3, fig. 3 is a flowchart illustrating an embodiment of step S201 in fig. 2, where the step of obtaining a first training sample set from a preset database, training an initial feature recognition model through the first training sample set, and obtaining a face feature recognition model specifically includes:
s301, obtaining sample data from a preset database, and labeling the sample data;
s302, randomly combining the labeled sample data to obtain a first training sample set and a first verification data set;
s303, importing the first training sample set into an initial feature recognition model for model training to obtain an initial human face feature recognition model;
s304, the initial face feature recognition model is verified through the first verification data set, and the face feature recognition model passing verification is output.
Specifically, sample data (face images of different users) is obtained from a preset database and labeled based on its facial features. The labeled sample data is randomly combined to obtain a first training sample set and a first verification data set; for example, the labeled sample data can be randomly divided into 10 equal sample subsets, of which nine are randomly combined as the first training sample set while the remaining subset serves as the first verification data set. The first training sample set is imported into the initial feature recognition model for model training to obtain an initial face feature recognition model, the initial face feature recognition model is verified on the first verification data set, and the face feature recognition model that passes verification is output. In the above embodiment, the face feature recognition model can be obtained quickly by constructing the first training sample set and the first verification data set and using them respectively to train and verify the initial feature recognition model.
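The 10-way random split described above is straightforward to express in code; a short sketch (the 9:1 ratio follows the example in the text, and the seed is purely illustrative):

```python
import random

def split_samples(samples, n_folds: int = 10, seed: int = 42):
    """Shuffle labeled samples and split them 9/10 training, 1/10 verification."""
    rng = random.Random(seed)
    shuffled = list(samples)
    rng.shuffle(shuffled)
    fold = len(shuffled) // n_folds
    # (first training sample set, first verification data set)
    return shuffled[fold:], shuffled[:fold]
```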
Further, the steps of verifying the initial face feature recognition model through the first verification data set and outputting the face feature recognition model passing the verification specifically include:
importing the first verification data set into an initial human face feature recognition model, and outputting a recognition result;
based on the recognition result and the first preset standard result, performing fitting calculation by using a back propagation algorithm to obtain a recognition error;
and comparing the identification error with a first preset error threshold, if the identification error is greater than the first preset error threshold, iteratively updating the initial face feature identification model until the identification error is less than or equal to the first preset error threshold, and acquiring the face feature identification model passing verification.
The back propagation algorithm (BP algorithm) is a learning algorithm suited to multi-layer neuron networks; it is built on gradient descent and is used for error computation in deep learning networks. The input-output relationship of a BP network is essentially a mapping: a BP neural network with n inputs and m outputs realizes a continuous mapping from n-dimensional Euclidean space to a finite domain in m-dimensional Euclidean space, and this mapping is highly nonlinear. The learning process of the BP algorithm consists of a forward propagation pass and a backward propagation pass. In the forward pass, the input passes from the input layer through the hidden layers, is processed layer by layer, and reaches the output layer. The process then switches to backward propagation: the partial derivatives of the objective function with respect to each neuron's weights are computed layer by layer, forming the gradient of the objective function with respect to the weight vector, which serves as the basis for modifying the weights.
Specifically, the first verification data set is imported into the initial face feature recognition model and a recognition result is output. Based on the recognition result and the first preset standard result, a fitting calculation is performed with the back propagation algorithm to obtain a recognition error. The recognition error is compared with a first preset error threshold; if the recognition error is greater than the threshold, the initial face feature recognition model is iteratively updated based on its loss function until the recognition error is less than or equal to the threshold, and the face feature recognition model passing verification is obtained. The first preset standard result and the first preset error threshold can be set in advance. In this embodiment, the face feature recognition model is verified and iterated through the back propagation algorithm, yielding a face feature recognition model that meets the requirements.
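A hedged PyTorch sketch of this verify-and-iterate loop: back-propagation updates run on training batches, and iteration stops once the recognition error on the verification set is at or below the first preset error threshold (the optimizer, loss function and epoch cap are assumptions, not disclosed details):

```python
import torch

def train_until_threshold(model, loss_fn, optimizer, train_loader,
                          val_x, val_y, error_threshold: float, max_epochs: int = 100):
    """Iteratively update the model until the verification error <= the preset threshold."""
    for _ in range(max_epochs):
        for x, y in train_loader:          # forward pass + back propagation per batch
            optimizer.zero_grad()
            loss_fn(model(x), y).backward()
            optimizer.step()
        with torch.no_grad():              # recognition error vs. the preset standard result
            error = loss_fn(model(val_x), val_y).item()
        if error <= error_threshold:       # verification passed
            break
    return model
```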
Further, referring to fig. 4, fig. 4 is a flowchart illustrating a specific implementation manner of step S202 in fig. 2, where the step of acquiring a training result of the face feature recognition model, training the initial relationship recognition model based on the training result to obtain a feature relationship recognition model, and generating a correction matrix based on the feature relationship recognition model specifically includes:
s401, collecting a training result of the face feature recognition model, and labeling the training result to obtain a health state label;
s402, counting the number of the health state labels, and calculating the joint distribution probability of each health state label based on the number of the health state labels;
s403, combining the health state labels based on the joint distribution probability to obtain a second training sample set and a second verification data set;
s404, training a preset initial relationship recognition model according to a second training sample set to obtain a characteristic relationship recognition model;
s405, obtaining an output result of the characteristic relation recognition model, and generating a correction matrix based on the output result.
In this embodiment, labels for the health state of the face are obtained, such as flushed face (drunkenness), eye edema (lack of rest), dark circles under the eyes (lack of rest), and so on. In the specific embodiment of the application, the weight of each health state label on the health state of the face is expressed by its joint distribution probability.
Specifically, the first training sample set is imported into the face feature recognition model for training, each training result of the face feature recognition model is collected, and the training results are labeled to obtain health state labels; for example, if a training result shows that the user's face is flushed, the label is "drunk", and if it shows dark circles under the user's eyes, the label is "lack of rest". The number of all obtained health state labels is counted, the joint distribution probability of each health state label is calculated from those counts, and the health state labels are combined based on the joint distribution probability to obtain a second training sample set and a second verification data set. A preset initial relationship recognition model is trained on the second training sample set to obtain the feature relationship recognition model, whose output is obtained and used to generate the correction matrix. In the above embodiment, labeling and combining the training results of the face feature recognition model yields the second training sample set and second verification data set; training the initial relationship recognition model on them yields the feature relationship recognition model; and generating the correction matrix from the output of the feature relationship recognition model allows the feature relationship recognition model and correction matrix to be obtained quickly.
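The patent does not define how the joint distribution probability is computed from the label counts; one plausible reading is pairwise co-occurrence frequency across the collected training results, sketched here with illustrative data:

```python
from collections import Counter
from itertools import combinations

def joint_distribution(label_sets):
    """Estimate P(label_a, label_b) as the fraction of training results carrying both labels."""
    pair_counts = Counter()
    for labels in label_sets:                          # labels attached to one training result
        for pair in combinations(sorted(set(labels)), 2):
            pair_counts[pair] += 1
    total = len(label_sets)
    return {pair: n / total for pair, n in pair_counts.items()}

# Example: three training results and their health-state labels.
results = [{"drunk", "eye edema"}, {"drunk"}, {"eye edema", "lack of rest"}]
probs = joint_distribution(results)   # e.g. {('drunk', 'eye edema'): 1/3, ...}
```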
In a specific embodiment of the present application, before the step of combining the health status labels based on the joint distribution probability to obtain the second training sample set and the second verification data set, the method further includes:
the health status label is vectorized by the work2vec tool.
word2vec is a tool for computing word vectors. It can be trained efficiently on dictionaries with millions of entries and datasets with billions of tokens, and the word vectors (word embeddings) it produces measure similarity between words well. Behind the word2vec algorithm is a shallow neural network. It should also be emphasized that word2vec is an open-source tool for computing word vectors.
Specifically, each health state label is vectorized by the word2vec tool to obtain the feature vector of each health state label; the feature vectors are randomly combined according to the joint distribution probability to obtain the second training sample set and the second verification data set, both of which are data sets in vector form. The vector-form data sets are input into the feature relationship recognition model to obtain an output result in matrix form, from which the correction matrix is obtained.
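Using the gensim implementation of word2vec, the label vectorization step could be sketched as follows; the corpus layout (the labels of each training result form one "sentence") and all hyperparameters are assumptions:

```python
from gensim.models import Word2Vec

# Health-state labels per training result (illustrative data only).
label_corpus = [
    ["drunk", "eye_edema"],
    ["lack_of_rest", "dark_circles"],
    ["drunk"],
]

# Train a small word2vec model, then look up the feature vector of each label.
w2v = Word2Vec(sentences=label_corpus, vector_size=32, window=2, min_count=1, seed=1)
label_vectors = {label: w2v.wv[label]
                 for sentence in label_corpus for label in sentence}
```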
Further, training a preset initial relationship recognition model according to a second training sample set to obtain a characteristic relationship recognition model, specifically comprising:
importing a second training sample set into a preset initial relationship recognition model for training to obtain an initial characteristic relationship recognition model;
verifying the initial characteristic relation recognition model through a second verification data set, and outputting a prediction result;
based on the prediction result and a second preset standard result, performing fitting calculation by using a back propagation algorithm to obtain a prediction error;
and comparing the prediction error with a second preset error threshold, if the prediction error is larger than the second preset error threshold, iteratively updating the initial characteristic relationship identification model until the prediction error is smaller than or equal to the second preset error threshold, and acquiring the characteristic relationship identification model passing the verification.
Specifically, the second training sample set is imported into the initial relationship recognition model for training to obtain an initial feature relationship recognition model. The second verification data set is imported into the initial feature relationship recognition model for verification, and a prediction result is output. Based on the prediction result and a second preset standard result, a fitting calculation is performed with the back propagation algorithm to obtain a prediction error. The prediction error is compared with a second preset error threshold; if the prediction error is greater than the threshold, the initial feature relationship recognition model is iteratively updated based on its loss function until the prediction error is less than or equal to the threshold, and the verified feature relationship recognition model is obtained. The second preset standard result and the second preset error threshold can be set in advance. In the above embodiment, the feature relationship recognition model is verified and iterated through the back propagation algorithm, yielding a feature relationship recognition model that meets the requirements.
Further, iteratively updating the initial feature relationship identification model until the prediction error is less than or equal to a second preset error threshold, and obtaining the feature relationship identification model passing the verification, specifically including:
setting a loss function of the initial characteristic relation recognition model;
iteratively updating the initial characteristic relation recognition model based on the loss function until the output of the loss function reaches the minimum value;
and acquiring a characteristic relation identification model of which the output of the loss function reaches the minimum value.
Specifically, a loss function of the initial feature relationship recognition model is constructed, and the parameters of the feature relationship recognition model are optimized based on it until the loss function reaches its minimum value. The loss function L of the feature relationship recognition model is specifically:

L = -∑_{i=1}^{c} [ y_i · log σ(ŷ_i) + (1 − y_i) · log(1 − σ(ŷ_i)) ]

where c is the number of labels, y_i is the true label value, ŷ_i is the prediction output, and σ is the sigmoid function.
In the above embodiment, the iterative update of the initial feature relationship recognition model is realized by constructing the loss function of the initial feature relationship recognition model and performing parameter optimization on the feature relationship recognition model based on the constructed loss function.
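This is the standard multi-label sigmoid cross-entropy; in PyTorch it can be computed directly from the prediction logits, for example as below (a sketch, equivalent up to reduction to torch.nn.BCEWithLogitsLoss):

```python
import torch

def relation_loss(y_pred: torch.Tensor, y_true: torch.Tensor) -> torch.Tensor:
    """L = -sum_i [ y_i*log(sigmoid(y_pred_i)) + (1-y_i)*log(1-sigmoid(y_pred_i)) ]."""
    p = torch.sigmoid(y_pred)
    eps = 1e-7  # numerical guard against log(0)
    return -(y_true * torch.log(p + eps)
             + (1 - y_true) * torch.log(1 - p + eps)).sum()
```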
Further, the step of obtaining an image to be recognized and recognizing the image to be recognized through the face feature recognition model to obtain the face feature of the user in the image to be recognized specifically includes:
acquiring an image to be recognized, scanning the image to be recognized, and recognizing a face area in the image to be recognized;
and carrying out feature recognition on the face region in the image to be recognized through the face feature recognition model to obtain the face features of the user in the image to be recognized.
Specifically, when recognizing the face health state, the image to be recognized is acquired and input into the trained face feature recognition model to obtain the facial features of the user in the image. The face region in the image to be recognized is identified through an SSD model.
In a specific embodiment of the application, after the image to be recognized is acquired, it is normalized, and a pre-trained SSD (Single Shot MultiBox Detector) model is used to extract the face region. The SSD model extracts the coordinates of the upper-left and lower-right points of the face frame; the face region is cropped from the original image according to the extracted coordinates, scaled to 448x448, and input into the face feature recognition model for recognition to obtain the user's facial features.
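As an illustration only, OpenCV's pretrained ResNet-10 SSD face detector can stand in for the patent's SSD model; this sketch takes the highest-confidence face box, crops it from the original image, and scales it to 448x448 (the detector weights and preprocessing constants are OpenCV's, not the patent's):

```python
import cv2

def extract_face(image_bgr, detector, size: int = 448):
    """Detect the face box with an SSD, crop it, and scale it to size x size."""
    h, w = image_bgr.shape[:2]
    blob = cv2.dnn.blobFromImage(image_bgr, 1.0, (300, 300), (104.0, 177.0, 123.0))
    detector.setInput(blob)
    det = detector.forward()[0, 0]   # shape (N, 7): [_, _, confidence, x1, y1, x2, y2]
    best = det[det[:, 2].argmax()]   # highest-confidence detection
    x1, y1, x2, y2 = (best[3:7] * [w, h, w, h]).astype(int)  # upper-left / lower-right points
    face = image_bgr[max(y1, 0):y2, max(x1, 0):x2]
    return cv2.resize(face, (size, size))

# detector = cv2.dnn.readNetFromCaffe("deploy.prototxt", "res10_300x300_ssd.caffemodel")
```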
In the specific embodiment of the application, the face region of the image to be detected is cropped out by a pre-constructed SSD model, and the user's facial features are then extracted with the pre-constructed face feature recognition model to obtain a facial feature tensor. The user's facial feature tensor is then corrected with the pre-constructed correction matrix, and finally the user's health state is recognized from the correction result. In this application, the facial features are corrected by a simple correction matrix, which improves the accuracy of health state recognition, gives strong adaptability, and does not occupy large amounts of computing resources during recognition.
It should be emphasized that, in order to further ensure the privacy and security of the image to be recognized, the image to be recognized may also be stored in a node of a blockchain.
The blockchain referred to in this application is a novel application mode of computer technologies such as distributed data storage, point-to-point transmission, consensus mechanisms and encryption algorithms. A blockchain is essentially a decentralized database: a chain of data blocks linked by cryptographic methods, each block containing a batch of network transaction information used to verify the validity of the information (anti-counterfeiting) and to generate the next block. A blockchain may comprise an underlying blockchain platform, a platform product service layer, an application service layer, and the like.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by computer readable instructions directing the relevant hardware; the instructions can be stored in a computer readable storage medium and, when executed, can include the processes of the method embodiments described above. The storage medium may be a non-volatile storage medium such as a magnetic disk, an optical disk, a Read-Only Memory (ROM), or a Random Access Memory (RAM).
It should be understood that although the steps in the flowcharts of the figures are shown in an order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated herein, the execution of these steps is not strictly limited to the order shown, and they may be performed in other orders. Moreover, at least some of the steps in the flowcharts may comprise multiple sub-steps or stages, which are not necessarily completed at the same moment but may be executed at different moments, and whose execution order is not necessarily sequential: they may be performed in turn or alternately with other steps, or with at least some of the sub-steps or stages of other steps.
With further reference to fig. 5, as an implementation of the method shown in fig. 2, the present application provides an embodiment of an image-based health status recognition apparatus, which corresponds to the embodiment of the method shown in fig. 2, and which is particularly applicable to various electronic devices.
As shown in fig. 5, the image-based health status recognition apparatus according to the present embodiment includes:
the first model training module 501 is configured to obtain a first training sample set from a preset database, train a preset initial feature recognition model through the first training sample set, and obtain a face feature recognition model;
the second model training module 502 is configured to acquire a training result of the face feature recognition model, train the initial relationship recognition model based on the training result to obtain a feature relationship recognition model, and generate a correction matrix based on the feature relationship recognition model;
the face feature recognition module 503 is configured to acquire an image to be recognized, and recognize the image to be recognized through the face feature recognition model to obtain a face feature of the user in the image to be recognized;
and the health state identification module 504 is configured to correct the facial features of the image to be identified based on the correction matrix, and identify the health state of the user according to the correction result.
Further, the first model training module 501 specifically includes:
the first labeling submodule is used for acquiring sample data from a preset database and labeling the sample data;
the first combination submodule is used for carrying out random combination on the labeled sample data to obtain a first training sample set and a first verification data set;
the first training submodule is used for importing a first training sample set into the initial feature recognition model for model training to obtain an initial face feature recognition model;
and the first verification submodule is used for verifying the initial face feature recognition model through the first verification data set and outputting the face feature recognition model passing the verification.
Further, the first verification submodule specifically includes:
the first verification unit is used for importing a first verification data set into the initial human face feature recognition model and outputting a recognition result;
the first error calculation unit is used for performing fitting calculation by using a back propagation algorithm based on the identification result and a first preset standard result to obtain an identification error;
and the first iteration unit is used for comparing the identification error with a first preset error threshold, and if the identification error is greater than the first preset error threshold, iteratively updating the initial face feature recognition model until the identification error is less than or equal to the first preset error threshold, so as to obtain the face feature recognition model passing the verification.
Further, the second model training module 502 specifically includes:
the second labeling submodule is used for acquiring a training result of the face feature recognition model and labeling the training result to obtain a health state label;
the joint distribution probability calculation submodule is used for counting the number of the health state labels and calculating the joint distribution probability of each health state label based on the number of the health state labels;
the second combination submodule is used for combining the health state labels based on the joint distribution probability to obtain a second training sample set and a second verification data set;
the second training submodule is used for training the initial relationship recognition model according to a second training sample set to obtain a characteristic relationship recognition model;
and the correction matrix generation submodule is used for acquiring the output result of the characteristic relation recognition model and generating a correction matrix based on the output result.
Further, the second training submodule specifically includes:
the second training unit is used for importing a second training sample set into the initial relationship recognition model for training to obtain an initial characteristic relationship recognition model;
the second verification unit is used for verifying the initial characteristic relation recognition model through a second verification data set and outputting a prediction result;
the second error calculation unit is used for performing fitting calculation by using a back propagation algorithm based on the prediction result and a second preset standard result to obtain a prediction error;
and the second iteration unit is used for comparing the prediction error with a second preset error threshold, if the prediction error is larger than the second preset error threshold, iteratively updating the initial characteristic relationship identification model until the prediction error is smaller than or equal to the second preset error threshold, and acquiring the verified characteristic relationship identification model.
Further, the second iteration unit specifically includes:
the loss function setting subunit is used for setting a loss function of the initial characteristic relationship identification model;
the second iteration subunit is used for carrying out iteration updating on the initial characteristic relation identification model based on the loss function until the output of the loss function reaches the minimum value;
and acquiring a characteristic relation identification model of which the output of the loss function reaches the minimum value.
Further, the facial feature recognition module 503 specifically includes:
the face area identification submodule is used for acquiring an image to be identified, scanning the image to be identified and identifying a face area in the image to be identified;
and the facial feature recognition submodule is used for carrying out feature recognition on the face region in the image to be recognized through the face feature recognition model to obtain the facial features of the user in the image to be recognized.
The application discloses an image-based health state recognition device, which belongs to the field of artificial intelligence. A correction matrix for a pre-constructed face feature recognition model is generated based on that model and a pre-constructed feature relationship recognition model. When the health state of a face is recognized, features are first extracted through the pre-constructed face feature recognition model to obtain the facial features of the user, the facial feature parameters of the user are then corrected through the correction matrix, and the health state of the face is finally recognized according to the correction result. This improves the accuracy of facial health state recognition; at the same time, correcting the facial feature parameters of the user through the correction matrix does not occupy large amounts of computing or system resources, which helps relieve system operation and integration pressure, gives the method strong adaptability, and makes it convenient to deploy on mobile terminals.
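The correction step itself reduces, in essence, to a single matrix-vector product, which is why it adds almost no computational load at inference time. Below is a hedged sketch; the 4x4 matrix, the feature dimension, and the function name `correct_features` are illustrative assumptions, not values taken from the application.

```python
# Applying a precomputed correction matrix to facial feature parameters.
import numpy as np

def correct_features(features, correction_matrix):
    """Correct the extracted facial feature parameters in one step."""
    return np.asarray(correction_matrix) @ np.asarray(features, dtype=float)

rng = np.random.default_rng(0)
M = rng.random((4, 4))                  # stand-in 4x4 correction matrix
print(correct_features([0.2, 0.5, 0.1, 0.7], M))
```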
In order to solve the technical problem, an embodiment of the present application further provides a computer device. Referring to fig. 6, fig. 6 is a block diagram of a basic structure of a computer device according to the present embodiment.
The computer device 6 comprises a memory 61, a processor 62, and a network interface 63 communicatively connected to each other via a system bus. Note that only a computer device 6 with components 61-63 is shown; not all of the illustrated components are required, and more or fewer components may be implemented instead. As will be understood by those skilled in the art, the computer device here is a device capable of automatically performing numerical calculation and/or information processing according to preset or stored instructions; its hardware includes, but is not limited to, a microprocessor, an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA), a Digital Signal Processor (DSP), an embedded device, and the like.
The computer device may be a desktop computer, a notebook, a palmtop computer, a cloud server, or another computing device. The computer device can interact with a user through a keyboard, a mouse, a remote controller, a touch panel, a voice control device, or the like.
The memory 61 includes at least one type of readable storage medium including a flash memory, a hard disk, a multimedia card, a card type memory (e.g., SD or DX memory, etc.), a Random Access Memory (RAM), a Static Random Access Memory (SRAM), a Read Only Memory (ROM), an Electrically Erasable Programmable Read Only Memory (EEPROM), a Programmable Read Only Memory (PROM), a magnetic memory, a magnetic disk, an optical disk, etc. In some embodiments, the memory 61 may be an internal storage unit of the computer device 6, such as a hard disk or a memory of the computer device 6. In other embodiments, the memory 61 may also be an external storage device of the computer device 6, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like, which are provided on the computer device 6. Of course, the memory 61 may also comprise both an internal storage unit of the computer device 6 and an external storage device thereof. In this embodiment, the memory 61 is generally used for storing an operating system installed in the computer device 6 and various types of application software, such as computer readable instructions of the image-based health status identification method. Further, the memory 61 may also be used to temporarily store various types of data that have been output or are to be output.
The processor 62 may, in some embodiments, be a Central Processing Unit (CPU), a controller, a microcontroller, a microprocessor, or another data processing chip. The processor 62 is typically used to control the overall operation of the computer device 6. In this embodiment, the processor 62 is configured to execute computer readable instructions stored in the memory 61 or to process data, for example to execute the computer readable instructions of the image-based health status identification method.
The network interface 63 may comprise a wireless network interface or a wired network interface, and the network interface 63 is typically used for establishing a communication connection between the computer device 6 and other electronic devices.
The application discloses a computer device, which belongs to the field of artificial intelligence. A correction matrix for a pre-constructed face feature recognition model is generated based on that model and a pre-constructed feature relationship recognition model. When the health state of a face is recognized, features are first extracted through the pre-constructed face feature recognition model to obtain the facial features of the user, the facial feature parameters of the user are then corrected through the correction matrix, and the health state of the face is finally recognized according to the correction result. This improves the accuracy of facial health state recognition; at the same time, correcting the facial feature parameters of the user through the correction matrix does not occupy large amounts of computing or system resources, which helps relieve system operation and integration pressure, gives the method strong adaptability, and makes it convenient to deploy on mobile terminals.
The present application further provides another embodiment, namely a computer-readable storage medium storing computer-readable instructions executable by at least one processor to cause the at least one processor to perform the steps of the image-based health status identification method described above.
The application discloses a storage medium, which belongs to the field of artificial intelligence. A correction matrix for a pre-constructed face feature recognition model is generated based on that model and a pre-constructed feature relationship recognition model. When the health state of a face is recognized, features are first extracted through the pre-constructed face feature recognition model to obtain the facial features of the user, the facial feature parameters of the user are then corrected through the correction matrix, and the health state of the face is finally recognized according to the correction result. This improves the accuracy of facial health state recognition; at the same time, correcting the facial feature parameters of the user through the correction matrix does not occupy large amounts of computing or system resources, which helps relieve system operation and integration pressure, gives the method strong adaptability, and makes it convenient to deploy on mobile terminals.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present application.
It should be understood that the above-described embodiments are merely illustrative and not restrictive, and that the appended drawings show preferred embodiments of the application without limiting its scope. This application can be embodied in many different forms; the embodiments are provided so that the disclosure of the application will be thorough. Although the present application has been described in detail with reference to the foregoing embodiments, those skilled in the art may still modify the technical solutions described in the foregoing embodiments or substitute equivalents for some of their features. All equivalent structures made using the contents of the specification and drawings of the present application, whether applied directly or indirectly in other related technical fields, fall within the protection scope of the present application.

Claims (10)

1. An image-based health status recognition method, comprising:
acquiring a first training sample set from a preset database, and training a preset initial feature recognition model through the first training sample set to obtain a face feature recognition model;
acquiring a training result of the face feature recognition model, training an initial relationship recognition model based on the training result to obtain a feature relationship recognition model, and generating a correction matrix based on the feature relationship recognition model;
acquiring an image to be recognized, and recognizing the image to be recognized through the face feature recognition model to obtain the facial features of the user in the image to be recognized;
and correcting the facial features of the image to be recognized based on the correction matrix, and recognizing the health state of the user according to the correction result.
2. The image-based health status recognition method according to claim 1, wherein the step of obtaining a first training sample set from a preset database, training a preset initial feature recognition model through the first training sample set, and obtaining a face feature recognition model specifically comprises:
acquiring sample data from the preset database, and labeling the sample data;
randomly combining the labeled sample data to obtain a first training sample set and a first verification data set;
importing the first training sample set into the initial feature recognition model for model training to obtain an initial face feature recognition model;
and verifying the initial face feature recognition model through the first verification data set, and outputting the face feature recognition model passing verification.
3. The image-based health status recognition method of claim 2, wherein the step of verifying the initial face feature recognition model by the first verification data set and outputting the face feature recognition model that passes the verification comprises:
importing the first verification data set into the initial face feature recognition model, and outputting a recognition result;
performing a fitting calculation by using a back propagation algorithm based on the recognition result and a first preset standard result to obtain an identification error;
and comparing the identification error with a first preset error threshold and, if the identification error is greater than the first preset error threshold, iteratively updating the initial face feature recognition model until the identification error is less than or equal to the first preset error threshold, so as to obtain a verified face feature recognition model.
4. The image-based health status recognition method according to claim 1, wherein the step of acquiring a training result of the face feature recognition model, training an initial relationship recognition model based on the training result to obtain a feature relationship recognition model, and generating a correction matrix based on the feature relationship recognition model specifically comprises:
acquiring a training result of the face feature recognition model, and labeling the training result to obtain a health state label;
counting the number of the health state labels, and calculating the joint distribution probability of each health state label based on the number of the health state labels;
combining the health state labels based on the joint distribution probability to obtain a second training sample set and a second verification data set;
training a preset initial relationship recognition model according to the second training sample set to obtain a feature relationship recognition model;
and acquiring an output result of the feature relationship recognition model, and generating a correction matrix based on the output result.
5. The image-based health status recognition method according to claim 4, wherein the step of training a preset initial relationship recognition model according to the second training sample set to obtain a feature relationship recognition model specifically comprises:
importing the second training sample set into the preset initial relationship recognition model for training to obtain an initial feature relationship recognition model;
verifying the initial feature relationship recognition model through the second verification data set, and outputting a prediction result;
performing a fitting calculation by using a back propagation algorithm based on the prediction result and a second preset standard result to obtain a prediction error;
and comparing the prediction error with a second preset error threshold and, if the prediction error is greater than the second preset error threshold, iteratively updating the initial feature relationship recognition model until the prediction error is less than or equal to the second preset error threshold, so as to obtain a verified feature relationship recognition model.
6. The image-based health status recognition method according to claim 5, wherein the step of iteratively updating the initial feature relationship recognition model until the prediction error is less than or equal to the second preset error threshold and acquiring the verified feature relationship recognition model specifically comprises:
setting a loss function of the initial feature relationship recognition model;
iteratively updating the initial feature relationship recognition model based on the loss function until the output of the loss function reaches a minimum value;
and acquiring the feature relationship recognition model whose loss function output reaches the minimum value.
7. The image-based health status recognition method according to any one of claims 1 to 6, wherein the step of obtaining the image to be recognized and recognizing the image to be recognized through the face feature recognition model to obtain the facial features of the user in the image to be recognized specifically comprises:
acquiring the image to be recognized, scanning the image to be recognized, and recognizing a face area in the image to be recognized;
and performing feature recognition on the face region in the image to be recognized through the face feature recognition model to obtain the facial features of the user in the image to be recognized.
8. An image-based health status recognition apparatus, comprising:
the first model training module is used for acquiring a first training sample set from a preset database, and training a preset initial feature recognition model through the first training sample set to obtain a face feature recognition model;
the second model training module is used for acquiring a training result of the face feature recognition model, training the initial relationship recognition model based on the training result to obtain a feature relationship recognition model, and generating a correction matrix based on the feature relationship recognition model;
the face feature recognition module is used for acquiring an image to be recognized and recognizing the image to be recognized through the face feature recognition model to obtain the face feature of the user in the image to be recognized;
and the health state identification module is used for correcting the facial features of the image to be identified based on the correction matrix and identifying the health state of the user according to the correction result.
9. A computer device comprising a memory having computer readable instructions stored therein and a processor, wherein the processor, when executing the computer readable instructions, implements the steps of the image-based health status recognition method of any one of claims 1 to 7.
10. A computer readable storage medium having computer readable instructions stored thereon which, when executed by a processor, implement the steps of the image-based health status recognition method according to any one of claims 1 to 7.
CN202011286362.3A 2020-11-17 2020-11-17 Image-based health state identification method, device, equipment and storage medium Active CN112395979B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202011286362.3A CN112395979B (en) 2020-11-17 2020-11-17 Image-based health state identification method, device, equipment and storage medium
PCT/CN2021/090417 WO2022105118A1 (en) 2020-11-17 2021-04-28 Image-based health status identification method and apparatus, device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011286362.3A CN112395979B (en) 2020-11-17 2020-11-17 Image-based health state identification method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN112395979A true CN112395979A (en) 2021-02-23
CN112395979B CN112395979B (en) 2024-05-10

Family

ID=74605823

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011286362.3A Active CN112395979B (en) 2020-11-17 2020-11-17 Image-based health state identification method, device, equipment and storage medium

Country Status (2)

Country Link
CN (1) CN112395979B (en)
WO (1) WO2022105118A1 (en)


Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115270993B (en) * 2022-08-23 2023-06-23 南通思诺船舶科技有限公司 Diesel engine set state detection method and system
CN117237800B (en) * 2023-08-01 2024-06-14 广州智在信息科技有限公司 Crop growth monitoring method based on artificial intelligence and computer equipment
CN117668528B (en) * 2024-02-01 2024-04-12 成都华泰数智科技有限公司 Natural gas voltage regulator fault detection method and system based on Internet of things


Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108509905B (en) * 2018-03-30 2020-06-19 百度在线网络技术(北京)有限公司 State evaluation device, electronic apparatus, and storage medium
CN109119163A (en) * 2018-07-16 2019-01-01 安徽昱康智能科技有限公司 Health detecting method and its system based on face recognition
CN109190535B (en) * 2018-08-23 2022-04-01 南京邮电大学 Face complexion analysis method and system based on deep learning
CN110729053B (en) * 2019-10-11 2023-02-03 深圳平安医疗健康科技服务有限公司 Data processing method, data processing device, computer equipment and storage medium
CN111612133B (en) * 2020-05-20 2021-10-19 广州华见智能科技有限公司 Internal organ feature coding method based on face image multi-stage relation learning
CN112395979B (en) * 2020-11-17 2024-05-10 平安科技(深圳)有限公司 Image-based health state identification method, device, equipment and storage medium

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180289334A1 (en) * 2017-04-05 2018-10-11 doc.ai incorporated Image-based system and method for predicting physiological parameters
CN107633203A (en) * 2017-08-17 2018-01-26 平安科技(深圳)有限公司 Facial emotions recognition methods, device and storage medium
WO2019245085A1 (en) * 2018-06-22 2019-12-26 Jun Ho Park Method, apparatus and medium for performing 3d model creation of people and identification of people via model matching
CN109492557A (en) * 2018-10-28 2019-03-19 北京工业大学 A kind of natural scene lower face expression recognition method based on the constraint of AU symbiosis
CN111227789A (en) * 2018-11-29 2020-06-05 百度在线网络技术(北京)有限公司 Human health monitoring method and device
CN109685011A (en) * 2018-12-25 2019-04-26 北京华航无线电测量研究所 A kind of underground utilities detection recognition method based on deep learning
CN111860362A (en) * 2020-07-24 2020-10-30 北京百度网讯科技有限公司 Method and device for generating human face image correction model and correcting human face image

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022105118A1 (en) * 2020-11-17 2022-05-27 平安科技(深圳)有限公司 Image-based health status identification method and apparatus, device and storage medium
CN112861788A (en) * 2021-03-10 2021-05-28 中电健康云科技有限公司 Method for judging health condition based on face color recognition technology
CN113158842A (en) * 2021-03-31 2021-07-23 中国工商银行股份有限公司 Identification method, system, device and medium
CN113361402A (en) * 2021-06-04 2021-09-07 北京百度网讯科技有限公司 Training method of recognition model, method, device and equipment for determining accuracy
CN113361402B (en) * 2021-06-04 2023-08-18 北京百度网讯科技有限公司 Training method of recognition model, method, device and equipment for determining accuracy
CN117031971A (en) * 2023-07-18 2023-11-10 东莞莱姆森科技建材有限公司 Intelligent furniture equipment adjusting method, device, equipment and medium based on intelligent mirror
CN117031971B (en) * 2023-07-18 2024-04-19 东莞莱姆森科技建材有限公司 Intelligent furniture equipment adjusting method, device, equipment and medium based on intelligent mirror

Also Published As

Publication number Publication date
CN112395979B (en) 2024-05-10
WO2022105118A1 (en) 2022-05-27

Similar Documents

Publication Publication Date Title
CN112395979B (en) Image-based health state identification method, device, equipment and storage medium
WO2021077984A1 (en) Object recognition method and apparatus, electronic device, and readable storage medium
US11487995B2 (en) Method and apparatus for determining image quality
CN112732911B (en) Semantic recognition-based speaking recommendation method, device, equipment and storage medium
CN111133453B (en) Artificial neural network
WO2020238353A1 (en) Data processing method and apparatus, storage medium, and electronic apparatus
CN112418292B (en) Image quality evaluation method, device, computer equipment and storage medium
CN112164002B (en) Training method and device of face correction model, electronic equipment and storage medium
CN111368672A (en) Construction method and device for genetic disease facial recognition model
CN110516734B (en) Image matching method, device, equipment and storage medium
CN112995414B (en) Behavior quality inspection method, device, equipment and storage medium based on voice call
CN114241459B (en) Driver identity verification method and device, computer equipment and storage medium
CN112418059A (en) Emotion recognition method and device, computer equipment and storage medium
CN113420690A (en) Vein identification method, device and equipment based on region of interest and storage medium
CN112668482B (en) Face recognition training method, device, computer equipment and storage medium
CN113254491A (en) Information recommendation method and device, computer equipment and storage medium
CN115050064A (en) Face living body detection method, device, equipment and medium
WO2023179429A1 (en) Video data processing method and apparatus, electronic device, and storage medium
CN112632244A (en) Man-machine conversation optimization method and device, computer equipment and storage medium
CN114282059A (en) Video retrieval method, device, equipment and storage medium
CN114611672A (en) Model training method, face recognition method and device
CN115510186A (en) Instant question and answer method, device, equipment and storage medium based on intention recognition
WO2022063076A1 (en) Adversarial example identification method and apparatus
CN114821736A (en) Multi-modal face recognition method, device, equipment and medium based on contrast learning
WO2023231753A1 (en) Neural network training method, data processing method, and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant