CN113643283A - Method, device, equipment and storage medium for detecting aging condition of human body

Method, device, equipment and storage medium for detecting aging condition of human body

Info

Publication number
CN113643283A
Authority
CN
China
Prior art keywords
facial feature; facial; feature; feature data; face
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111016533.5A
Other languages
Chinese (zh)
Inventor
黄祥博
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Ping An Medical Health Technology Service Co Ltd
Original Assignee
Ping An Medical and Healthcare Management Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Medical and Healthcare Management Co Ltd filed Critical Ping An Medical and Healthcare Management Co Ltd
Priority to CN202111016533.5A priority Critical patent/CN113643283A/en
Publication of CN113643283A publication Critical patent/CN113643283A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30196 Human being; Person
    • G06T2207/30201 Face

Abstract

The application discloses a method, a device, equipment and a storage medium for detecting the aging condition of a human body, and belongs to the technical field of artificial intelligence. The method comprises: performing facial feature recognition on facial images of healthy people to obtain first facial feature data; importing the first facial feature data into a trained facial feature extraction model to obtain a facial feature matrix of a target object; training a preset initial detection model based on the facial feature matrix of the target object to obtain an aging condition detection model; acquiring facial feature data of a user to be identified when an aging condition detection instruction is received; and inputting the facial feature data of the user to be identified into the aging condition detection model and outputting a facial aging condition detection result for the user to be identified. The application also relates to blockchain technology, in which the facial feature data of the user may be stored. The method and the device simplify the aging condition detection process, reduce the influence of subjective factors, and improve the accuracy of aging condition detection.

Description

Method, device, equipment and storage medium for detecting aging condition of human body
Technical Field
The application belongs to the technical field of artificial intelligence, and particularly relates to a method, a device, equipment and a storage medium for detecting human aging conditions.
Background
With the development of science and technology and the improvement of living standards, health has become a field of growing public concern. People try to slow the speed of aging through measures such as a reasonably balanced diet, a regular schedule, scientific physical exercise and health care products. Aging, however, is a natural law of the biological life cycle. From a medical point of view, the determining factor of aging is the length of cell telomeres: each time a cell divides to produce new cells, its telomeres shorten, until they reach a critical length at which the cell loses activity and dies. Telomeres therefore shorten as individual cells age, but outside the medical profession an individual can hardly track this aging process in real time. At the same time, many factors actually influence human aging, including the environment, personal living habits, emotional stress and individual physique.
Because people pursue health, they want to monitor their own condition in real time to learn their aging status and aging speed, so that they can effectively resist aging, delay it and reduce its speed. At present, however, the aging speed of a person can only be judged by expert experience or common knowledge: for example, going to a professional institution to have the aging speed measured with professional instruments, or comparing images of the same person taken a year apart to roughly judge the aging condition, where some people show an obvious facial aging change while others show only a small one. The existing human aging condition detection schemes therefore suffer from a complex detection process and detection results that are strongly influenced by subjective factors.
Disclosure of Invention
An object of the embodiments of the present application is to provide a method, an apparatus, a computer device and a storage medium for detecting a human aging status, so as to solve the technical problems that the existing human aging status detection scheme has a complex detection process and the detection result is greatly influenced by subjective factors.
In order to solve the above technical problems, an embodiment of the present application provides a method for detecting a human aging status, which adopts the following technical solutions:
a method of detecting a condition of aging in a human, comprising:
acquiring a first sample image from a preset database, wherein the first sample image is a face image of a healthy person;
performing face feature recognition on the target object in the first sample image to obtain first face feature data;
importing the first facial feature data into a trained facial feature extraction model to obtain a facial feature matrix of the target object;
training a preset initial detection model based on the facial feature matrix of the target object to obtain an aging condition detection model;
when an aging condition detection instruction is received, acquiring facial feature data of a user to be identified;
and inputting the facial feature data of the user to be identified into the aging condition detection model, and outputting the facial aging condition detection result of the user to be identified.
Further, the step of performing face feature recognition on the target object in the first sample image to obtain first face feature data specifically includes:
scanning the first sample image, and determining a face area of a target object in the first sample image;
performing region segmentation on the face region of the target object in the first sample image to obtain a region segmentation image;
and performing feature recognition on the region segmentation image to obtain the first face feature data.
Further, after the step of performing facial feature recognition on the target object in the first sample image to obtain a plurality of pieces of first facial feature data and before the step of importing the first facial feature data into a trained facial feature extraction model to obtain a facial feature matrix of the target object, the method further includes:
assigning an initial feature weight to each of the first facial feature data;
calculating the actual feature weight of each first face feature data based on a preset feature weight algorithm;
combining the actual feature weights of all the first face feature data based on a preset combination strategy to obtain a feature weight combination;
and importing the feature weight combination into the facial feature extraction model.
Further, the step of calculating the actual feature weight of each of the first facial feature data based on a preset feature weight algorithm specifically includes:
classifying the first facial feature data to which the initial weights have been assigned to obtain a plurality of feature data combinations;
calculating the similarity of the facial feature data in the feature data combination of the same category to obtain a first similarity;
calculating the similarity of facial feature data between different types of feature data combinations to obtain a second similarity;
and adjusting the initial weight of the first face feature data based on the first similarity and the second similarity to obtain the actual feature weight of each first face feature data.
Further, the step of importing the first facial feature data into a trained facial feature extraction model to obtain a facial feature matrix of the target object specifically includes:
performing a convolution operation on the first facial feature data to obtain an initial feature matrix;
and performing matrix splicing on the initial feature matrix based on the feature weight combination to obtain a facial feature matrix of the target object.
Further, before the step of assigning an initial feature weight to each of the first facial feature data, the method further includes:
acquiring a second sample image from a preset database, and labeling the second sample image to obtain a facial feature label of the second sample image;
performing face feature recognition on the target object in the second sample image to obtain second face feature data;
importing the second facial feature data into a preset initial facial feature extraction model to obtain an initial feature extraction result;
and comparing the initial feature extraction result with the facial feature label, and adjusting the initial facial feature extraction model based on the comparison result to obtain the trained facial feature extraction model.
Further, the step of training a preset initial detection model based on the facial feature matrix of the target object to obtain an aging condition detection model specifically includes:
importing the facial feature matrix of the target object into the initial detection model to obtain an initial feature detection result;
fitting by using a sequence back propagation algorithm based on the initial characteristic detection result and a preset standard aging condition label to obtain a prediction error;
and comparing the prediction error with a preset threshold, if the prediction error is larger than the preset threshold, iteratively updating the initial detection model until the prediction error is smaller than or equal to the preset threshold, and obtaining the aging condition detection model.
In order to solve the above technical problem, an embodiment of the present application further provides a device for detecting a human aging condition, which adopts the following technical solutions:
a device for detecting a state of aging of a human body, comprising:
the system comprises a first sample image acquisition module, a second sample image acquisition module and a third sample image acquisition module, wherein the first sample image acquisition module is used for acquiring a first sample image from a preset database, and the first sample image is a face image of a healthy person;
the first facial feature recognition module is used for carrying out facial feature recognition on the target object in the first sample image to obtain first facial feature data;
the facial feature matrix acquisition module is used for importing the first facial feature data into a trained facial feature extraction model to obtain a facial feature matrix of the target object;
the aging detection model training module is used for training a preset initial detection model based on the facial feature matrix of the target object to obtain an aging condition detection model;
the user facial feature data module is used for acquiring facial feature data of the user to be identified when the aging condition detection instruction is received;
and the facial aging condition detection module is used for inputting the facial feature data of the user to be identified into the aging condition detection model and outputting the facial aging condition detection result of the user to be identified.
In order to solve the above technical problem, an embodiment of the present application further provides a computer device, which adopts the following technical solutions:
a computer device comprising a memory having computer readable instructions stored therein and a processor that when executed implements the steps of the method of detecting a human aging condition as described above.
In order to solve the above technical problem, an embodiment of the present application further provides a computer-readable storage medium, which adopts the following technical solutions:
a computer readable storage medium having computer readable instructions stored thereon which, when executed by a processor, implement the steps of the method of detecting a human aging condition as described above.
Compared with the prior art, the embodiment of the application mainly has the following beneficial effects:
the application discloses a method, a device, equipment and a storage medium for detecting aging conditions of a human body, and belongs to the technical field of artificial intelligence. The method comprises the steps of constructing a facial feature extraction model for extracting facial feature information of a user, constructing an aging condition detection model for analyzing and predicting the current aging condition of the user, extracting the facial feature information of the user in real time through a facial image and the facial feature extraction model of the user when the aging condition needs to be detected, inputting the extracted facial feature information of the user into the aging condition detection model, analyzing the input facial feature information through the aging condition detection model, and obtaining a facial aging condition detection result and an aging condition prediction result of the user. The method and the device simplify the aging condition detection process, reduce the influence of subjective factors, and improve the accuracy of aging condition detection.
Drawings
In order to more clearly illustrate the solution of the present application, the drawings needed for describing the embodiments of the present application will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present application, and that other drawings can be obtained by those skilled in the art without inventive effort.
FIG. 1 illustrates an exemplary system architecture diagram in which the present application may be applied;
FIG. 2 illustrates a flow diagram of one embodiment of a method of detecting a human aging condition according to the present application;
FIG. 3 illustrates a schematic structural diagram of one embodiment of a human aging condition detection apparatus according to the present application;
FIG. 4 shows a schematic block diagram of one embodiment of a computer device according to the present application.
Detailed Description
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs; the terminology used in the description of the application herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application; the terms "including" and "having," and any variations thereof, in the description and claims of this application and the description of the above figures are intended to cover non-exclusive inclusions. The terms "first," "second," and the like in the description and claims of this application or in the above-described drawings are used for distinguishing between different objects and not for describing a particular order.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings.
As shown in fig. 1, the system architecture 100 may include terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 serves as a medium for providing communication links between the terminal devices 101, 102, 103 and the server 105. Network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
The user may use the terminal devices 101, 102, 103 to interact with the server 105 via the network 104 to receive or send messages or the like. The terminal devices 101, 102, 103 may have various communication client applications installed thereon, such as a web browser application, a shopping application, a search application, an instant messaging tool, a mailbox client, social platform software, and the like.
The terminal devices 101, 102, 103 may be various electronic devices having a display screen and supporting web browsing, including but not limited to smart phones, tablet computers, e-book readers, MP3 players (Moving Picture Experts Group Audio Layer III), MP4 players (Moving Picture Experts Group Audio Layer IV), laptop portable computers, desktop computers, and the like.
The server 105 may be a server that provides various services, for example, a background server that provides support for pages displayed on the terminal devices 101, 102, and 103, and may be an independent server, or a cloud server that provides basic cloud computing services such as a cloud service, a cloud database, cloud computing, a cloud function, cloud storage, a web service, cloud communication, a middleware service, a domain name service, a security service, a Content Delivery Network (CDN), and a big data and artificial intelligence platform.
It should be noted that the method for detecting the aging status of the human body provided by the embodiment of the present application is generally executed by a server, and accordingly, the device for detecting the aging status of the human body is generally disposed in the server/terminal device.
It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
With continued reference to fig. 2, a flow chart of one embodiment of a method of detection of a human aging condition in accordance with the present application is shown. The embodiment of the application can acquire and process the related data based on artificial intelligence technology. Artificial Intelligence (AI) is the theory, method, technology and application system that uses a digital computer, or a machine controlled by a digital computer, to simulate, extend and expand human intelligence, perceive the environment, acquire knowledge and use that knowledge to obtain the best results.
The artificial intelligence infrastructure generally includes technologies such as sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing technologies, operation/interaction systems, mechatronics, and the like. The artificial intelligence software technology mainly comprises a computer vision technology, a robot technology, a biological recognition technology, a voice processing technology, a natural language processing technology, machine learning/deep learning and the like. The method for detecting the aging condition of the human body comprises the following steps:
s201, acquiring a first sample image from a preset database, wherein the first sample image is a face image of a healthy person.
Specifically, the server acquires a first sample image from a preset database. The first sample image is a facial image of a healthy person and is used for training the aging condition detection model. In a specific embodiment of the present application, the first sample image is a facial image of a healthy person whose aging-degree deviation is less than the medical standard. It should be noted that when training the aging condition detection model, only facial images of healthy people are used as the training set, so the model is trained on positive samples only: the aging condition detection model only needs to remember the facial features of healthy people rather than those of unhealthy people, which simplifies the model structure and saves server resources.
S202, carrying out face feature recognition on the target object in the first sample image to obtain first face feature data.
Specifically, the server performs facial feature recognition on the target object in the first sample image through face recognition technology to obtain first facial feature data. The facial feature data include the facial contour curve, facial skin wrinkle depth, wrinkle density, pore size, skin gloss, skin color spot distribution, color spot color, color spot density and the like; all of this facial feature information can be obtained from a single facial image through face recognition technology.
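For illustration only, the following Python sketch shows one way step S202 might be realized. The patent does not name a detector, so OpenCV's Haar cascade face detector and the fixed-proportion sub-regions are assumptions, not the patent's specification.

```python
# Hypothetical sketch of step S202: locate the face area in a sample image and
# split it into sub-regions before feature recognition. OpenCV's Haar cascade
# is used purely as a stand-in detector.
import cv2

def segment_face_regions(image_path: str):
    image = cv2.imread(image_path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]                      # face area of the target object
    face = image[y:y + h, x:x + w]
    # Coarse region segmentation by fixed proportions (an assumption; the
    # patent segments along facial-feature and facial-contour lines).
    return {
        "forehead":     face[0:h // 3, :],
        "eyes":         face[h // 3:h // 2, :],
        "nose":         face[h // 2:2 * h // 3, w // 4:3 * w // 4],
        "cheeks_mouth": face[2 * h // 3:, :],
    }
```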
S203, importing the first facial feature data into a trained facial feature extraction model to obtain a facial feature matrix of the target object.
The facial feature extraction model is constructed based on a convolutional neural network (CNN). A CNN is a feedforward neural network that contains convolution calculations and has a deep structure, and it is one of the representative algorithms of deep learning. Convolutional neural networks have a representation learning capability and can perform shift-invariant classification of input information according to their hierarchical structure, so they are also called "shift-invariant artificial neural networks". The convolutional neural network imitates the biological visual perception mechanism and can perform both supervised and unsupervised learning; its effect is stable and it imposes no additional feature engineering requirements on the data. The parameter sharing of convolution kernels within a convolutional layer and the sparsity of inter-layer connections enable a convolutional neural network to learn grid-like topology features (such as pixels and audio) with a small amount of calculation.
Specifically, the server imports the first facial feature data into a trained facial feature extraction model, and performs convolution calculation on the first facial feature data to obtain a facial feature matrix of the target object.
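As a minimal sketch of the step S203 model, the following PyTorch code (the framework and all layer sizes are illustrative assumptions) shows the pattern described here: convolution layers produce an initial feature representation and a fully connected layer splices the local features into the facial feature matrix of the target object.

```python
# Illustrative facial feature extraction model: convolution -> initial
# features, fully connected layer -> spliced facial feature matrix.
import torch
import torch.nn as nn

class FacialFeatureExtractor(nn.Module):
    def __init__(self, feature_dim: int = 128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),
        )
        self.fc = nn.Linear(32 * 4 * 4, feature_dim)  # splice local features

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.conv(x)              # convolution -> initial feature maps
        return self.fc(h.flatten(1))  # global facial feature matrix

# Usage: features = FacialFeatureExtractor()(torch.randn(1, 3, 64, 64))
```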
And S204, training a preset initial detection model based on the facial feature matrix of the target object to obtain an aging condition detection model.
The aging condition detection model is constructed based on an RNN neural network model. A recurrent neural network (RNN) is a type of neural network that takes sequence data as input and recurses in the evolution direction of the sequence, with all nodes (recurrent units) connected in a chain. Recurrent neural networks have memory, parameter sharing and Turing completeness, and therefore have certain advantages in learning the nonlinear characteristics of a sequence. Recurrent neural networks are applied in natural language processing (NLP) fields such as speech recognition, language modeling and machine translation, and are also used for various kinds of time-series prediction. A recurrent neural network that incorporates a convolutional neural network (CNN) can handle computer vision problems with sequence input.
Specifically, the server inputs the facial feature matrix of the target object into a preset initial detection model and trains it based on the facial feature matrix to obtain the aging condition detection model. In another specific embodiment of the present application, the server may obtain the age and gender information of the target object in advance, feed the age and gender information into the RNN neural network model as one input path, and iteratively train the preset initial detection model in combination with the facial feature matrix of the target object to obtain an aging condition detection model capable of time-series prediction analysis of the facial aging condition from facial feature information.
In a specific embodiment of the application, a camera device such as a mobile phone camera collects personal facial images, including a front image and a side image. Face recognition is performed on the facial images input by the user, and age and gender information is recognized. Facial features of the user's face are then extracted through the CNN network; the extracted features include the depth of crow's feet at the eye corners, the depth of forehead wrinkles, the depth of cheek wrinkles, the gloss of the face and the like. The extracted age information, gender information and facial features are used as continuous input to the RNN network, which calculates the current user's degree of facial aging and predicts the user's facial aging condition as a time series. The result is then compared against the health indexes of healthy people of the same age and gender to generate facial aging deviation data for the user, and the current user's facial feature values are matched against the intervals of the general model features: if a value is not within the feature interval of the health model, the aging degree is deviated, and the deviation degree of each feature value is given.
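The following is a hedged PyTorch sketch of such an RNN-based detection model, with age and gender carried as a side input alongside the per-step facial feature matrix. The GRU cell, layer sizes and output head are assumptions, not the patent's specification.

```python
# Illustrative aging condition detection model: age and gender are appended
# to each step of the facial feature sequence; the RNN's output is read as a
# per-step facial aging degree (time-series prediction).
import torch
import torch.nn as nn

class AgingConditionModel(nn.Module):
    def __init__(self, feature_dim: int = 128, hidden_dim: int = 64):
        super().__init__()
        # +2 inputs for the age and gender side path
        self.rnn = nn.GRU(feature_dim + 2, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 1)   # facial aging degree

    def forward(self, feats, age, gender):
        # feats: (batch, seq_len, feature_dim); age, gender: (batch,)
        side = torch.stack([age, gender], dim=-1)            # (batch, 2)
        side = side.unsqueeze(1).expand(-1, feats.size(1), -1)
        out, _ = self.rnn(torch.cat([feats, side], dim=-1))
        return self.head(out)   # aging degree at every time step
```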
S205, when the aging status detection instruction is received, facial feature data of the user to be identified is acquired.
Specifically, when the server receives an aging condition detection instruction uploaded by the client, it uses the client's camera device to collect personal facial images of the user, including a front image and a side image, and acquires the facial feature data of the user to be identified by performing face recognition on the user's personal facial images based on the aging condition detection instruction.
In this embodiment, the electronic device (for example, the server shown in fig. 1) on which the method for detecting the aging condition of the human body operates may receive the aging condition detection instruction through a wired or wireless connection. It should be noted that the wireless connection may include, but is not limited to, a 3G/4G connection, a WiFi connection, a Bluetooth connection, a WiMAX connection, a Zigbee connection, a UWB (Ultra Wideband) connection, and other wireless connections now known or developed in the future.
S206, inputting the facial feature data of the user to be identified into the aging condition detection model, and outputting the facial aging condition detection result of the user to be identified.
Specifically, the server inputs the facial feature data of the user to be identified into the pre-trained aging condition detection model, and performs time-series prediction analysis on the input facial feature information through the hidden layer of the aging condition detection model to obtain the user's facial aging condition detection result and aging condition prediction result. The result is then compared against the health indexes of healthy people of the same age and gender to generate facial aging deviation data for the user; the current user's facial feature values are matched against the general model's feature ranges, and if a facial feature value of the current user is not within the health model's feature range, the aging degree is deviated and the deviation degree of each feature value is given.
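As a toy illustration of the deviation step, the sketch below matches a user's feature values against assumed healthy-population intervals and reports the deviation degree of each out-of-range feature. The interval values and feature names are placeholders, not values from the patent.

```python
# Assumed lookup for one age/gender group: feature -> (low, high) healthy range.
HEALTHY_INTERVALS = {
    "wrinkle_depth": (0.0, 0.35),
    "skin_gloss":    (0.40, 1.00),
}

def aging_deviation(user_features: dict) -> dict:
    deviations = {}
    for name, value in user_features.items():
        low, high = HEALTHY_INTERVALS[name]
        if value < low:
            deviations[name] = low - value    # deviation below the healthy range
        elif value > high:
            deviations[name] = value - high   # deviation above the healthy range
    return deviations   # empty dict -> all values within the health model

print(aging_deviation({"wrinkle_depth": 0.5, "skin_gloss": 0.6}))
# {'wrinkle_depth': 0.15}
```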
It should be noted that, because the initial model is trained on manually collected healthy-face data and the features extracted from it, the model's data volume is relatively limited and the relative deviation of prediction granularity and accuracy is relatively large. After users begin normal use, the facial data features of people predicted to be within the normal healthy range are fed back into the model's algorithm for continuous machine learning: the more people use it, the more healthy facial data indexes accumulate, and the model's precision is continuously improved.
In the above embodiment, a facial feature extraction model is constructed to extract the user's facial feature information, and an aging condition detection model is constructed to analyze and predict the user's current aging condition. When aging condition detection is required, the user's facial feature information is extracted in real time from the user's facial image through the facial feature extraction model; the extracted facial feature information is then input into the aging condition detection model, whose hidden layer performs time-series prediction analysis on the input facial feature information to obtain the user's facial aging condition detection result and aging condition prediction result. The method and the device simplify the aging condition detection process, reduce the influence of subjective factors, and improve the accuracy of aging condition detection.
Further, the step of performing face feature recognition on the target object in the first sample image to obtain first face feature data specifically includes:
scanning the first sample image, and determining a face area of a target object in the first sample image;
performing region segmentation on the face region of the target object in the first sample image to obtain a region segmentation image;
and performing feature recognition on the region segmentation image to obtain the first face feature data.
Specifically, the server performs a global scan of the first sample image to determine the face area of the target object in it, then performs region segmentation on that face area according to the facial features and facial contour lines to obtain segmented region images, such as an eye region image and a nose region image. Finally, feature recognition is performed on the segmented region images one by one to obtain the first facial feature data, which include the facial contour curve, facial skin wrinkle depth, wrinkle density, pore size, skin gloss, skin color spot distribution, color spot color, color spot density and the like; the facial feature data can be obtained through the trained facial feature extraction model.
In the above embodiment, the first sample image is subjected to face area detection and region segmentation to obtain region images of each part of the face, and the features of each region image are then extracted by the trained facial feature extraction model to obtain the first facial feature data.
Further, after the step of performing facial feature recognition on the target object in the first sample image to obtain a plurality of pieces of first facial feature data and before the step of importing the first facial feature data into a trained facial feature extraction model to obtain a facial feature matrix of the target object, the method further includes:
assigning an initial feature weight to each of the first facial feature data;
calculating the actual feature weight of each first face feature data based on a preset feature weight algorithm;
combining the actual feature weights of all the first face feature data based on a preset combination strategy to obtain a feature weight combination;
and importing the feature weight combination into the facial feature extraction model.
When facial features are acquired, the influence weights of the various facial feature data on the aging condition are not completely consistent. Therefore, to ensure the accuracy of the trained aging condition detection model, the corresponding weights of the facial feature data need to be calculated through a feature weight algorithm before the aging condition detection model is trained.
Specifically, the server performs facial feature recognition on the target object in the first sample image to obtain a plurality of pieces of first facial feature data. The server assigns an initial feature weight, for example "0.5", to each piece of first facial feature data, adjusts the initial feature weight of each piece based on a preset feature weight algorithm to obtain the actual feature weights, combines the actual feature weights of all the first facial feature data based on a preset combination strategy to obtain a feature weight combination, and imports the feature weight combination into the facial feature extraction model for training the aging condition detection model. In this way the influence weight of each kind of facial feature data on the aging condition is fully considered when the aging condition detection model performs detection, which ensures the accuracy of aging detection.
Further, the step of calculating the actual feature weight of each of the first facial feature data based on a preset feature weight algorithm specifically includes:
classifying the first facial feature data to which the initial weights have been assigned to obtain a plurality of feature data combinations;
calculating the similarity of the facial feature data in the feature data combination of the same category to obtain a first similarity;
calculating the similarity of facial feature data between different types of feature data combinations to obtain a second similarity;
and adjusting the initial weight of the first face feature data based on the first similarity and the second similarity to obtain the actual feature weight of each first face feature data.
The feature weight algorithm here is the Relief algorithm. It randomly selects a sample R from a feature data combination D, then finds the sample H in D nearest to R, called the Near Hit, and the sample M nearest to R in the other feature data combinations, called the Near Miss, and updates the weight of each feature according to the following rule: if the distance between R and the Near Hit on a certain feature is smaller than the distance between R and the Near Miss, the feature is beneficial for distinguishing nearest neighbors of the same class from those of different classes, and its weight is increased; conversely, if the distance between R and the Near Hit on a feature is greater than the distance between R and the Near Miss, the feature has a negative effect on that distinction, and its weight is reduced. The above process is repeated m times to finally obtain the average weight of each feature: the larger a feature's weight, the stronger its classification ability, and conversely the weaker. The running time of the Relief algorithm increases linearly with the number of sampling iterations m and the number of original features N, so its running efficiency is very high.
Specifically, the server classifies the first facial feature data to which the initial weights have been assigned to obtain a plurality of feature data combinations. In a specific embodiment of the present application, the first facial feature data may be classified according to the facial organs, for example into an eye feature data combination, a nose feature data combination and the like. Finally, the initial weight of the first facial feature data is adjusted based on the first similarity and the second similarity to obtain the actual feature weight of each piece of first facial feature data; for example, when the first similarity is greater than or equal to the second similarity, the weight of the corresponding first facial feature data is adjusted upwards to obtain its actual feature weight.
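A compact NumPy sketch of the Relief-style weight update summarized above may help; it is a binary-class form under assumed conventions (m sampling iterations, L1 distances, illustrative names), not the patent's exact procedure.

```python
# Relief feature weighting: reward features on which a sample is far from its
# nearest different-class neighbor and close to its nearest same-class one.
import numpy as np

def relief(X: np.ndarray, y: np.ndarray, m: int = 100, seed: int = 0) -> np.ndarray:
    rng = np.random.default_rng(seed)
    n_samples, n_features = X.shape
    w = np.zeros(n_features)
    for _ in range(m):
        i = rng.integers(n_samples)
        r = X[i]
        same = X[(y == y[i]) & (np.arange(n_samples) != i)]   # candidates for Near Hit
        other = X[y != y[i]]                                  # candidates for Near Miss
        if len(same) == 0 or len(other) == 0:
            continue
        near_hit = same[np.argmin(np.abs(same - r).sum(axis=1))]
        near_miss = other[np.argmin(np.abs(other - r).sum(axis=1))]
        # increase weight when the feature separates classes, decrease otherwise
        w += np.abs(r - near_miss) - np.abs(r - near_hit)
    return w / m   # average weight; larger -> stronger classification ability
```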
In the above embodiment, it is taken into account that when facial features are acquired, the influence weights of the various facial feature data on the aging condition are not completely consistent. The application therefore calculates the actual feature weight of each piece of first facial feature data through a preset feature weight algorithm, so that the trained aging condition detection model can fully consider the influence weight of each kind of facial feature data on the aging condition, ensuring the accuracy of aging detection.
Further, the step of importing the first facial feature data into a trained facial feature extraction model to obtain a facial feature matrix of the target object specifically includes:
performing a convolution operation on the first facial feature data to obtain an initial feature matrix;
and performing matrix splicing on the initial feature matrix based on the feature weight combination to obtain a facial feature matrix of the target object.
Specifically, the facial feature extraction model comprises a convolutional layer and a fully connected layer: the convolutional layer extracts features from each piece of feature data, and the fully connected layer is responsible for combining all the local features into a global feature. The server performs a convolution operation on the first facial feature data in the convolutional layer to obtain an initial feature matrix, and performs matrix splicing on the initial feature matrix in the fully connected layer based on the feature weight combination to obtain the facial feature matrix of the target object.
In the embodiment of the application, the convolutional layer of the CNN convolutional neural network model contains a preset convolution kernel; by importing the facial feature data into the convolutional layer, the convolutional layer can perform the convolution operation with the preset convolution kernel to obtain the initial feature matrix, after which matrix splicing is performed on the initial feature matrix in the fully connected layer based on the feature weight combination to obtain the facial feature matrix of the target object.
It should be noted that, in the convolution calculation, for an m × n matrix and taking 1-dimensional convolution as an example, an x × n convolution kernel is constructed and slid over the original matrix. For example, if m is 5 and x is 1, the convolution kernel slides from top to bottom: it is first multiplied element-wise with the n-dimensional vector in the first row and the products are summed to obtain one value, and it then continues sliding down to perform the same convolution operation with rows 2, 3 and so on, finally yielding a 5 × 1 matrix as the convolution result.
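The sliding example in the previous paragraph can be reproduced numerically. The NumPy snippet below (values chosen only for illustration) builds a 5 × n matrix and a 1 × n kernel and obtains the 5 × 1 convolution result described above.

```python
# Worked version of the m = 5, x = 1 example: each row of the matrix is
# multiplied element-wise with the kernel and summed, giving a 5 x 1 result.
import numpy as np

m, n = 5, 3
matrix = np.arange(m * n).reshape(m, n)   # the m x n input matrix
kernel = np.ones((1, n))                  # the x x n (here 1 x n) kernel
result = (matrix * kernel).sum(axis=1, keepdims=True)
print(result.shape)   # (5, 1) -> the 5 x 1 convolution result
```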
In this embodiment, a facial feature extraction model is constructed with a CNN convolutional neural network: a convolution operation is performed on the first facial feature data through the convolutional layer of the facial feature extraction model to obtain features of each dimension of the facial region, and the features of each dimension are spliced through the fully connected layer of the facial feature extraction model to obtain a feature matrix that can completely represent the user's facial feature information.
Further, before the step of assigning an initial feature weight to each of the first facial feature data, the method further includes:
acquiring a second sample image from a preset database, and labeling the second sample image to obtain a facial feature label of the second sample image;
performing face feature recognition on the target object in the second sample image to obtain second face feature data;
importing the second facial feature data into a preset initial facial feature extraction model to obtain an initial feature extraction result;
and comparing the initial feature extraction result with the facial feature label, and adjusting the initial facial feature extraction model based on the comparison result to obtain the trained facial feature extraction model.
Specifically, the server obtains a second sample image from a preset database; the second sample image may be a facial image under any health condition and is used for training the facial feature extraction model. The server labels the second sample image to obtain its facial feature label, performs facial feature recognition on the target object in the second sample image to obtain second facial feature data, and imports the second facial feature data into a preset initial facial feature extraction model to obtain an initial feature extraction result, which is the feature prediction label output by the initial facial feature extraction model for the second sample image. Finally, the server compares the initial feature extraction result with the facial feature label and adjusts the initial facial feature extraction model with a back-propagation algorithm based on the comparison result to obtain the trained facial feature extraction model.
The back-propagation algorithm (BP algorithm) is a learning algorithm suitable for multi-layer neuron networks; it is built on the gradient descent method and is used for error calculation in deep learning networks. The input-output relationship of a BP network is essentially a mapping: an n-input, m-output BP neural network performs a continuous mapping from n-dimensional Euclidean space to a finite field in m-dimensional Euclidean space, and this mapping is highly nonlinear. The learning process of the BP algorithm consists of a forward propagation process and a backward propagation process. In forward propagation, the input information passes from the input layer through the hidden layers, is processed layer by layer and is transmitted to the output layer; in backward propagation, the partial derivatives of the objective function with respect to each neuron's weights are calculated layer by layer, forming the gradient of the objective function with respect to the weight vector, which serves as the basis for modifying the weights.
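For illustration, a minimal PyTorch training loop corresponding to this compare-and-adjust step might look as follows. The model architecture, data and loss function are placeholders; only the pattern of comparing predictions against labels and adjusting by back-propagation reflects the step above.

```python
# Sketch of BP training: compare the feature extraction result with the
# facial feature label and adjust the model along the negative gradient.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
criterion = nn.MSELoss()

features = torch.randn(32, 128)   # second facial feature data (placeholder)
labels = torch.randn(32, 10)      # facial feature labels (placeholder)

for epoch in range(100):
    prediction = model(features)          # initial feature extraction result
    loss = criterion(prediction, labels)  # comparison with the labels
    optimizer.zero_grad()
    loss.backward()                       # gradients computed layer by layer (BP)
    optimizer.step()                      # weights modified along -gradient
```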
In the above embodiment, the server obtains the second sample image, performs face feature recognition on the obtained second sample image to obtain second facial feature data, and trains a model capable of implementing face feature recognition and extraction through the second facial feature data.
Further, the step of training a preset initial detection model based on the facial feature matrix of the target object to obtain an aging condition detection model specifically includes:
importing the facial feature matrix of the target object into the initial detection model to obtain an initial feature detection result;
fitting by using a sequence back propagation algorithm based on the initial characteristic detection result and a preset standard aging condition label to obtain a prediction error;
and comparing the prediction error with a preset threshold, if the prediction error is larger than the preset threshold, iteratively updating the initial detection model until the prediction error is smaller than or equal to the preset threshold, and obtaining the aging condition detection model.
Specifically, the server imports the facial feature matrix of the target object into the initial detection model, where the RNN calculates the facial aging degree of the target object and generates an initial feature detection result. The server then fits, using a sequence back-propagation algorithm, the initial feature detection result against the preset standard facial aging condition detection result to obtain a prediction error, compares the prediction error with a preset threshold and, if the prediction error is greater than the preset threshold, iteratively updates the initial detection model until the prediction error is less than or equal to the preset threshold, thereby obtaining the aging condition detection model. The preset standard facial aging condition detection result is the labeling result of the first sample image according to the medical standard.
The sequence back-propagation algorithm, i.e. back-propagation through time (BPTT), is a commonly used method for training an RNN. It is in fact the BP algorithm, but because the RNN processes time-series data, the back propagation must be carried out over time. The central idea of BPTT is the same as that of the BP algorithm: keep searching for better points along the direction of the negative gradient of the parameters to be optimized until convergence.
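The sketch below illustrates this threshold-driven loop in PyTorch (an assumption): computing the loss over an RNN's unrolled sequence output and calling backward() performs BPTT, and updates continue until the prediction error drops to the preset threshold. Model sizes, data, threshold and the iteration cap are placeholders.

```python
# Iterative training of the initial detection model until the prediction
# error is <= the preset threshold.
import torch
import torch.nn as nn

rnn = nn.GRU(input_size=128, hidden_size=64, batch_first=True)
head = nn.Linear(64, 1)
optimizer = torch.optim.Adam(list(rnn.parameters()) + list(head.parameters()))
criterion = nn.MSELoss()

feature_matrix = torch.randn(16, 10, 128)  # facial feature matrices (placeholder)
standard_label = torch.randn(16, 10, 1)    # preset standard aging labels (placeholder)
threshold = 0.05                           # preset threshold (placeholder)

for step in range(10_000):                 # iteration cap for the sketch
    out, _ = rnn(feature_matrix)
    error = criterion(head(out), standard_label)   # prediction error
    if error.item() <= threshold:                  # stop once error <= threshold
        break
    optimizer.zero_grad()
    error.backward()    # back-propagation through time over the sequence
    optimizer.step()    # iteratively update the initial detection model
```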
The application discloses a method for detecting the aging condition of a human body, and belongs to the technical field of artificial intelligence. A facial feature extraction model is built with a CNN convolutional neural network to extract the user's facial feature information, and an aging condition detection model is built with an RNN neural network to analyze and predict the user's current aging condition. When the aging condition needs to be detected, the user's facial feature information is extracted in real time from the user's facial image through the facial feature extraction model, the extracted facial feature information is input into the aging condition detection model, and time-series prediction analysis is performed on the input facial feature information through the hidden layer of the aging condition detection model to obtain the user's facial aging condition detection result and aging condition prediction result. The method and the device simplify the aging condition detection process, reduce the influence of subjective factors, and improve the accuracy of aging condition detection.
It is emphasized that, in order to further ensure the privacy and security of the facial feature data of the user, the facial feature data of the user may also be stored in a node of a block chain.
The blockchain referred to in this application is a novel application mode of computer technologies such as distributed data storage, point-to-point transmission, consensus mechanisms and encryption algorithms. A blockchain is essentially a decentralized database: a series of data blocks associated with each other by cryptographic methods, each data block containing the information of a batch of network transactions, used to verify the validity (anti-counterfeiting) of the information and to generate the next block. A blockchain may include a blockchain underlying platform, a platform product service layer, an application service layer and the like.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by hardware associated with computer readable instructions, which can be stored in a computer readable storage medium, and when executed, can include processes of the embodiments of the methods described above. The storage medium may be a non-volatile storage medium such as a magnetic disk, an optical disk, a Read-Only Memory (ROM), or a Random Access Memory (RAM).
It should be understood that although the steps in the flowcharts of the figures are shown in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated herein, the execution of these steps is not strictly limited to that order, and they may be performed in other orders. Moreover, at least some of the steps in the flowcharts may comprise multiple sub-steps or stages, which are not necessarily completed at the same moment but may be executed at different times, and whose execution order is not necessarily sequential: they may be executed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.
With further reference to fig. 3, as an implementation of the method shown in fig. 2, the present application provides an embodiment of a device for detecting aging status of a human body, which corresponds to the embodiment of the method shown in fig. 2, and which can be applied to various electronic devices.
As shown in fig. 3, the apparatus for detecting aging status of a human body according to the present embodiment includes:
a first sample image obtaining module 301, configured to obtain a first sample image from a preset database, where the first sample image is a face image of a healthy person;
a first facial feature recognition module 302, configured to perform facial feature recognition on the target object in the first sample image to obtain first facial feature data;
a facial feature matrix obtaining module 303, configured to import the first facial feature data into a trained facial feature extraction model to obtain a facial feature matrix of the target object;
a aging detection model training module 304, configured to train a preset initial detection model based on the facial feature matrix of the target object to obtain an aging status detection model;
a user facial feature data module 305 for acquiring facial feature data of a user to be identified when the aging condition detection instruction is received;
the facial aging condition detection module 306 is used for inputting the facial feature data of the user to be identified into the aging condition detection model and outputting the facial aging condition detection result of the user to be identified.
Further, the first facial feature recognition module 302 specifically includes:
a sample image scanning unit, configured to scan the first sample image and determine a face area of a target object in the first sample image;
an image region segmentation unit, configured to perform region segmentation on a face region of a target object in the first sample image to obtain a region segmentation image;
and the first facial feature recognition unit is used for performing feature recognition on the region segmentation image to obtain the first facial feature data.
Further, the device for detecting aging status of human body further comprises:
an initial feature weight assignment module, configured to assign an initial feature weight to each of the first facial feature data;
the actual characteristic weight calculation module is used for calculating the actual characteristic weight of each piece of first face characteristic data based on a preset characteristic weight algorithm;
the actual characteristic weight combination module is used for combining the actual characteristic weights of all the first face characteristic data based on a preset combination strategy to obtain a characteristic weight combination;
and the characteristic weight combination importing module is used for importing the characteristic weight combination into the facial characteristic extraction model.
Further, the actual feature weight calculation module specifically includes:
the feature data classification unit is used for classifying the first facial feature data to which the initial weights have been assigned to obtain a plurality of feature data combinations;
the first similarity calculation unit is used for calculating the similarity of the face feature data in the feature data combination of the same category to obtain a first similarity;
the second similarity calculation unit is used for calculating the similarity of the facial feature data between different types of feature data combinations to obtain a second similarity;
and the initial weight adjusting unit is used for adjusting the initial weight of the first face feature data based on the first similarity and the second similarity to obtain the actual feature weight of each first face feature data.
Further, the facial feature matrix obtaining module 303 specifically includes:
the convolution operation unit is used for performing a convolution operation on the first facial feature data to obtain an initial feature matrix;
and the matrix splicing unit is used for carrying out matrix splicing on the initial feature matrix based on the feature weight combination to obtain the facial feature matrix of the target object.
Further, the device for detecting the aging condition of the human body further comprises:
the second sample image acquisition module is used for acquiring a second sample image from a preset database, and labeling the second sample image to obtain a facial feature label of the second sample image;
the second facial feature recognition module is used for carrying out facial feature recognition on the target object in the second sample image to obtain second facial feature data;
the feature extraction model training module is used for importing the second facial feature data into a preset initial facial feature extraction model to obtain an initial feature extraction result;
and the feature extraction model iteration module is used for comparing the initial feature extraction result with the facial feature label and adjusting the initial facial feature extraction model based on the comparison result to obtain the trained facial feature extraction model.
Further, the aging detection model training module 304 specifically includes:
the aging detection model training unit is used for importing the facial feature matrix of the target object into the initial detection model to obtain an initial feature detection result;
the sequence back propagation fitting unit is used for performing fitting with a sequential back propagation algorithm based on the initial feature detection result and a preset standard aging condition label to obtain a prediction error;
and the aging detection model iteration unit is used for comparing the prediction error with a preset threshold and, if the prediction error is larger than the preset threshold, iteratively updating the initial detection model until the prediction error is smaller than or equal to the preset threshold, thereby obtaining the aging condition detection model.
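The iterative rule described by these units can be sketched as follows. This is an illustrative reading only: the detection model is assumed to be a plain RNN, "sequence back propagation" is interpreted as backpropagation through time, and mean-squared error stands in for the unspecified prediction-error measure.

```python
# A sketch of threshold-driven iterative training, assuming an RNN detection
# model and MSE as the prediction error; labels are placeholders.
import torch
import torch.nn as nn

class AgingRNN(nn.Module):
    def __init__(self, feat_dim, hidden_dim=64):
        super().__init__()
        self.rnn = nn.RNN(feat_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 1)         # aging score

    def forward(self, x):                             # x: (batch, steps, feat_dim)
        out, _ = self.rnn(x)
        return self.head(out[:, -1])                  # prediction at last step

def train_until_threshold(model, x, labels, threshold=0.01,
                          lr=1e-3, max_iters=10000):
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(max_iters):
        pred = model(x)
        loss = loss_fn(pred, labels)                  # prediction error
        if loss.item() <= threshold:                  # compare with threshold
            break
        optimizer.zero_grad()
        loss.backward()                               # backprop through time
        optimizer.step()
    return model
```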
The application discloses a device for detecting the aging condition of a human body, belonging to the field of artificial intelligence. A facial feature extraction model is built with a convolutional neural network (CNN) to extract the user's facial feature information, and an aging condition detection model is built with a recurrent neural network (RNN) to analyze and predict the user's current aging condition. When aging detection is needed, the user's facial feature information is extracted in real time from a facial image through the facial feature extraction model; the extracted facial feature information is then input into the aging condition detection model, whose hidden layer performs time-series predictive analysis to produce a facial aging condition detection result and an aging condition prediction result for the user. The scheme simplifies the aging condition detection process, reduces the influence of subjective factors, and improves the accuracy of aging condition detection.
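Tying the sketches above together, an end-to-end inference pass could look like the following; `to_tensor` is a hypothetical preprocessing helper, and the single-step "sequence" reflects that only one facial image is available at detection time.

```python
# An end-to-end inference sketch; segment_face_regions and
# build_facial_feature_matrix come from the sketches above, and the
# detector is assumed to be a trained AgingRNN.
import torch

def to_tensor(region_bgr):
    """Hypothetical helper: HWC uint8 BGR array -> (1, 3, H, W) float tensor."""
    t = torch.from_numpy(region_bgr).float().permute(2, 0, 1) / 255.0
    return t.unsqueeze(0)

def detect_aging(image_path, detector, weight_combination):
    regions = segment_face_regions(image_path)        # CNN input regions
    tensors = {k: to_tensor(v) for k, v in regions.items()}
    matrix = build_facial_feature_matrix(tensors, weight_combination)
    sequence = matrix.unsqueeze(1)                    # (batch, 1 step, features)
    return detector(sequence)                         # aging detection result
```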
In order to solve the above technical problem, an embodiment of the present application further provides a computer device. Referring to FIG. 4, FIG. 4 is a block diagram of the basic structure of the computer device according to this embodiment.
The computer device 4 comprises a memory 41, a processor 42, and a network interface 43, which are communicatively connected to one another via a system bus. It should be noted that only a computer device 4 having components 41-43 is shown; it will be understood that not all of the illustrated components must be implemented, and more or fewer components may be implemented instead. As will be understood by those skilled in the art, the computer device here is a device capable of automatically performing numerical calculation and/or information processing according to preset or stored instructions, and its hardware includes, but is not limited to, a microprocessor, an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA), a Digital Signal Processor (DSP), an embedded device, and the like.
The computer device may be a desktop computer, a notebook computer, a palmtop computer, a cloud server, or another computing device. It can interact with a user through a keyboard, a mouse, a remote control, a touch panel, a voice-control device, or the like.
The memory 41 includes at least one type of readable storage medium, such as a flash memory, a hard disk, a multimedia card, a card-type memory (e.g., SD or DX memory), a Random Access Memory (RAM), a Static Random Access Memory (SRAM), a Read-Only Memory (ROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a Programmable Read-Only Memory (PROM), a magnetic memory, a magnetic disk, or an optical disk. In some embodiments, the memory 41 may be an internal storage unit of the computer device 4, such as its hard disk or internal memory. In other embodiments, the memory 41 may also be an external storage device of the computer device 4, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a Flash Card provided on the computer device 4. Of course, the memory 41 may also include both the internal storage unit and an external storage device of the computer device 4. In this embodiment, the memory 41 is generally used for storing the operating system installed on the computer device 4 and various types of application software, such as the computer readable instructions of the method for detecting the aging condition of the human body. The memory 41 may also be used to temporarily store various types of data that have been output or are to be output.
The processor 42 may, in some embodiments, be a Central Processing Unit (CPU), a controller, a microcontroller, a microprocessor, or another data-processing chip. The processor 42 is typically used to control the overall operation of the computer device 4. In this embodiment, the processor 42 is configured to run the computer readable instructions stored in the memory 41 or to process data, for example to execute the computer readable instructions of the method for detecting the aging condition of the human body.
The network interface 43 may comprise a wireless network interface or a wired network interface, and the network interface 43 is generally used for establishing communication connection between the computer device 4 and other electronic devices.
The application further discloses a computer device, likewise in the field of artificial intelligence, which applies the CNN-based facial feature extraction model and the RNN-based aging condition detection model described above; it therefore shares the same advantages of a simplified detection process, reduced influence of subjective factors, and improved detection accuracy.
The present application further provides another embodiment, namely a computer-readable storage medium storing computer-readable instructions which can be executed by at least one processor to cause the at least one processor to perform the steps of the method for detecting the aging condition of the human body described above.
The application further discloses a storage medium, likewise in the field of artificial intelligence, storing instructions that implement the same CNN-based feature extraction and RNN-based aging detection scheme described above, with the same advantages.
Through the description of the above embodiments, those skilled in the art will clearly understand that the methods of the above embodiments can be implemented by software together with a necessary general-purpose hardware platform, or alternatively by hardware alone, although in many cases the former is the preferred implementation. Based on this understanding, the technical solution of the present application may be embodied in the form of a software product stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disk) and including instructions for causing a terminal device (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the methods described in the embodiments of the present application.
The application is operational with numerous general purpose or special purpose computing system environments or configurations. For example: personal computers, server computers, hand-held or portable devices, tablet-type devices, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like. The application may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The application may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
It should be understood that the above embodiments are merely illustrative and not restrictive, and that the accompanying drawings show preferred embodiments without limiting the scope of the patent. The present application may be embodied in many different forms; these embodiments are provided so that the disclosure will be thorough and complete. Although the present application has been described in detail with reference to the foregoing embodiments, those skilled in the art may still modify the technical solutions recorded in the foregoing embodiments or make equivalent substitutions for some of their features. Any equivalent structure made using the contents of the specification and drawings of the present application, applied directly or indirectly in other related technical fields, likewise falls within the scope of protection of the present application.

Claims (10)

1. A method for detecting the aging condition of a human body, comprising:
acquiring a first sample image from a preset database, wherein the first sample image is a face image of a healthy person;
performing facial feature recognition on the target object in the first sample image to obtain first facial feature data;
importing the first facial feature data into a trained facial feature extraction model to obtain a facial feature matrix of the target object;
training a preset initial detection model based on the facial feature matrix of the target object to obtain an aging condition detection model;
when an aging condition detection instruction is received, acquiring facial feature data of a user to be identified;
and inputting the facial feature data of the user to be identified into the aging condition detection model, and outputting the facial aging condition detection result of the user to be identified.
2. The method for detecting the aging condition of the human body as claimed in claim 1, wherein the step of performing facial feature recognition on the target object in the first sample image to obtain the first facial feature data specifically comprises:
scanning the first sample image, and determining a face area of a target object in the first sample image;
performing region segmentation on the face region of the target object in the first sample image to obtain a region segmentation image;
and performing feature recognition on the region segmentation image to obtain the first face feature data.
3. The method for detecting the aging condition of the human body as claimed in claim 1, wherein, after the step of performing facial feature recognition on the target object in the first sample image to obtain a plurality of pieces of first facial feature data, and before the step of importing the first facial feature data into the trained facial feature extraction model to obtain the facial feature matrix of the target object, the method further comprises:
assigning an initial feature weight to each of the first facial feature data;
calculating the actual feature weight of each piece of first facial feature data based on a preset feature weight algorithm;
combining the actual feature weights of all the first facial feature data based on a preset combination strategy to obtain a feature weight combination;
and importing the feature weight combination into the facial feature extraction model.
4. The method for detecting the aging condition of the human body as claimed in claim 3, wherein the step of calculating the actual feature weight of each piece of first facial feature data based on a preset feature weight algorithm specifically comprises:
classifying the first facial feature data to which the initial weights have been assigned, to obtain a plurality of feature data combinations;
calculating the similarity of the facial feature data in the feature data combination of the same category to obtain a first similarity;
calculating the similarity of facial feature data between different types of feature data combinations to obtain a second similarity;
and adjusting the initial weights of the first facial feature data based on the first similarity and the second similarity to obtain the actual feature weight of each piece of first facial feature data.
5. The method for detecting the aging condition of the human body as claimed in claim 3, wherein the step of importing the first facial feature data into a trained facial feature extraction model to obtain the facial feature matrix of the target object specifically comprises:
performing a convolution operation on the first facial feature data to obtain an initial feature matrix;
and performing matrix splicing on the initial feature matrix based on the feature weight combination to obtain a facial feature matrix of the target object.
6. The method for detecting the aging condition of the human body as claimed in claim 3, further comprising, before the step of assigning an initial feature weight to each of the first facial feature data:
acquiring a second sample image from a preset database, and labeling the second sample image to obtain a facial feature label of the second sample image;
performing face feature recognition on the target object in the second sample image to obtain second face feature data;
importing the second facial feature data into a preset initial facial feature extraction model to obtain an initial feature extraction result;
and comparing the initial feature extraction result with the facial feature label, and adjusting the initial facial feature extraction model based on the comparison result to obtain the trained facial feature extraction model.
7. The method for detecting the aging condition of the human body as claimed in any one of claims 1 to 6, wherein the step of training a preset initial detection model based on the facial feature matrix of the target object to obtain an aging condition detection model specifically comprises:
importing the facial feature matrix of the target object into the initial detection model to obtain an initial feature detection result;
performing fitting with a sequential back propagation algorithm based on the initial feature detection result and a preset standard aging condition label to obtain a prediction error;
and comparing the prediction error with a preset threshold, if the prediction error is larger than the preset threshold, iteratively updating the initial detection model until the prediction error is smaller than or equal to the preset threshold, and obtaining the aging condition detection model.
8. A device for detecting the aging condition of a human body, comprising:
the system comprises a first sample image acquisition module, a second sample image acquisition module and a third sample image acquisition module, wherein the first sample image acquisition module is used for acquiring a first sample image from a preset database, and the first sample image is a face image of a healthy person;
the first facial feature recognition module is used for carrying out facial feature recognition on the target object in the first sample image to obtain first facial feature data;
the facial feature matrix acquisition module is used for importing the first facial feature data into a trained facial feature extraction model to obtain a facial feature matrix of the target object;
the aging detection model training module is used for training a preset initial detection model based on the facial feature matrix of the target object to obtain an aging condition detection model;
the user facial feature data module is used for acquiring facial feature data of the user to be identified when the aging condition detection instruction is received;
and the facial aging condition detection module is used for inputting the facial feature data of the user to be identified into the aging condition detection model and outputting the facial aging condition detection result of the user to be identified.
9. A computer device, comprising a memory and a processor, wherein the memory stores computer readable instructions which, when executed by the processor, implement the steps of the method for detecting the aging condition of the human body according to any one of claims 1 to 7.
10. A computer-readable storage medium, having computer-readable instructions stored thereon, which, when executed by a processor, implement the steps of the method of detecting a human aging condition as claimed in any one of claims 1 to 7.
CN202111016533.5A 2021-08-31 2021-08-31 Method, device, equipment and storage medium for detecting aging condition of human body Pending CN113643283A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111016533.5A CN113643283A (en) 2021-08-31 2021-08-31 Method, device, equipment and storage medium for detecting aging condition of human body

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111016533.5A CN113643283A (en) 2021-08-31 2021-08-31 Method, device, equipment and storage medium for detecting aging condition of human body

Publications (1)

Publication Number Publication Date
CN113643283A true CN113643283A (en) 2021-11-12

Family

ID=78424675

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111016533.5A Pending CN113643283A (en) 2021-08-31 2021-08-31 Method, device, equipment and storage medium for detecting aging condition of human body

Country Status (1)

Country Link
CN (1) CN113643283A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115661052A (en) * 2022-10-13 2023-01-31 高峰医疗器械(无锡)有限公司 Alveolar bone detection method, alveolar bone detection device, alveolar bone detection equipment and storage medium
CN115661052B (en) * 2022-10-13 2023-09-12 高峰医疗器械(无锡)有限公司 Alveolar bone detection method, device, equipment and storage medium

Similar Documents

Publication Publication Date Title
US20210012198A1 (en) Method for training deep neural network and apparatus
JP2021532499A (en) Machine learning-based medical data classification methods, devices, computer devices and storage media
US11704500B2 (en) Techniques to add smart device information to machine learning for increased context
CN111783902B (en) Data augmentation, service processing method, device, computer equipment and storage medium
WO2022105118A1 (en) Image-based health status identification method and apparatus, device and storage medium
CN111401339B (en) Method and device for identifying age of person in face image and electronic equipment
CN110705428B (en) Facial age recognition system and method based on impulse neural network
CN112418059B (en) Emotion recognition method and device, computer equipment and storage medium
CN112418292A (en) Image quality evaluation method and device, computer equipment and storage medium
CN113254491A (en) Information recommendation method and device, computer equipment and storage medium
CN113722474A (en) Text classification method, device, equipment and storage medium
CN113705534A (en) Behavior prediction method, behavior prediction device, behavior prediction equipment and storage medium based on deep vision
CN111401105A (en) Video expression recognition method, device and equipment
CN112529149A (en) Data processing method and related device
CN115879508A (en) Data processing method and related device
CN115099326A (en) Behavior prediction method, behavior prediction device, behavior prediction equipment and storage medium based on artificial intelligence
CN113723077B (en) Sentence vector generation method and device based on bidirectional characterization model and computer equipment
CN113435335B (en) Microscopic expression recognition method and device, electronic equipment and storage medium
CN114241459A (en) Driver identity verification method and device, computer equipment and storage medium
CN112995414B (en) Behavior quality inspection method, device, equipment and storage medium based on voice call
CN113793256A (en) Animation character generation method, device, equipment and medium based on user label
CN113643283A (en) Method, device, equipment and storage medium for detecting aging condition of human body
CN115860835A (en) Advertisement recommendation method, device and equipment based on artificial intelligence and storage medium
CN115392361A (en) Intelligent sorting method and device, computer equipment and storage medium
CN111582404B (en) Content classification method, device and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20220525

Address after: 518000 China Aviation Center 2901, No. 1018, Huafu Road, Huahang community, Huaqiang North Street, Futian District, Shenzhen, Guangdong Province

Applicant after: Shenzhen Ping An medical and Health Technology Service Co.,Ltd.

Address before: Room 12G, Area H, 666 Beijing East Road, Huangpu District, Shanghai 200001

Applicant before: PING AN MEDICAL AND HEALTHCARE MANAGEMENT Co.,Ltd.