CN114842972A - Method, device, electronic equipment and medium for determining user state - Google Patents

Method, device, electronic equipment and medium for determining user state

Info

Publication number
CN114842972A
CN114842972A
Authority
CN
China
Prior art keywords
user
state
organ
detected
target user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210414369.1A
Other languages
Chinese (zh)
Inventor
许亮亮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An International Smart City Technology Co Ltd
Original Assignee
Ping An International Smart City Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An International Smart City Technology Co Ltd filed Critical Ping An International Smart City Technology Co Ltd
Priority to CN202210414369.1A priority Critical patent/CN114842972A/en
Publication of CN114842972A publication Critical patent/CN114842972A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/30 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for calculating health indices; for individual health risk assessment
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/48 Other medical applications
    • A61B5/4803 Speech analysis specially adapted for diagnostic purposes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/243 Classification techniques relating to the number of classes
    • G06F18/24323 Tree-organised classifiers
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0012 Biomedical image inspection
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/20 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10004 Still image; Photographic image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30088 Skin; Dermal
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30196 Human being; Person
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30196 Human being; Person
    • G06T2207/30201 Face

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Public Health (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Pathology (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Primary Health Care (AREA)
  • General Physics & Mathematics (AREA)
  • Epidemiology (AREA)
  • Databases & Information Systems (AREA)
  • Surgery (AREA)
  • Veterinary Medicine (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Animal Behavior & Ethology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The application discloses a method, a device, electronic equipment, and a medium for determining a user state. In the application, a user human body image captured by a target user using a camera device on a client can be received, wherein the user human body image comprises an organ to be detected of the target user; the user human body image is input into a preset state prediction model to obtain state parameters corresponding to the organ to be detected, wherein the state prediction model is constructed through a decision tree model and a random forest algorithm; and the state index of the target user is determined according to the state parameters corresponding to the organ to be detected. By applying the technical scheme of the application, a user can photograph a specific organ, and the detection system can determine the user's state index in a targeted manner from the image of the organ to be detected and the preset image detection model, and display it to the user. This solves the problem in the related art that users cannot learn about their own health state at an early stage in a timely manner.

Description

Method, device, electronic equipment and medium for determining user state
Technical Field
The present application relates to data processing technologies, and in particular, to a method, an apparatus, an electronic device, and a medium for determining a user status.
Background
Because of the pressures of work and daily life, modern people generally neglect their health. Over time this leads to health incidents, and in serious cases it can severely affect a person's working and living conditions.
If the problems that affect a user's working and living state are discovered early, their impact on the user's physical health, life, and work can be kept to a minimum. How to design a method by which a user can determine his or her own state in real time has therefore become a problem to be solved.
Disclosure of Invention
The embodiment of the application provides a method, a device, electronic equipment, and a medium for determining a user state. The method is used to solve the problem in the related art that users cannot learn about their own health state at an early stage in a timely manner.
According to an aspect of the embodiments of the present application, a method for determining a user status is provided, including:
receiving a user human body image captured by a target user using a camera device on a client, wherein the user human body image comprises an organ to be detected of the target user;
inputting the human body image of the user into a preset state prediction model to obtain state parameters corresponding to the organ to be detected, wherein the state prediction model is constructed by a decision tree model and a random forest algorithm;
and determining the state index of the target user according to the state parameter corresponding to the organ to be detected.
Alternatively, in another embodiment based on the above method of the present application, the organ to be detected corresponds to at least one of the oral cavity, the five sense organs, the face, and the skin.
Optionally, in another embodiment based on the foregoing method of the present application, before the receiving the human body image of the user captured by the target user using the image capturing device on the client, the method further includes:
acquiring sample human body images and sample pathological images of a plurality of users, wherein the sample human body images and the sample pathological images comprise the same organ to be detected, and the sample human body images and the sample pathological images correspond to different physiological states;
training an initial decision tree model by using the plurality of sample human body images and the sample pathological images until a converged decision tree model is obtained;
and further optimizing and training the converged decision tree model through a random forest algorithm, the sample human body images, and the sample pathological images until the state prediction model is obtained.
Optionally, in another embodiment based on the method of the present application, the inputting the human body image of the user into a preset state prediction model to obtain a state parameter corresponding to the organ to be detected includes:
and performing feature recognition on the organ to be detected of the human body image of the user by using the state prediction model to obtain the state parameter, wherein the state parameter corresponds to at least one of a size feature, a color feature and a contour feature.
Optionally, in another embodiment based on the above method of the present application, the determining the status indicator of the target user according to the status parameter corresponding to the organ to be detected includes:
calling a preset health detection curve for representing the user state;
determining the state index of the target user according to the matching result of the state parameter and the health detection curve;
wherein the health detection curve is constructed by the following formula:
X_t = t * cos(dwA) + n * sin(dwB);
Y_n = t * cos(dwC) + n * sin(dwA);
where X_t is plotted on the horizontal axis of the health detection curve and Y_n on its vertical axis; t represents time; n represents the state index; dwA is the color parameter of the organ image to be detected; dwB is its contour parameter; and dwC is its size parameter.
Optionally, in another embodiment based on the above method of the present application, after the inputting the human body image of the user into a preset state prediction model to obtain a state parameter corresponding to the organ to be detected, the method further includes:
acquiring a preset voice detection recurrent network;
performing feature recognition on voice data of the target user by using the voice detection recurrent network to obtain voice parameters of the target user, wherein the voice parameters correspond to at least one of pitch, volume, and timbre;
and determining the state index of the target user according to the voice parameter and the state parameter corresponding to the organ to be detected.
Optionally, in another embodiment based on the above method of the present application, after the determining the status indicator of the target user according to the status parameter corresponding to the organ to be detected, the method further includes:
acquiring a page parameter which is interested by the target user based on historical data, wherein the page parameter corresponds to at least one of the size of a display page, the color of the display page and the outline of the display page;
and determining the display mode preference of the target user by using the page parameters, and displaying the state index in a display mode corresponding to the display mode preference.
According to another aspect of the embodiments of the present application, there is provided an apparatus for determining a user status, including:
the receiving module is configured to receive a user human body image captured by a target user using a camera device on a client, wherein the user human body image comprises an organ to be detected of the target user;
the generation module is configured to input the human body image of the user into a preset state prediction model to obtain state parameters corresponding to the organ to be detected, wherein the state prediction model is constructed through a decision tree model and a random forest algorithm;
and the determining module is configured to determine the state index of the target user according to the state parameter corresponding to the organ to be detected.
According to another aspect of the embodiments of the present application, there is provided an electronic device including:
a memory for storing executable instructions; and
a processor for communicating with the memory to execute the executable instructions so as to perform the operations of any one of the above-described methods of determining a user state.
According to a further aspect of the embodiments of the present application, there is provided a computer-readable storage medium for storing computer-readable instructions, which when executed, perform the operations of any one of the methods for determining a user status described above.
In the application, a user human body image captured by a target user using a camera device on a client can be received, wherein the user human body image comprises an organ to be detected of the target user; the user human body image is input into a preset state prediction model to obtain state parameters corresponding to the organ to be detected, wherein the state prediction model is constructed through a decision tree model and a random forest algorithm; and the state index of the target user is determined according to the state parameters corresponding to the organ to be detected. By applying the technical scheme of the application, a user can photograph a specific organ, and the detection system can determine the user's state index in a targeted manner from the image containing the organ to be detected and the preset image detection model, and display it to the user. This solves the problem in the related art that users cannot learn about their own health state at an early stage in a timely manner.
The technical solution of the present application is further described in detail by the accompanying drawings and examples.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description, serve to explain the principles of the application.
The present application may be more clearly understood from the following detailed description with reference to the accompanying drawings, in which:
fig. 1 is a schematic diagram of a method for determining a user status according to the present application;
fig. 2 is a user body image including an organ to be detected of a target user proposed by the present application;
fig. 3 is a schematic structural diagram of an electronic device for determining a user status according to the present application;
fig. 4 is a schematic structural diagram of an electronic device for determining a user status according to the present application.
Detailed Description
Various exemplary embodiments of the present application will now be described in detail with reference to the accompanying drawings. It should be noted that: the relative arrangement of the components and steps, the numerical expressions, and numerical values set forth in these embodiments do not limit the scope of the present application unless specifically stated otherwise.
Meanwhile, it should be understood that the sizes of the respective portions shown in the drawings are not drawn in an actual proportional relationship for the convenience of description.
The following description of at least one exemplary embodiment is merely illustrative in nature and is in no way intended to limit the application, its application, or uses.
Techniques, methods, and apparatus known to those of ordinary skill in the relevant art may not be discussed in detail but are intended to be part of the specification where appropriate.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be discussed further in subsequent figures.
In addition, technical solutions between the various embodiments of the present application may be combined with each other, but it must be based on the realization of the technical solutions by a person skilled in the art, and when the technical solutions are contradictory or cannot be realized, such a combination of technical solutions should be considered to be absent and not within the protection scope of the present application.
It should be noted that all the directional indicators in the embodiments of the present application (such as up, down, left, right, front, and rear) are only used to explain the relative positional relationship, motion, and so on between the components in a specific posture (as shown in the drawings); if the specific posture changes, the directional indication changes accordingly.
A method for determining a user status according to an exemplary embodiment of the present application is described below in conjunction with figs. 1-2. It should be noted that the following application scenarios are shown merely for the convenience of understanding the spirit and principles of the present application, and the embodiments of the present application are not limited in this respect; rather, the embodiments of the present application may be applied to any applicable scenario.
The application also provides a method, a device, electronic equipment and a medium for determining the user state.
Fig. 1 schematically shows a flowchart of a method for determining a user status according to an embodiment of the present application. As shown in fig. 1, the method includes:
s101, receiving a user human body image shot by a target user by using a camera device on a client, wherein the user human body image comprises an organ to be detected of the target user.
In the prior art, many of a user's ailments are, for various reasons, left untreated or even forgotten, so that the optimal treatment window is missed, the body suffers continuing harm, and treatment becomes more difficult. If such problems are discovered at an early stage, both the impact on the individual's health and the time and money spent on treating the illness can be kept to a minimum.
Therefore, in the embodiment of the present application, a system for determining a user status is designed, where the system may include:
the output of smart mobile phone end and wireless network's input electric connection, wireless network's output and backend server's input electric connection, backend server's output and external server's input electric connection.
The smartphone terminal includes: a light emitting and collecting system, which has the necessary components of a spectrometer and includes one or more of a light emitting end, a light collecting end, the mobile phone screen, a camera, a data processing component, and the like; a processing unit, which includes one or more data processors, can analyze the data collected by the light emitting and collecting system, and feeds the analysis result back via the mobile phone screen, sound, vibration, or other means; a transceiver unit, which is electrically connected to the processing unit and is used to transmit and receive information; and a processor, namely the mobile phone processor, which controls the light emission frequency and, after receiving the data from the light collecting system, processes the data. The processing result is displayed via the mobile phone screen, sound, vibration, and the like.
It should be noted that the human body image of the user may be an image corresponding to a plurality of organs to be detected, for example, including the oral cavity, five sense organs, the face, the skin, and so on.
In one approach, in the embodiment of the present application, the user human body image may be obtained by periodically reminding the user to take a photograph, such as the eye image shown in fig. 2.
S102, inputting the human body image of the user into a preset state prediction model to obtain state parameters corresponding to the organ to be detected, wherein the state prediction model is constructed through a decision tree model and a random forest algorithm.
Further, the present application can identify the features of the organ to be detected contained in the user human body image through the state prediction model to obtain the state parameters, where a state parameter corresponds to at least one of a size feature, a color feature, and a contour feature. A preset health detection curve for representing the user state can then be called, and the state index of the target user is determined from the result of matching the state parameters against the health detection curve.
In one approach, the health detection curve is constructed by the following formulas:
X_t = t * cos(dwA) + n * sin(dwB);
Y_n = t * cos(dwC) + n * sin(dwA);
where X_t is plotted on the horizontal axis of the health detection curve and Y_n on its vertical axis; t represents time; n represents the state index; dwA is the color parameter of the organ image to be detected; dwB is its contour parameter; and dwC is its size parameter.
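For illustration only, the parametric curve above can be sampled numerically. The sketch below is a minimal example in which the parameter values dwA, dwB, dwC and the candidate state index n are hypothetical, and the subsequent step of matching the measured state parameters against the curve is not shown.

```python
import numpy as np

def health_curve(t, n, dwA, dwB, dwC):
    """Sample points (X_t, Y_n) of the health detection curve above.

    t, n        : arrays for time and the candidate state index
    dwA/dwB/dwC : color / contour / size parameters of the organ image,
                  treated here as angles in radians (hypothetical values).
    """
    x = t * np.cos(dwA) + n * np.sin(dwB)
    y = t * np.cos(dwC) + n * np.sin(dwA)
    return x, y

# Hypothetical parameters, for illustration only.
t = np.linspace(0.0, 10.0, 200)
n = np.full_like(t, 0.7)  # one candidate value of the state index
x, y = health_curve(t, n, dwA=0.4, dwB=1.1, dwC=0.8)
```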
Furthermore, the state prediction model may classify each sample feature in a sample image and group sample features belonging to the same class into the same organ type, so that the sample features obtained after semantic segmentation of a sample image may consist of several different types.
In one approach, before the present application performs feature recognition on the user human body image with the state prediction model, the state prediction model needs to be obtained through training. Specifically, sample human body images and sample pathological images of a certain number of users are acquired, where the sample human body images and sample pathological images contain the same organ to be detected but correspond to different physiological states; an initial decision tree model is trained with the sample human body images and sample pathological images until a converged decision tree model is obtained; and the decision tree model is then further optimized and trained through a random forest algorithm and the sample images until the state prediction model is obtained.
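A minimal sketch of this two-stage training idea follows, assuming the sample human body images and sample pathological images have already been reduced to fixed-length feature vectors (for example size, color, and contour statistics). The scikit-learn classes and the synthetic data are illustrative assumptions; the application does not prescribe a specific library.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

# Hypothetical training data: one feature vector per sample image,
# labelled 0 (healthy physiological state) or 1 (pathological state).
rng = np.random.default_rng(0)
X = rng.random((200, 16))
y = rng.integers(0, 2, size=200)

# Stage 1: train the initial decision tree model to convergence.
tree = DecisionTreeClassifier(max_depth=8).fit(X, y)

# Stage 2: a random forest -- an ensemble of such trees trained on
# bootstrap samples -- acts as the optimized state prediction model.
state_prediction_model = RandomForestClassifier(
    n_estimators=100, max_depth=8, random_state=0).fit(X, y)

# Per-class scores for a new organ image's feature vector.
state_parameters = state_prediction_model.predict_proba(X[:1])
```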
The number of sample images is not specifically limited in the present application, and may be, for example, one or more.
For example, when the number of sample images is 3 and the sample human body images are oral-cavity images, the present application can obtain 3 oral organ images in a healthy physiological state and 3 oral organ images in a pathological physiological state.
In one approach, the present application may use the plurality of sample human body images and sample pathological images to train the initial decision tree model until a converged decision tree model is obtained.
The decision tree model is one of the supervised classification algorithms in machine learning and is a predictive model: it represents a mapping between object attributes and object values. Each internal node of the tree represents an attribute test, each branch represents a possible attribute value, and each leaf node corresponds to the value of the object represented by the path from the root node to that leaf. A decision tree has only a single output; if multiple outputs are desired, independent decision trees can be built to handle the different outputs. Decision tree algorithms include ID3, C4.5, and CART. What they have in common is that they are all greedy algorithms; they differ in their split measures: for example, ID3 uses information gain as its measure, while C4.5 uses the gain ratio.
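For reference, the split measures named above can be written out explicitly. These are the standard textbook definitions, not formulas given in the application:

```latex
% Empirical entropy of a dataset D with class proportions p_k
H(D) = -\sum_{k} p_k \log_2 p_k
% ID3: information gain of splitting D on attribute a into subsets D^v
\mathrm{Gain}(D, a) = H(D) - \sum_{v} \frac{|D^v|}{|D|} \, H(D^v)
% C4.5: gain ratio, which normalises the gain by the split's own entropy
\mathrm{GainRatio}(D, a) = \frac{\mathrm{Gain}(D, a)}
    {-\sum_{v} \frac{|D^v|}{|D|} \log_2 \frac{|D^v|}{|D|}}
```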
Further, in the present application, the initial decision tree model may be trained using a sample training set of the target user until a converged decision tree model is obtained. Once the decision tree model is obtained, it can be further optimized with a random forest algorithm to obtain the final prediction model for determining the user state.
It should be further noted that, in the present application, besides obtaining the state prediction model with a random forest algorithm, the prediction model may also be jointly optimized with other algorithms, which may include, for example, neural network algorithms, support vector machine algorithms, k-means algorithms, logistic regression algorithms, naive Bayes algorithms, and the like.
Specifically, logistic regression is currently one of the algorithms most used in clinical research and has great advantages for binary classification problems. It applies an activation function (the sigmoid function) on top of the traditional linear model so that the predicted value falls within the range of 0 to 1, which makes it possible to examine risk factors or protective factors for a disease. Although logistic regression has a wide range of applications, it is a variant of linear regression and therefore needs to satisfy some of the assumptions of linear regression, which may cause it to face problems such as collinearity.
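In standard notation (not taken from the application), the sigmoid activation and the resulting prediction are:

```latex
\sigma(z) = \frac{1}{1 + e^{-z}}, \qquad
P(y = 1 \mid x) = \sigma(w^{\top} x + b)
```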
In addition, the support vector machine (SVM) is a binary classification model whose basic form is a linear classifier with the maximum margin defined in a feature space. The basic idea of the SVM algorithm is to find a separating hyperplane that correctly divides the training data set and has the largest geometric margin. For linearly separable data there may be many separating hyperplanes, but the one with the largest geometric margin is unique; the goal of the SVM algorithm is to find this hyperplane, w·x + b = 0.
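The maximum-margin idea can be stated as the usual hard-margin optimization problem over training pairs (x_i, y_i) with y_i in {-1, +1}; again this is the standard formulation, not one given in the application:

```latex
\min_{w, b} \; \frac{1}{2} \lVert w \rVert^2
\quad \text{subject to} \quad
y_i \, (w \cdot x_i + b) \ge 1, \qquad i = 1, \dots, N
```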
Furthermore, artificial neural networks (ANNs) are an information processing approach that simulates the neurons of the human brain. The algorithm consists of a large number of interconnected nodes, each node representing a specific output function, and a different weight can be learned for each input signal, similar to human memory. The simplest neural network model comprises three structures: an input layer, a hidden layer, and an output layer. The approach has advantages such as a self-learning capability, associative storage, and efficient search for optimal solutions.
In addition, the naive Bayes algorithm differs from most machine learning algorithms: decision trees, neural networks, and support vector machines search for the relationship between the features x and the output y, whereas the Bayes algorithm directly models the joint distribution of x and y and then makes predictions using Bayes' formula. The naive Bayes algorithm can handle multiple tasks at the same time and is insensitive to missing data.
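In the usual notation, with the naive conditional-independence assumption over the features x_j (standard formulation, not from the application), the prediction rule is:

```latex
P(y \mid x) = \frac{P(y) \, P(x \mid y)}{P(x)}
\;\propto\; P(y) \prod_{j} P(x_j \mid y)
```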
In addition, the k-means clustering algorithm is a basic partitioning algorithm for a known number of clusters. It is a distance-based algorithm: the closer two samples are, the more similar they are considered. The algorithm uses iterative updating, with each iteration moving in the direction that reduces the objective function; the final clustering result minimizes the objective function, so a good classification effect can be achieved.
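The objective function being minimized is, for k clusters C_i with centroids \mu_i (standard formulation, not from the application):

```latex
J = \sum_{i=1}^{k} \sum_{x \in C_i} \lVert x - \mu_i \rVert^2
```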
S103, determining the state index of the target user according to the state parameter corresponding to the organ to be detected.
In the application, a user human body image captured by a target user using a camera device on a client can be received, wherein the user human body image comprises an organ to be detected of the target user; the user human body image is input into a preset state prediction model to obtain state parameters corresponding to the organ to be detected, wherein the state prediction model is constructed through a decision tree model and a random forest algorithm; and the state index of the target user is determined according to the state parameters corresponding to the organ to be detected. By applying the technical scheme of the application, a user can photograph a specific organ, and the detection system can determine the user's state index in a targeted manner from the image of the organ to be detected and the preset image detection model, and display it to the user.
Alternatively, in another embodiment based on the above method of the present application, the organ to be detected corresponds to at least one of the oral cavity, the five sense organs, the face, and the skin.
Optionally, in another embodiment based on the foregoing method of the present application, before the receiving the human body image of the user captured by the target user using the image capturing device on the client, the method further includes:
acquiring sample human body images and sample pathological images of a plurality of users, wherein the sample human body images and the sample pathological images comprise the same organ to be detected, and the sample human body images and the sample pathological images correspond to different physiological states;
training an initial decision tree model by using the plurality of sample human body images and the sample pathological images until a converged decision tree model is obtained;
and further optimizing and training the converged decision tree model through a random forest algorithm, the sample human body images, and the sample pathological images until the state prediction model is obtained.
Optionally, in another embodiment based on the method of the present application, the inputting the human body image of the user into a preset state prediction model to obtain a state parameter corresponding to the organ to be detected includes:
and performing feature recognition on the organ to be detected of the human body image of the user by using the state prediction model to obtain the state parameter, wherein the state parameter corresponds to at least one of a size feature, a color feature and a contour feature.
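As a concrete illustration of this feature-recognition step, the sketch below extracts simple size, color, and contour statistics from an organ image with OpenCV. The Otsu thresholding and the particular statistics chosen are assumptions for illustration, not details fixed by the application.

```python
import cv2
import numpy as np

def organ_state_features(image_path: str) -> np.ndarray:
    """Extract simple size / color / contour features from an organ image."""
    img = cv2.imread(image_path)  # BGR image
    if img is None:
        raise FileNotFoundError(image_path)

    # Color feature: mean hue / saturation / value over the image.
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    color_feat = hsv.reshape(-1, 3).mean(axis=0)

    # Contour and size features: binarize, then take the largest contour.
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        raise ValueError("no contour found in image")
    largest = max(contours, key=cv2.contourArea)

    size_feat = cv2.contourArea(largest)      # size feature
    perimeter = cv2.arcLength(largest, True)  # contour feature

    return np.concatenate([color_feat, [size_feat, perimeter]])
```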
Optionally, in another embodiment based on the above method of the present application, the determining the status indicator of the target user according to the status parameter corresponding to the organ to be detected includes:
calling a preset health detection curve for representing the user state;
determining the state index of the target user according to the matching result of the state parameter and the health detection curve;
wherein the health detection curve is constructed by the following formula:
X_t = t * cos(dwA) + n * sin(dwB);
Y_n = t * cos(dwC) + n * sin(dwA);
where X_t is plotted on the horizontal axis of the health detection curve and Y_n on its vertical axis; t represents time; n represents the state index; dwA is the color parameter of the organ image to be detected; dwB is its contour parameter; and dwC is its size parameter.
Optionally, in another embodiment based on the above method of the present application, after the inputting the human body image of the user into a preset state prediction model to obtain a state parameter corresponding to the organ to be detected, the method further includes:
acquiring a preset voice detection recurrent network;
performing feature recognition on voice data of the target user by using the voice detection recurrent network to obtain voice parameters of the target user, wherein the voice parameters correspond to at least one of pitch, volume, and timbre;
and determining the state index of the target user according to the voice parameter and the state parameter corresponding to the organ to be detected.
Further, the present application may also obtain the user's speech parameters to determine the user's emotional state. Specifically, the characteristic parameters of the speech can be recognized with a speech recognition model to determine the pitch, volume, and timbre of the user while speaking.
For example, when the user is in a poor physical or emotional state, the user's voice may be correspondingly low in volume. It can therefore be determined from the target user's voice data parameters whether the voice is lower than usual, and hence the corresponding emotional state of the target user.
Further, when the user is in poor health, for example suffering from an illness such as a cold, the voice may become sharp because of nasal obstruction. The present application can therefore determine from the target user's voice data parameters whether the voice is sharper than usual, and thereby determine the corresponding user state.
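A minimal sketch of extracting such voice parameters is given below, assuming pitch, volume, and timbre are approximated by the fundamental frequency, the RMS energy, and the spectral centroid respectively. The preset voice detection recurrent network itself is not reproduced here; only an illustrative feature extraction with librosa is shown.

```python
import librosa
import numpy as np

def voice_parameters(wav_path: str) -> dict:
    """Approximate pitch / volume / timbre parameters from a voice clip."""
    y, sr = librosa.load(wav_path, sr=None)

    # Pitch: fundamental frequency estimated with the YIN algorithm.
    f0 = librosa.yin(y, fmin=60, fmax=500, sr=sr)

    # Volume: root-mean-square energy per frame.
    rms = librosa.feature.rms(y=y)[0]

    # Timbre proxy: spectral centroid (the "brightness" of the voice).
    centroid = librosa.feature.spectral_centroid(y=y, sr=sr)[0]

    return {
        "pitch_hz": float(np.nanmedian(f0)),
        "volume_rms": float(rms.mean()),
        "timbre_centroid_hz": float(centroid.mean()),
    }
```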
Optionally, in another embodiment based on the above method of the present application, after the determining the status indicator of the target user according to the status parameter corresponding to the organ to be detected, the method further includes:
acquiring a page parameter which is interested by the target user based on historical data, wherein the page parameter corresponds to at least one of the size of a display page, the color of the display page and the outline of the display page;
and determining the display mode preference of the target user by using the page parameters, and displaying the state index in a display mode corresponding to the display mode preference.
Furthermore, after the state index of the target user is determined, the size, color, and outline of the display pages that interest the target user can be determined, and the state index can then be displayed to the user in the display mode corresponding to that display-mode preference.
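One simple way to realize this preference-based display, sketched here with hypothetical page-parameter fields, is to take the most frequent value of each parameter in the user's history:

```python
from collections import Counter

def display_mode_preference(history: list) -> dict:
    """Pick the most frequent page size / color / outline from history.

    `history` is a hypothetical list of page-parameter records, e.g.
    {"size": "large", "color": "dark", "outline": "card"}.
    """
    prefs = {}
    for field in ("size", "color", "outline"):
        values = [record[field] for record in history if field in record]
        prefs[field] = Counter(values).most_common(1)[0][0] if values else None
    return prefs

# Example: the state index would then be rendered with these settings.
prefs = display_mode_preference([
    {"size": "large", "color": "dark", "outline": "card"},
    {"size": "large", "color": "light", "outline": "card"},
])
```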
By applying the technical scheme of the application, a user can photograph a specific organ, and the detection system can determine the user's state index in a targeted manner from the image of the organ to be detected and the preset image detection model, and display it to the user. Therefore, the problem in the related art that users cannot learn about their own health state at an early stage in a timely manner is solved.
Optionally, in another embodiment of the present application, as shown in fig. 3, the present application further provides a device for determining a user status, which includes the following:
the receiving module 201 is configured to receive a user human body image captured by a target user using a camera device on a client, wherein the user human body image comprises an organ to be detected of the target user;
a generating module 202, configured to input the human body image of the user into a preset state prediction model, to obtain a state parameter corresponding to the organ to be detected, where the state prediction model is constructed by a decision tree model and a random forest algorithm;
a determining module 203 configured to determine the status indicator of the target user according to the status parameter corresponding to the organ to be detected.
In the application, a user human body image captured by a target user using a camera device on a client can be received, wherein the user human body image comprises an organ to be detected of the target user; the user human body image is input into a preset state prediction model to obtain state parameters corresponding to the organ to be detected, wherein the state prediction model is constructed through a decision tree model and a random forest algorithm; and the state index of the target user is determined according to the state parameters corresponding to the organ to be detected. By applying the technical scheme of the application, a user can photograph a specific organ, and the detection system can determine the user's state index in a targeted manner from the image of the organ to be detected and the preset image detection model, and display it to the user. This solves the problem in the related art that users cannot learn about their own health state at an early stage in a timely manner.
In another embodiment of the present application, the detecting module 201 is configured to perform steps including:
constructing an image classification training data set, wherein the training data set comprises at least one piece of sample image data and the classification label corresponding to each piece of sample image data; and
training with the training data set to obtain a teacher model.
Alternatively, in another embodiment of the present application, the organ to be detected corresponds to at least one of the oral cavity, the five sense organs, the face and the skin.
In another embodiment of the present application, the detecting module 201 is configured to perform the steps including:
acquiring sample human body images and sample pathological images of a plurality of users, wherein the sample human body images and the sample pathological images comprise the same organ to be detected, and the sample human body images and the sample pathological images correspond to different physiological states;
training an initial decision tree model by using the plurality of sample human body images and the sample pathological images until a converged decision tree model is obtained;
and further optimizing and training the converged decision tree model through a random forest algorithm, the sample human body images, and the sample pathological images until the state prediction model is obtained.
In another embodiment of the present application, the detecting module 201 is configured to perform the steps including:
and performing feature recognition on the organ to be detected of the human body image of the user by using the state prediction model to obtain the state parameter, wherein the state parameter corresponds to at least one of a size feature, a color feature and a contour feature.
In another embodiment of the present application, the detecting module 201 is configured to perform the steps including:
calling a preset health detection curve for representing the user state;
determining the state index of the target user according to the matching result of the state parameter and the health detection curve;
wherein the health detection curve is constructed by the following formula:
Xt=t*cos(dwA)+n*sin(dwB);
Yn=t*cos(dwC)+n*sin(dwA);
wherein, X is a horizontal axis of the health detection curve, Y is a vertical axis of the health detection curve, t is used for representing time, n is used for representing the state index, dwA is a color parameter of the organ image to be detected, dwB is a contour parameter of the organ image to be detected, and dwC is a size parameter of the organ image to be detected.
In another embodiment of the present application, the detecting module 201 is configured to perform the steps including:
acquiring a preset voice detection recurrent network;
performing feature recognition on voice data of the target user by using the voice detection recurrent network to obtain voice parameters of the target user, wherein the voice parameters correspond to at least one of pitch, volume, and timbre;
and determining the state index of the target user according to the voice parameter and the state parameter corresponding to the organ to be detected.
In another embodiment of the present application, the detecting module 201 is configured to perform the steps including:
acquiring a page parameter which is interested by the target user based on historical data, wherein the page parameter corresponds to at least one of the size of a display page, the color of the display page and the outline of the display page;
and determining the display mode preference of the target user by using the page parameters, and displaying the state index in a display mode corresponding to the display mode preference.
Fig. 4 is a block diagram illustrating a logical structure of an electronic device in accordance with an exemplary embodiment. For example, the electronic device 300 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, and the like.
In an exemplary embodiment, there is also provided a non-transitory computer readable storage medium, such as a memory, including instructions executable by a processor of an electronic device to perform the method of determining a user state described above, the method comprising: receiving a user human body image captured by a target user using a camera device on a client, wherein the user human body image comprises an organ to be detected of the target user; inputting the user human body image into a preset state prediction model to obtain state parameters corresponding to the organ to be detected, wherein the state prediction model is constructed through a decision tree model and a random forest algorithm; and determining the state index of the target user according to the state parameters corresponding to the organ to be detected. Optionally, the instructions may also be executable by a processor of the electronic device to perform other steps involved in the exemplary embodiments described above. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
In an exemplary embodiment, there is also provided an application/computer program product including one or more instructions executable by a processor of an electronic device to perform the above method of determining a user status, the method comprising: receiving a user human body image captured by a target user using a camera device on a client, wherein the user human body image comprises an organ to be detected of the target user; inputting the user human body image into a preset state prediction model to obtain state parameters corresponding to the organ to be detected, wherein the state prediction model is constructed through a decision tree model and a random forest algorithm; and determining the state index of the target user according to the state parameters corresponding to the organ to be detected. Optionally, the instructions may also be executable by a processor of an electronic device to perform other steps involved in the exemplary embodiments described above.
Those skilled in the art will appreciate that fig. 4 is merely an example of the electronic device 300 and does not constitute a limitation of the electronic device 300, which may include more or fewer components than those shown, or combine certain components, or use different components; for example, the electronic device 300 may also include input/output devices, network access devices, buses, and the like.
The Processor 302 may be a Central Processing Unit (CPU), other general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic, discrete hardware components, etc. The general purpose processor may be a microprocessor or the processor 302 may be any conventional processor or the like, and the processor 302 is the control center of the electronic device 300 and connects the various parts of the entire electronic device 300 using various interfaces and lines.
The memory 301 may be used to store computer readable instructions, and the processor 302 may implement various functions of the electronic device 300 by running or executing the computer readable instructions or modules stored in the memory 301 and by invoking the data stored in the memory 301. The memory 301 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data created according to use of the electronic device 300, and the like. In addition, the Memory 301 may include a hard disk, a Memory, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash Memory Card (Flash Card), at least one disk storage device, a Flash Memory device, a Read-Only Memory (ROM), a Random Access Memory (RAM), or other non-volatile/volatile storage devices.
The modules integrated by the electronic device 300 may be stored in a computer-readable storage medium if they are implemented in the form of software functional modules and sold or used as separate products. Based on such understanding, all or part of the flow of the method according to the embodiments of the present invention may also be implemented by hardware related to computer readable instructions, which may be stored in a computer readable storage medium, and when the computer readable instructions are executed by a processor, the steps of the method embodiments may be implemented.
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.
It will be understood that the present application is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the application is limited only by the appended claims.

Claims (10)

1. A method of determining a user state, comprising:
receiving a user human body image captured by a target user using a camera device on a client, wherein the user human body image comprises an organ to be detected of the target user;
inputting the human body image of the user into a preset state prediction model to obtain state parameters corresponding to the organ to be detected, wherein the state prediction model is constructed by a decision tree model and a random forest algorithm;
and determining the state index of the target user according to the state parameter corresponding to the organ to be detected.
2. The method of claim 1, wherein the organ to be detected corresponds to at least one of the oral cavity, five sense organs, the face, and the skin.
3. The method of claim 1, wherein prior to receiving the user's body image captured by the target user using the camera on the client, further comprising:
acquiring sample human body images and sample pathological images of a plurality of users, wherein the sample human body images and the sample pathological images comprise the same organ to be detected, and the sample human body images and the sample pathological images correspond to different physiological states;
training an initial decision tree model by using the plurality of sample human body images and the sample pathological images until a converged decision tree model is obtained;
and further optimizing and training the converged decision tree model through a random forest algorithm, the sample human body images, and the sample pathological images until the state prediction model is obtained.
4. The method of claim 1, wherein the inputting the human body image of the user into a preset state prediction model to obtain a state parameter corresponding to the organ to be detected comprises:
and performing feature recognition on the organ to be detected of the human body image of the user by using the state prediction model to obtain the state parameter, wherein the state parameter corresponds to at least one of a size feature, a color feature and a contour feature.
5. The method of claim 1, wherein determining the status indicator of the target user according to the status parameter corresponding to the organ to be detected comprises:
calling a preset health detection curve for representing the user state;
determining the state index of the target user according to the matching result of the state parameter and the health detection curve;
wherein the health detection curve is constructed by the following formula:
X_t = t * cos(dwA) + n * sin(dwB);
Y_n = t * cos(dwC) + n * sin(dwA);
where X_t is plotted on the horizontal axis of the health detection curve and Y_n on its vertical axis; t represents time; n represents the state index; dwA is the color parameter of the organ image to be detected; dwB is its contour parameter; and dwC is its size parameter.
6. The method of claim 1, wherein after the inputting the human body image of the user into a preset state prediction model to obtain the state parameters corresponding to the organ to be detected, the method further comprises:
acquiring a preset voice detection recurrent network;
performing feature recognition on voice data of the target user by using the voice detection recurrent network to obtain voice parameters of the target user, wherein the voice parameters correspond to at least one of pitch, volume, and timbre;
and determining the state index of the target user according to the voice parameter and the state parameter corresponding to the organ to be detected.
7. The method of claim 1, further comprising, after determining the status indicator of the target user according to the status parameter corresponding to the organ to be detected:
acquiring a page parameter which is interested by the target user based on historical data, wherein the page parameter corresponds to at least one of the size of a display page, the color of the display page and the outline of the display page;
and determining the display mode preference of the target user by using the page parameters, and displaying the state index in a display mode corresponding to the display mode preference.
8. An apparatus for determining a user state, comprising:
the receiving module is configured to receive a user human body image captured by a target user using a camera device on a client, wherein the user human body image comprises an organ to be detected of the target user;
the generation module is configured to input the human body image of the user into a preset state prediction model to obtain state parameters corresponding to the organ to be detected, wherein the state prediction model is constructed through a decision tree model and a random forest algorithm;
the determining module is configured to determine the state index of the target user according to the state parameter corresponding to the organ to be detected.
9. An electronic device, comprising:
a memory for storing executable instructions; and
a processor for executing the executable instructions with the memory to perform the operations of the method of determining a user state of any of claims 1-7.
10. A computer-readable storage medium storing computer-readable instructions that, when executed, perform the operations of the method of determining a user state of any of claims 1-7.
CN202210414369.1A 2022-04-20 2022-04-20 Method, device, electronic equipment and medium for determining user state Pending CN114842972A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210414369.1A CN114842972A (en) 2022-04-20 2022-04-20 Method, device, electronic equipment and medium for determining user state

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210414369.1A CN114842972A (en) 2022-04-20 2022-04-20 Method, device, electronic equipment and medium for determining user state

Publications (1)

Publication Number Publication Date
CN114842972A true CN114842972A (en) 2022-08-02

Family

ID=82566719

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210414369.1A Pending CN114842972A (en) 2022-04-20 2022-04-20 Method, device, electronic equipment and medium for determining user state

Country Status (1)

Country Link
CN (1) CN114842972A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115701878A (en) * 2022-10-14 2023-02-14 首都医科大学附属北京友谊医院 Eye perfusion state prediction method and device and electronic equipment


Similar Documents

Publication Publication Date Title
Yadav et al. Real-time Yoga recognition using deep learning
Golany et al. SimGANs: Simulator-based generative adversarial networks for ECG synthesis to improve deep ECG classification
CN105393252B (en) Physiological data collection and analysis
Xie et al. Scut-fbp: A benchmark dataset for facial beauty perception
CN109875579A (en) Emotional health management system and emotional health management method
CN109475294A (en) For treat phrenoblabia movement and wearable video capture and feedback platform
Li et al. On improving the accuracy with auto-encoder on conjunctivitis
Nayak et al. Firefly algorithm in biomedical and health care: advances, issues and challenges
Bu Human motion gesture recognition algorithm in video based on convolutional neural features of training images
Li et al. Local deep field for electrocardiogram beat classification
WO2021031817A1 (en) Emotion recognition method and device, computer device, and storage medium
Oyedotun et al. Prototype-incorporated emotional neural network
Ding et al. Multiple lesions detection of fundus images based on convolution neural network algorithm with improved SFLA
TWI829944B (en) Avatar facial expression generating system and method of avatar facial expression generation
Chen et al. Patient emotion recognition in human computer interaction system based on machine learning method and interactive design theory
Dadiz et al. Detecting depression in videos using uniformed local binary pattern on facial features
CN115410254A (en) Multi-feature expression recognition method based on deep learning
CN116091432A (en) Quality control method and device for medical endoscopy and computer equipment
CN112052874A (en) Physiological data classification method and system based on generation countermeasure network
Bandyopadhyay et al. Machine learning and deep learning integration for skin diseases prediction
CN114842972A (en) Method, device, electronic equipment and medium for determining user state
CN114191665A (en) Method and device for classifying man-machine asynchronous phenomena in mechanical ventilation process
Dutt et al. Support Vector in Healthcare Using SVM/PSO in Various Domains: A Review
CN116484916A (en) Chicken health state detection method and method for building detection model thereof
Radhika et al. Stress detection using CNN fusion

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination