CN114219772A - Method, device, terminal equipment and storage medium for predicting health parameters - Google Patents


Info

Publication number
CN114219772A
Authority
CN
China
Prior art keywords
health parameter
target
skin
target object
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111438525.XA
Other languages
Chinese (zh)
Inventor
徐�明
曾光
曹玥
宋咏君
刘奇玮
Current Assignee
Shenzhen Kesi Chuangdong Technology Co ltd
Original Assignee
Shenzhen Kesi Chuangdong Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Kesi Chuangdong Technology Co ltd filed Critical Shenzhen Kesi Chuangdong Technology Co ltd
Priority to CN202111438525.XA
Publication of CN114219772A
Legal status: Pending

Classifications

    • G PHYSICS › G06 COMPUTING; CALCULATING OR COUNTING › G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis › G06T7/0002 Inspection of images, e.g. flaw detection › G06T7/0012 Biomedical image inspection
    • G06T2207/00 Indexing scheme for image analysis or image enhancement › G06T2207/20 Special algorithmic details › G06T2207/20081 Training; Learning
    • G06T2207/30 Subject of image; Context of image processing › G06T2207/30004 Biomedical image processing › G06T2207/30088 Skin; Dermal
    • G06T2207/30 Subject of image; Context of image processing › G06T2207/30196 Human being; Person

Landscapes

  • Engineering & Computer Science (AREA)
  • Quality & Reliability (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
  • Measuring And Recording Apparatus For Diagnosis (AREA)

Abstract

The application provides a method, an apparatus, a terminal device, and a storage medium for predicting health parameters, and relates to the field of computer technology. The method comprises the following steps: acquiring a target skin image of a target object; obtaining a first health parameter predicted value of the target object from the target skin image; and inputting the first health parameter predicted value into a trained first health parameter prediction model for processing, the model outputting a second health parameter predicted value of the target object. The first health parameter prediction model is obtained by training with first sample skin images and corresponding first sample health parameter values as a training set. The scheme provided by the application improves convenience in obtaining information about a person's health condition while avoiding exposure of private information.

Description

Method, device, terminal equipment and storage medium for predicting health parameters
Technical Field
The present application belongs to the field of computer technologies, and in particular, to a method, an apparatus, a terminal device, and a storage medium for predicting health parameters.
Background
With the continuous progress of modern medical technology and the rapid development of mobile healthcare, detecting human health conditions with portable devices has become a research hotspot. For example, a user wears an additional Internet of Things device (e.g., a smart band or a smart watch) that detects the user's health data.
However, current health data detection methods depend heavily on interaction between the terminal and the Internet of Things device and require the user to wear a wearable electronic device, which makes it inconvenient to obtain the user's health data and can easily expose the user's privacy.
Disclosure of Invention
The embodiments of the present application provide a method, an apparatus, a terminal device, and a storage medium for predicting health parameters, so as to improve the convenience of obtaining information about a person's health condition while avoiding exposure of private information.
In a first aspect, an embodiment of the present application provides a method for predicting a health parameter, including:
acquiring a target skin image of a target object;
obtaining a first health parameter predicted value of the target object according to the target skin image;
inputting the first health parameter predicted value into a trained first health parameter prediction model for processing, and outputting a second health parameter predicted value of the target object through the first health parameter prediction model; wherein the first health parameter prediction model is obtained by training with first sample skin images and corresponding first sample health parameter values as a training set.
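The three claimed steps can be sketched as a minimal pipeline. All function and class names below are illustrative stand-ins (the patent does not specify an implementation), and the linear "model" merely marks where the trained first health parameter prediction model would sit:

```python
# Hypothetical sketch of the claimed three-step pipeline.
# Names and the linear model are illustrative, not from the patent.

def acquire_target_skin_image(frames):
    """Step 1: pick a target skin image from captured frames (placeholder)."""
    return frames[0]

def predict_first_health_parameter(skin_image):
    """Step 2: derive a first predicted value; here, the mean green
    intensity stands in for a PPG-derived measurement."""
    greens = [px[1] for row in skin_image for px in row]
    return sum(greens) / len(greens)

class FirstHealthParameterModel:
    """Step 3: trained model mapping the first value to a second value.
    A linear stand-in for the trained deep model."""
    def __init__(self, w=0.5, b=40.0):
        self.w, self.b = w, b

    def predict(self, first_value):
        return self.w * first_value + self.b

def predict_health_parameters(frames, model):
    image = acquire_target_skin_image(frames)
    first = predict_first_health_parameter(image)
    second = model.predict(first)
    return first, second
```

The sketch only shows the data flow between the three steps; real inputs would be camera frames and the model would be trained as described below.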
According to the method for predicting health parameters, a target skin image of the target object is acquired, so that the skin condition of the target object can be observed from the image while protecting the target object's privacy. The skin condition serves as a proxy for the target object's health condition: the target skin image is processed to obtain a first health parameter predicted value, from which a first type of health condition of the target object can be conveniently determined. The first health parameter predicted value is then input into the trained first health parameter prediction model, which, building on the first value, outputs a second health parameter predicted value of the target object, from which a second type of health condition of the target object can be conveniently determined.
In a second aspect, an embodiment of the present application provides an apparatus for predicting a health parameter, including:
the acquisition module is used for acquiring a target skin image of a target object;
the first prediction module is used for obtaining a first health parameter prediction value of the target object according to the target skin image;
the second prediction module is used for inputting the first health parameter prediction value into a trained first health parameter prediction model for processing, and outputting a second health parameter prediction value of the target object through the first health parameter prediction model; wherein the first health parameter prediction model is obtained by training a first sample skin image and a corresponding first sample health parameter value as a training set.
In a third aspect, an embodiment of the present application provides a terminal device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the method for predicting health parameters when executing the computer program.
In a fourth aspect, the present application provides a computer-readable storage medium, which stores a computer program, and when the computer program is executed by a processor, the computer program implements the method for predicting health parameters.
In a fifth aspect, the present application provides a computer program product, which when run on a terminal device, causes the terminal device to execute the method for predicting health parameters according to the first aspect.
It is understood that the beneficial effects of the second aspect to the fifth aspect can be referred to the related description of the first aspect, and are not described herein again.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application, and other drawings can be obtained by those skilled in the art from these drawings without inventive effort.
Fig. 1 is a schematic flowchart of a method for predicting a health parameter according to an embodiment of the present disclosure.
Fig. 2 is a flowchart illustrating a specific implementation of step S11 of the method for predicting health parameters according to an embodiment of the present application.
Fig. 3 is a flowchart illustrating a specific implementation of step S21 of the method for predicting health parameters according to an embodiment of the present application.
Fig. 4 is a schematic structural diagram of an apparatus for predicting a health parameter according to an embodiment of the present application.
Fig. 5 is a schematic structural diagram of a terminal device according to an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details.
As used in this specification and the appended claims, the term "if" may be interpreted in context to mean "when", "upon" or "in response to a determination" or "in response to a detection". Similarly, the phrase "if it is determined" or "if a [ described condition or event ] is detected" may be interpreted contextually to mean "upon determining" or "in response to determining" or "upon detecting [ described condition or event ]" or "in response to detecting [ described condition or event ]".
Furthermore, in the description of the present application and the appended claims, the terms "first," "second," "third," and the like are used for distinguishing between descriptions and not necessarily for describing or implying relative importance.
Reference throughout this specification to "one embodiment" or "some embodiments," or the like, means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," or the like, in various places throughout this specification are not necessarily all referring to the same embodiment, but rather "one or more but not all embodiments" unless specifically stated otherwise. The terms "comprising," "including," "having," and variations thereof mean "including, but not limited to," unless expressly specified otherwise.
In order to explain the technical solution described in the present application, the following description will be given by way of specific examples.
Referring to fig. 1, fig. 1 is a flowchart illustrating an implementation of a method for predicting health parameters according to an embodiment of the present disclosure. In this embodiment, the method for predicting health parameters may be applied to terminal devices such as a tablet computer, a television, a vehicle-mounted device, a desktop computer, a notebook computer, a palmtop computer, a cloud server, an ultra-mobile personal computer (UMPC), and a netbook. The embodiment of the present application does not limit the specific type of the terminal device.
Currently, in order to learn the health condition of a target object, the user generally wears an additional Internet of Things device (such as a smart band or a smart watch) that detects the user's health data. However, this detection method relies on interaction between a smartphone terminal and the Internet of Things device and requires the user to wear the wearable electronic device, which adds equipment cost. Moreover, if the target object is an elderly person or is not wearing the Internet of Things device, this health data detection method cannot be carried out.
As shown in fig. 1, a method for predicting health parameters provided by an embodiment of the present application includes the following steps:
s11: a target skin image of a target object is acquired.
As an example of the present application, the target object is an object whose health condition is to be learned, for example, an elderly person A who lives in a certain residential community.
The target skin image of the target object is a skin image of some part of the body, for example, an image of a person's arm or forehead.
It is to be understood that the target skin image may be a partial image extracted from a captured image containing the target object. For example, an image containing body parts such as the trunk and limbs of person A is captured, and a skin image of person A's forehead is extracted from it.
In one embodiment, because the target object may have privacy concerns, the designated part is a body part that cannot uniquely identify the target object. For example, because the identity of the target object can be determined from a full-face image, which is relatively sensitive, the designated part is not the face or a fingerprint but an exposed body part, such as an arm, the forehead, or the chin, that cannot be uniquely traced back to the target object, thereby avoiding leakage of the target object's identity information.
The target skin image is a skin image that can be used to evaluate the health condition of the target object, for example, skin images of the arms and legs of elderly person A.
In this embodiment, the health condition of a target object is reflected in corresponding changes in the skin. For example, as blood flows through the blood vessels, a person's skin color changes slightly with the flow. A skin image of the target object can therefore record these blood-related changes and, in turn, reveal the target object's health condition. Accordingly, to avoid exposing the target object's privacy while still conveniently learning its health condition, the present application acquires a target skin video of the target object and processes the target skin images contained in the video to obtain the predicted health parameter values of the target object.
The target skin image of the target object may be acquired in scenes including, but not limited to, the following.
Scene 1: when the signal intensity of the Bluetooth signal of the contact device corresponding to the target object is detected to reach a preset value and the contact device is connected with the terminal device, the target skin image of the target object is automatically acquired.
For example, elderly person A lives in a house numbered 10086 and is detected outdoors buying vegetables at 8 a.m. When the signal strength of the Bluetooth signal of A's mobile phone reaches the preset value and the phone automatically connects to the terminal device, the terminal device automatically acquires the target skin image of the target object through the image acquisition device.
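Scene 1's trigger condition can be sketched as a simple predicate. The RSSI threshold and function names are assumptions for illustration, not values from the patent:

```python
# Illustrative trigger for Scene 1: capture only when the paired device's
# Bluetooth RSSI reaches a preset level AND the device is connected.
# The -60 dBm threshold is an assumed preset value.

RSSI_THRESHOLD_DBM = -60  # RSSI is less negative when the device is closer

def should_acquire_skin_image(rssi_dbm, is_connected,
                              threshold=RSSI_THRESHOLD_DBM):
    """Return True when both trigger conditions of Scene 1 hold."""
    return is_connected and rssi_dbm >= threshold
```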
Scene 2: and detecting the moving track of the target object, and when the distance between the detected target object and the detection point is less than or equal to the preset distance, sending a wake-up instruction to the image acquisition equipment by the terminal equipment so as to acquire a target skin image of the target object through the image acquisition equipment and feeding the target skin image back to the terminal equipment.
For example, elderly person A lives in a house numbered 10086 and is detected outdoors buying vegetables at 8 a.m. When the distance between A and the house gradually decreases to less than or equal to the preset distance, the terminal device sends a wake-up instruction to the image acquisition device in the house, so that the image acquisition device acquires the target skin image of the designated part of A and feeds it back to the terminal device.
Scene 3: when the target object is detected to be in the target scene, a target skin image of the target object is acquired.
For example, when the target object is detected sitting in a vehicle in a driving state, a target skin image of a designated part of the target object is acquired, physiological indicators of the target object are measured from the image, and the activity of the sympathetic and parasympathetic nerves is assessed. This enables proactive detection, rather than only recognizing fatigue after the target object has already been yawning and blinking continuously.
Or, for example, when the target object is detected in a fitness scene, a target skin image of a designated part of the target object is acquired so that physiological indicators can be continuously monitored in real time; when an indicator is abnormal, a control signal is sent to the fitness equipment and information is pushed to the exercising target object.
It is understood that the target scene includes, but is not limited to, one or more of a living room scene, a fitness scene, a sleep scene, a driving scene, a scene using a mobile device.
Scene 4: when it is detected that a target object is present at the target position, a target skin image of the target object is acquired.
For example, to learn the health condition of the target object and push matching information in a targeted manner, when elderly person A is detected sitting at a target position on a bus, a target skin image of A's forehead is acquired. The health condition of A can then be determined from the image, and health advice for that condition can be pushed to the screen corresponding to A's seat.
It can be understood that, to detect whether the target object is present at the target position, an image corresponding to the target position may be acquired and object detection performed on it, with presence determined from the detection result. Alternatively, when the target object is seated at the target position, presence may be determined from a signal fed back by a sensor at the target position, for example a pressure sensor. Or, when the target object is seated at the target position, presence may be determined by detecting whether an instruction input by the target object through an external device has been received.
It should be understood that the manner of detecting whether the target object is present at the target location includes, but is not limited to, the manner described above.
It is to be understood that one or more target skin images of the target object may be acquired. For example, a video of a designated area is captured over a predetermined period by an image acquisition device; since a video is a sequence of still images, it contains multiple target skin images that meet the requirements.
In the present application, a skin image of the target object is captured by an image acquisition device, and the terminal device then receives the skin image from the image acquisition device. The image acquisition device may send a single skin image or a video composed of a series of consecutive skin images.
The image acquisition device includes one or more of a camera, a video camera, a scanner, a mobile phone, a tablet computer, and other devices with a photographing function, as well as a video capture card.
In one embodiment, skin damage diagnosis information of a target object is acquired, and a target skin image of the target object is acquired according to the skin damage diagnosis information.
As an example of the present application, the skin damage diagnosis information describes skin areas of the target object that were injured in an accident and have since healed.
In this embodiment, an accident may leave the target object's skin damaged, so the damaged skin cannot properly reflect the changes of the blood vessels beneath it, and the health condition derived from it would be less reliable. Therefore, the skin damage diagnosis information of the target object is first acquired to identify the damaged parts, and the target skin image is then acquired from body parts other than the damaged parts, so that damaged skin does not affect the accuracy of the derived health condition.
In one embodiment, obstruction distribution information of the target object is acquired, and the target skin image of the target object is acquired according to the obstruction distribution information.
In this embodiment, the obstruction distribution information describes the positions of obstructions covering the skin of the target object. For example, the distribution information of a short-sleeved top marks the area where the target object's skin is covered, so skin outside that area can be captured: if the target object's upper body wears short sleeves, a skin image of the forearm can be acquired.
It will be appreciated that the types of obstruction may include, but are not limited to, one or more of clothing, masks, glasses, and hair.
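One possible way to use the obstruction distribution information is to drop any candidate body part whose region overlaps a detected obstruction. The box format (x1, y1, x2, y2) and the part names below are illustrative assumptions:

```python
# Sketch: select visible skin parts from obstruction distribution info.
# Parts whose bounding box overlaps any obstruction box are excluded.

def boxes_overlap(a, b):
    """Axis-aligned overlap test for (x1, y1, x2, y2) boxes."""
    return not (a[2] <= b[0] or b[2] <= a[0] or a[3] <= b[1] or b[3] <= a[1])

def visible_skin_parts(part_boxes, obstruction_boxes):
    """Return the body parts not covered by any obstruction."""
    return {name: box for name, box in part_boxes.items()
            if not any(boxes_overlap(box, occ) for occ in obstruction_boxes)}
```

In the short-sleeve example, the torso box would overlap the clothing box and be dropped, leaving the forearm as a capture candidate.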
In an embodiment, after the initial image of the target object is acquired, if it is detected that no set body part exists in the initial image, human body posture detection is performed on the initial image to obtain a target skin image.
It can be understood that when the human posture is detected on the initial image, the block detection is performed along the human skeleton.
For example, a 4-second video stream captured by camera 1 contains 10 initial images of the target object. If no complete skin area of the set body part is detected in the initial images, human body posture detection is performed to locate body parts of the target object such as the head, torso, left upper arm, and left lower arm, and each part is then checked for a skin patch whose exposed area is greater than a preset area threshold. If such a skin patch exists, its image is taken as the target skin image.
It will be appreciated that, since each part's image region in the initial image may contain a skin patch whose exposed area exceeds the preset area threshold, there may be a target skin image for each of several parts.
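The per-part area check described above can be sketched as follows. The RGB skin test is a deliberately crude stand-in for whatever skin segmentation the real system would use, and the pixel format is assumed:

```python
# Sketch: after pose detection yields a crop per body part, keep the
# parts whose exposed-skin pixel count exceeds a preset area threshold.

def looks_like_skin(px):
    """Very rough RGB skin heuristic (illustrative only)."""
    r, g, b = px
    return r > 95 and g > 40 and b > 20 and r > g and r > b

def select_target_skin_patches(part_crops, area_threshold):
    """part_crops: {part_name: 2-D list of (r, g, b) pixels}.
    Returns the crops whose skin-pixel area exceeds the threshold."""
    selected = {}
    for name, crop in part_crops.items():
        area = sum(looks_like_skin(px) for row in crop for px in row)
        if area > area_threshold:
            selected[name] = crop
    return selected
```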
S12: and obtaining a first health parameter predicted value of the target object according to the target skin image.
As an example of the present application, the first health parameter predicted value is used to describe a first type of health condition of the target object.
The first type of health condition may include one or more of skin state, heart rate variability, respiration value, blood oxygen value, and PPG (photoplethysmography) waveform — information obtainable by directly observing the skin's appearance and the volume changes of the subcutaneous capillaries.
In this embodiment, the target skin image shows the skin of the target object, and information about the target object can be read from the skin; for example, the approximate age or gender of the target object can be inferred from the wrinkle state of the skin. Therefore, after the target skin image of the target object is acquired, it can be processed to obtain a first health parameter predicted value representing the first type of health condition of the target object, providing an informational reference for understanding the target object's health condition.
In application, a large number of blood vessels lie under the skin. As the target object breathes, the blood volume in the vessels changes with the respiratory motion, so the vessels contract and relax rhythmically. Consequently, under external illumination, the color shown at the positions corresponding to the vessels also changes over time. After the target skin images of the target object are acquired, the information recorded at the vessel positions — such as changes in color depth and in vessel size — can be read from the changes across the images to determine the blood flow in the target object's vessels, and thus the target object's respiratory rhythm, heart rate, PPG waveform, blood oxygen, and so on, yielding the first health parameter predicted value of the target object.
It can be understood that, when the first health parameter predicted value is obtained from the target skin image, data such as respiratory rhythm, heart rate, blood oxygen, and PPG waveform are derived from the skin conditions at different time points; therefore, one or more target skin images may be used, combining the images corresponding to different time points to obtain the first health parameter predicted value.
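A minimal remote-PPG sketch of deriving a heart-rate value from the per-frame green-channel means of the skin region: the dominant frequency in the heart-rate band is converted to beats per minute. The frame rate, frequency band, and green-channel choice are assumptions rather than details from the patent:

```python
# Sketch: estimate heart rate from a sequence of skin-region green means.
# Assumes a fixed frame rate and a 0.7-3.0 Hz (~42-180 bpm) search band.
import numpy as np

def heart_rate_bpm(green_means, fps):
    signal = np.asarray(green_means, dtype=float)
    signal = signal - signal.mean()            # remove the DC component
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 3.0)     # plausible heart-rate band
    peak_freq = freqs[band][np.argmax(spectrum[band])]
    return 60.0 * peak_freq                    # Hz -> beats per minute
```

A real system would first stabilize the skin region and filter out illumination and motion artifacts; this sketch only shows the frequency-analysis core.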
Alternatively, the target skin image is input into a fourth health parameter prediction model for processing to obtain the first health parameter predicted value.
It can be understood that the fourth health parameter prediction model is trained on sample skin images. In some embodiments, the sample skin images are labeled differently according to different health detection requirements, so that different health parameters can be obtained by processing the target skin image with the fourth health parameter prediction model.
For example, to learn the skin state of the target object, the target skin image of the target object is input into the fourth health parameter prediction model for processing, and a first health parameter predicted value indicating that the target object's skin is dehydrated is obtained — that is, the target object may not be drinking enough water, resulting in dry, fine-lined, lustreless skin. Alternatively, information indicating that the target object has a darker skin tone is obtained.
For example, in a specific implementation scenario, the first health parameter predicted value is a heart rate value and the target skin image comprises multiple images, each corresponding to the subcutaneous blood vessel condition at one moment. By combining the vessel conditions at the successive moments corresponding to the multiple target skin images, the contraction and relaxation rhythm of the vessels can be inferred, and from it the heart rate variation of the target object — that is, the target object's heart rate value.
In one embodiment, there may be multiple target skin images corresponding to different body parts of the target object; the multiple images may be captured by one image acquisition device or by several image acquisition devices.
Furthermore, when there are multiple target skin images corresponding to different body parts, multiple first health parameter predicted values are determined for the target object, and these values may differ across body parts because the image information in each is affected differently by the environment or by the target object's motion.
Illustratively, camera 1 captures multi-frame image a of the forehead and camera 2 captures multi-frame image b of the forehead; the heart rate determined from image a is 70 beats/minute, while that from image b is 65 beats/minute.
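When different cameras yield different first predicted values for the same parameter (70 vs. 65 beats/minute in the example), one simple, hypothetical reconciliation — not prescribed by the patent — is a weighted average:

```python
# Illustrative fusion of per-camera first health parameter predictions.
# Equal weights by default; weights could reflect image quality.

def fuse_predictions(values, weights=None):
    if weights is None:
        weights = [1.0] * len(values)
    total = sum(weights)
    return sum(v * w for v, w in zip(values, weights)) / total
```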
S13: inputting the first health parameter predicted value into a trained first health parameter prediction model for processing, and outputting a second health parameter predicted value of the target object through the first health parameter prediction model; the first health parameter prediction model is obtained by training a first sample skin image and a corresponding first sample health parameter value as a training set.
As an example of the present application, the second health parameter predicted value is used to describe a second type of health condition of the target object.
The second type of health condition may include health parameters of a long-term disease category and/or of a long-term index category. The health parameters of the long-term disease category may include one or more of blood pressure, cardiovascular risk, blood sugar level, atrial fibrillation, diabetes, blood alcohol concentration, and depression characterization parameters; the health parameters of the long-term index category may include one or more of stress, recovery capacity, exercise capacity, and body age.
In this embodiment, since relatively hidden health parameters such as blood pressure are not easy to obtain when skin images are processed by techniques such as machine vision, the first health parameter prediction value is input into the trained first health parameter prediction model for processing, so as to obtain relatively hidden health parameters capable of describing the target object, thereby facilitating more specific understanding of the health condition of the target object, and providing a reference for pushing more accurate health attention information to the target object.
The terminal equipment is pre-stored with a pre-trained first health parameter prediction model. The first health parameter prediction model is obtained by training an initial first health parameter prediction model based on a training set by using a deep learning algorithm.
It can be understood that the first health parameter prediction model may be trained in advance by the terminal device, or a file corresponding to the first health parameter prediction model may be transplanted to the terminal device after being trained in advance by another device. That is, the executing agent that trains the first health parameter prediction model may be the same as or different from the executing agent that performs health parameter prediction using the model. For example, when the initial first health parameter prediction model is trained by other equipment, the model parameters of the initial model are fixed after training to obtain a file corresponding to the first health parameter prediction model, and this file is then transplanted to the terminal device.
In application, for different detection requirements, a plurality of first health parameter prediction models can be trained respectively based on a plurality of training sets. For example, in order to detect the skin state of the forehead, a corresponding model is trained using sample health parameter values related to the skin state and sample skin images as a training set, so that the trained model can be used to predict the health state of the skin.
It will be appreciated that the first health parameter prediction value input into the trained first health parameter prediction model for processing may be one or more, for example the first health parameter prediction value input into the first health parameter prediction model may comprise one or more of a heart rate value, heart rate variability, blood oxygen value, respiration value, PPG waveform.
For example, a target skin image of the arm and a target skin image of the forehead of the target object are acquired respectively; one first health parameter predicted value, such as a heart rate value and a respiration value, is obtained from the target skin image of the arm, and another first health parameter predicted value, such as another heart rate value and respiration value, is obtained from the target skin image of the forehead. The heart rate and respiration values calculated from the arm image, the heart rate and respiration values calculated from the forehead image, and other parameters such as heart rate variability, the PPG waveform, age, and gender are then input into the first health parameter prediction model for processing to obtain a blood pressure value.
In an embodiment, the target skin image comprises a plurality of skin areas, for example a forehead area and an arm area, for determining the first health parameter prediction value.
The process of determining a weight value for a first health parameter predicted value comprises: determining a plurality of skin areas in the target skin image; determining the signal-to-noise ratio of each skin area; determining the weight value of each target skin image based on its signal-to-noise ratio; and taking that weight value as the weight value of the first health parameter predicted value corresponding to that target skin image.
In an embodiment of the present application, the first health parameter prediction value and the target skin image are input into a trained second health parameter prediction model for processing, and a fourth health parameter prediction value of the target object is output through the second health parameter prediction model, where the second health parameter prediction model is obtained by training a second sample skin image and a corresponding second sample health parameter value as a training set.
As an example of the present application, the fourth health parameter prediction value is used to describe a third class of health condition of the target object.
Wherein the third type of health condition may refer to a real-time psychological index of the target subject. For example, it is known through a third type of health condition whether the target subject is currently stressed, focused, relaxed, drowsy, and so on.
The second sample health parameter value refers to the same type of health parameter corresponding to the first health parameter prediction value. For example, where the first health parameter predicted value is a heart rate value, the second sample health parameter value is also a heart rate value.
It can be understood that, since a psychological index is not only manifested in the outward appearance of the target object but also corresponds to vital sign information of the target object, such as heart rate and blood oxygen, predicting the real-time psychological index of the target object requires combining the first health parameter predicted value, the target skin image, and the second health parameter prediction model to obtain the fourth health parameter predicted value. The fourth health parameter predicted value then represents the psychological condition of the target object while being monitored, so that information can be pushed to the target object in a targeted manner.
For example, in a scenario where the target object is driving a vehicle, driving safety can be effectively improved by monitoring the physiological indexes of the target object in real time, which can prevent the vehicle from going out of control when the driver develops a physiological problem. Specifically, a target skin image of the target object is acquired and a first health parameter predicted value is obtained; the first health parameter predicted value and the target skin image are input into the trained second health parameter prediction model, and the second health parameter prediction model outputs a fourth health parameter predicted value that can describe whether the target object is in a drowsy state. This advances the detection time of fatigue driving: instead of determining that the target object is fatigued only when a yawn is detected, the scheme of this embodiment can determine the degree of activity of the sympathetic and parasympathetic nerves of the target object, so that earlier detection of drowsiness is achieved.
In another embodiment, target skin images of a target object acquired by one or more image acquisition devices are acquired, and a plurality of first health parameter predicted values are obtained according to the target skin images acquired by the different image acquisition devices; and the weight value of each first health parameter predicted value is determined according to the signal-to-noise ratio of the target skin image corresponding to that predicted value.
And obtaining a fifth health parameter predicted value according to each first health parameter predicted value and the corresponding weight value.
And inputting the fifth health parameter predicted value and the target skin image into a trained second health parameter prediction model for processing, and outputting a fourth health parameter predicted value of the target object through the second health parameter prediction model.
In this embodiment, the third health parameter prediction value and the fifth health parameter prediction value may be the same or different.
With reference to fig. 2, in an embodiment of the present application, a specific implementation of acquiring a target skin image of a target object includes:
S21: A plurality of skin images of a target subject are acquired.
S22: and selecting a skin image containing the characteristics of the target image from the plurality of skin images as the target skin image.
In the present embodiment, the target image feature is an image feature determined according to actual needs.
In the present embodiment, in order to better understand the health condition of the target object, the multiple skin images of the target object are continuously acquired images, such as 10 consecutive skin images collected by the image acquisition device. However, since the target object may move, the image acquisition device cannot always capture a skin image containing the target image feature, so the multiple acquired skin images need to be screened: with reference to the target image feature, the skin images containing the target image feature are selected from the multiple skin images as the target skin images.
It can be understood that, in an application, the plurality of skin images of the target object acquired by the terminal device through the preset image acquisition device may be acquired by one image pickup device or by a plurality of image pickup devices. When partial images of a part of the target object are acquired by a plurality of image capturing devices, the partial regions acquired by each device are combined to obtain one skin image, and further skin images are obtained in the same way.
For example, a camera 1 and a camera 2 are provided at site A; 10 skin images of the forehead of elderly person A are acquired by the camera 1, and 10 skin images of the forehead of elderly person A are acquired by the camera 2. Alternatively, at site A, the area image 1 of the forehead acquired by the camera 1 and the area image 2 of the forehead acquired by the camera 2 are stitched together to obtain one skin image of the forehead.
In application, the target image feature in the skin image may be determined based on a feature selection algorithm.
In an embodiment, when the number of skin images including the target image feature selected from the plurality of skin images is greater than a preset threshold, the selected skin image including the target image feature is used as the target skin image.
With reference to fig. 3, as a possible implementation manner of this embodiment, selecting a skin image including a target image feature from a plurality of skin images includes:
s31: a first skin image containing the target image feature is selected from the plurality of skin images.
S32: first feature pixel points are selected from an image region of a target image feature contained in the first skin image.
S33: and determining the first skin image and the second skin image as target skin images, wherein the second skin image is a skin image of a second characteristic pixel point corresponding to the first characteristic pixel point in the plurality of skin images through a visual tracking algorithm.
As an example of the present application, the first feature pixel point refers to a pixel point in the skin image for identifying a feature of the target image. For example, pixel points corresponding to the arm skin area in the skin image.
In this embodiment, in order to determine whether the skin images acquired by the image acquisition device can properly describe the designated region, a first skin image containing the target image feature is selected from the multiple skin images, and a first feature pixel point is selected from the image region of the target image feature contained in the first skin image. Taking the first feature pixel point of the first skin image as a reference, the position of this pixel point in the other skin images is tracked by a visual tracking algorithm; each skin image in which a second feature pixel point corresponding to the first feature pixel point is found by the visual tracking algorithm is taken as a second skin image, and the first skin image and the second skin images are determined as the target skin images.
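The tracking step above can be sketched as follows. The patent does not name a specific visual tracking algorithm, so this minimal sum-of-squared-differences template matcher stands in for it (real systems would typically use, e.g., Lucas-Kanade optical flow); `track_point`, its parameters, and the grayscale frame layout are all illustrative assumptions:

```python
import numpy as np

def track_point(prev_frame, next_frame, point, patch=5, search=8):
    """Locate in next_frame the pixel whose surrounding patch best matches
    the patch around `point` in prev_frame, by exhaustive search over a
    small window (a minimal stand-in for a visual tracking algorithm)."""
    y, x = point
    tmpl = prev_frame[y - patch:y + patch + 1, x - patch:x + patch + 1].astype(float)
    best_cost, best_pos = np.inf, point
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            yy, xx = y + dy, x + dx
            cand = next_frame[yy - patch:yy + patch + 1, xx - patch:xx + patch + 1]
            if cand.shape != tmpl.shape:
                continue  # candidate window falls outside the frame
            cost = np.sum((cand.astype(float) - tmpl) ** 2)
            if cost < best_cost:
                best_cost, best_pos = cost, (yy, xx)
    return best_pos  # (row, col) of the second characteristic pixel point
```

A frame in which the matching cost stays above a threshold would be discarded rather than kept as a second skin image.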
In an embodiment of the application, a target skin image of a target object acquired by more than or equal to one camera shooting acquisition device is acquired, and a plurality of first health parameter predicted values are obtained according to the target skin image acquired by different camera shooting acquisition devices.
In this embodiment, when a target object enters an image capturing area of each image capturing apparatus, an image containing the target object may be captured by each image capturing apparatus disposed in the image capturing area.
As an example, the image capturing area A is correspondingly provided with 2 cameras, namely a camera 1 and a camera 2, for capturing skin images of the target object. A plurality of target skin images containing the forehead of the target object are obtained by the camera 1, and a plurality of such images are obtained by the camera 2. Because the shooting angles of the camera 1 and the camera 2 differ, the captured images differ, and a plurality of first health parameter predicted values can be obtained from the target skin images collected by the different devices. For example, because the shooting angles differ, the brightness of the forehead skin in the captured images differs and the captured skin color differs, so the blood-vessel color reflected by the skin image differs, and the first health parameter predicted value obtained by processing the target skin images captured by each camera therefore differs. For instance, the heart rate of the target object is determined to be 70 beats/minute based on the target skin image captured by the camera 1, and 65 beats/minute based on the target skin image captured by the camera 2.
It is understood that, in order to calculate the second health parameter prediction value based on the first health parameter prediction value, when there are a plurality of first health parameter prediction values, a weighted average calculation may be performed on the plurality of first health parameter prediction values, and the calculated average value may be input to the trained first health parameter prediction model for processing. Or, determining a weight value of each first health parameter prediction value based on image information, such as a signal-to-noise ratio, of the target skin image corresponding to each first health parameter prediction value, and then determining a third health parameter prediction value for inputting the first health parameter prediction model according to each first health parameter prediction value and the corresponding weight value, so that the first health parameter prediction model outputs a second health parameter prediction value.
In an embodiment, inputting the first health parameter prediction value into a trained first health parameter prediction model for processing, and outputting the second health parameter prediction value of the target subject through the first health parameter prediction model includes:
and determining the weight value of each first health parameter predicted value according to the signal-to-noise ratio of the target skin image corresponding to each first health parameter predicted value.
And obtaining a third health parameter predicted value according to each first health parameter predicted value and the corresponding weight value.
Inputting the third health parameter predicted value and the corresponding weight value into the trained first health parameter prediction model for processing, and outputting a second health parameter predicted value of the target object through the first health parameter prediction model.
As an example of the present application, the signal-to-noise ratio is used to describe the sharpness of an image captured by a camera capture device.
In this embodiment, in order to better reflect the first type of physiological health condition of the target object and obtain the second health parameter prediction value through more accurate calculation by the first health parameter prediction model, the weight value of each first health parameter prediction value is determined according to the signal-to-noise ratio of the target skin image corresponding to each first health parameter prediction value, so as to obtain a parameter for integrally representing the first type of physiological health condition of the target object according to each first health parameter prediction value and the weight value of each first health parameter prediction value, and the first health parameter prediction model processes the parameter and outputs the second health parameter prediction value of the target object.
Illustratively, the first health parameter predicted value is a heart rate predicted value. The signal-to-noise ratio of the target skin image of the forehead captured by the camera 1 is 55 dB and its heart rate predicted value is 70 beats/minute; the signal-to-noise ratio of the target skin image of the forehead captured by the camera 2 is 45 dB and its heart rate predicted value is 68 beats/minute. Based on the two signal-to-noise ratios, the weight values are determined in proportion to be 55% and 45%, and the parameter input as the first health parameter predicted value is then calculated as 70 × 0.55 + 68 × 0.45 = 69.1 beats/minute.
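This proportional weighting and fusion can be checked with a few lines of code; `fuse_predictions` is a hypothetical helper, since the patent does not fix the weighting rule beyond the worked numbers:

```python
def fuse_predictions(values, snrs):
    """Weight each camera's estimate in proportion to the signal-to-noise
    ratio of its target skin image, then return the weighted average
    (proportional weighting is an assumption matching the worked example)."""
    total = sum(snrs)
    weights = [s / total for s in snrs]
    fused = sum(v * w for v, w in zip(values, weights))
    return fused, weights

rate, weights = fuse_predictions([70, 68], [55, 45])
# weights ~ 0.55 / 0.45, rate ~ 69.1 beats/minute
```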
It is understood that, when the third health parameter prediction value is obtained according to each first health parameter prediction value and the corresponding weight value, the health type respectively described by each first health parameter prediction value is the same, for example, the health type is a heart rate value or a respiration value.
In an embodiment of the present application, a specific implementation of acquiring a target skin image of a target object includes:
skin images of a plurality of designated sites of a target object are acquired.
A specified number of skin images are selected from the skin images of the specified portions as target skin images.
In the present embodiment, since the skin image of the target object is captured by the image capturing apparatus, there is a possibility that skin images of a plurality of designated portions, such as an arm and a forehead, are captured.
In an embodiment of the present application, a specific implementation of extracting an image feature of a target skin image includes:
and performing skin area detection processing on the target skin image to obtain a skin area and a non-skin area in the target skin image.
And obtaining a first health parameter predicted value of the target object according to the skin area.
In this embodiment, the target skin image acquired by the image acquisition device may include non-skin areas. If the health condition of the target object were derived directly from the whole target skin image, the result would carry a large error due to the influence of the non-skin areas. Skin area detection is therefore performed on the target skin image to separate the skin areas from the non-skin areas, so that the first health parameter predicted value of the target object is obtained from the skin areas only.
Specifically, in order to extract and obtain information capable of representing the skin area from the skin area more accurately, the identification of the skin pixel is performed on each pixel point in the target skin image, so that the skin area and the non-skin area in the target skin image are obtained more accurately. And then, calculating to obtain a first health parameter predicted value of the target object according to the pixel value of the pixel point of the identified skin area.
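A minimal sketch of such a per-pixel skin/non-skin split is shown below. The patent does not specify its detector, so the classic explicit RGB rule used here is only an illustrative assumption:

```python
import numpy as np

def skin_mask(rgb):
    """Per-pixel skin detection with a classic explicit RGB rule
    (R>95, G>40, B>20, max-min>15, |R-G|>15, R>G, R>B). Returns a
    boolean mask: True for skin pixels, False for non-skin pixels."""
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    mx = np.maximum(np.maximum(r, g), b)
    mn = np.minimum(np.minimum(r, g), b)
    return ((r > 95) & (g > 40) & (b > 20) & (mx - mn > 15)
            & (np.abs(r - g) > 15) & (r > g) & (r > b))
```

In practice such a rule is only a first pass; lighting changes usually call for a learned or adaptive detector.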
For example, a signal-to-noise ratio of the skin area in the target skin image is calculated based on RGB values or gray values of skin pixels in the skin area in the target skin image, and a first health parameter prediction value is derived based on the signal-to-noise ratio and a corresponding preset health parameter prediction value.
In an embodiment of the present application, inputting the first health parameter prediction value into a trained first health parameter prediction model for processing, and outputting the second health parameter prediction value of the target object through the first health parameter prediction model specifically includes:
and determining a first health parameter predicted value of the target object according to the skin area in the target skin image.
And inputting the first health parameter predicted value into the first health parameter prediction model for processing to obtain a second health parameter predicted value.
In this embodiment, since the skin area can be used to reflect the health condition of the target object, the signal-to-noise ratio corresponding to each skin area may be calculated so that the information of the skin areas can be screened by their signal-to-noise ratios one by one. The health condition of the target object is then reflected by the screened skin areas: the first health parameter predicted value is determined from the skin area, and this value is input into the first health parameter prediction model for processing to obtain the second health parameter predicted value.
The signal-to-noise ratio is calculated according to the values corresponding to the channels of the skin pixels corresponding to the skin region in the target skin image, for example, the signal-to-noise ratio of the skin region is obtained by performing a weighting operation.
For example, for a target skin image, for each channel of skin pixels of a skin area of the target skin image, an average calculation is performed on the RGB values (color) or gray values (infrared) of the skin pixels to obtain the signal-to-noise ratio.
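The per-frame channel averaging described above can be sketched as follows; `snr_proxy` uses a simple mean-over-standard-deviation ratio, which is an assumption on our part since the exact SNR formula is left open by the text:

```python
import numpy as np

def skin_channel_trace(frames, mask, channel=1):
    """Average the given color channel over the skin mask for each frame,
    yielding the per-frame trace used downstream (the green channel,
    index 1, is a common choice in remote-PPG work)."""
    return np.array([f[..., channel][mask].mean() for f in frames])

def snr_proxy(trace):
    """Mean divided by standard deviation of the trace, used here as a
    simple signal-to-noise proxy (an illustrative assumption)."""
    return float(trace.mean() / (trace.std() + 1e-9))
```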
It can be understood that, when a plurality of target skin images corresponding to a specified portion are included, the signal-to-noise ratio corresponding to the specified portion is calculated according to the skin area corresponding to each target skin image. And when a plurality of designated parts exist, obtaining a first health parameter predicted value according to the skin area corresponding to the highest signal-to-noise ratio, and inputting the first health parameter predicted value into a first health parameter prediction model for processing to obtain a second health parameter predicted value.
In an embodiment of the present application, after obtaining the health parameter prediction value of the target subject, the method further includes:
acquiring a first health parameter predicted value and a corresponding first historical health parameter predicted value;
and determining first information to be pushed according to the first health parameter predicted value and the corresponding first historical health parameter predicted value.
And displaying the first information to be pushed in a set mode.
And/or acquiring a second historical health parameter predicted value corresponding to the second health parameter predicted value;
determining second information to be pushed according to the second health parameter predicted value and the second historical health parameter predicted value;
and displaying the second information to be pushed in a set mode.
As an example of the present application, the first information to be pushed or the second information to be pushed refers to information capable of guiding the life of the target object. For example, when the health parameter predicted value indicates that the heart rate of the target object is high, the corresponding information to be pushed may be "reduce the amount of exercise". In this embodiment, in order to remind the target object in time when health is abnormal, after the first health parameter predicted value of the target object is obtained, the first health parameter predicted value and the corresponding first historical health parameter predicted value are acquired, and the corresponding first information to be pushed is determined from them; the second historical health parameter predicted value corresponding to the second health parameter predicted value is acquired, and the second information to be pushed is determined from the second health parameter predicted value and the second historical health parameter predicted value; the obtained first or second information to be pushed is then displayed to the target object.
For example, elderly person A sits on seat 1 of a bus. The heart rate signal of person A is obtained through the first health parameter predicted value, and when it is determined that person A has heart disease based on the heart rate variability signal corresponding to the heart rate signal, heart disease care knowledge is pushed to the information push screen of seat 1 where person A sits, so that person A can learn it.
In application, the setting mode includes but is not limited to voice, video and other modes.
In an embodiment, after obtaining a first health parameter predicted value and a second health parameter predicted value of a target object, position information corresponding to the target object and information of push information equipment capable of displaying information to be pushed, which exists at a position corresponding to the target object, are obtained, and according to the information of the push information equipment, the target push information equipment is selected from the push information equipment existing at the position corresponding to the target object, so that the information to be pushed is displayed through the target push information equipment, where the information to be pushed includes one or more of first information to be pushed and second information to be pushed.
In an embodiment, in order to enable the effect of information pushing to be better and enable the target object to acquire the pushed information, if a plurality of pieces of pushed information equipment exist at a position corresponding to the target object, a relative angle between each piece of pushed information equipment and the target object is acquired, and the piece of pushed information equipment with the smallest relative angle is taken as the target pushed information equipment.
In the present embodiment, the relative angle refers to a relative angle between the front surface of the push information apparatus and the front surface of the target object. For example, when the information pushing device is a device with a screen, the relative angle refers to an included angle between a face of the device with the screen and the face orientation of the target object.
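Under the assumption that both the screen's facing direction and the target's face orientation are available as 2-D direction vectors, the relative angle can be computed as below; the vector convention and the helper name are illustrative, as the patent does not define the computation (note that a screen directly facing the target has its normal opposite to the face direction, i.e. a 180° angle between the two vectors):

```python
import math

def relative_angle(device_normal, face_direction):
    """Angle in degrees between the screen's outward normal and the
    direction the target's face points (both given as 2-D vectors)."""
    ax, ay = device_normal
    bx, by = face_direction
    dot = ax * bx + ay * by
    norm = math.hypot(ax, ay) * math.hypot(bx, by)
    # clamp to guard against floating-point drift outside [-1, 1]
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))
```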
In an embodiment, in order to remind the target object in time when the health is abnormal, after obtaining the second health parameter predicted value of the target object, the method further includes: obtaining historical health parameter predicted values within a preset time period according to the type information corresponding to the second health parameter predicted value, and determining the corresponding second information to be pushed according to the historical health parameter predicted values and the second health parameter predicted value.

In one embodiment, the health portrait of the target object is obtained according to one or more of the first health parameter predicted value, the second health parameter predicted value, and the fourth health parameter predicted value of the target object.
In this embodiment, the health portrait is used to describe the health status of the target object, and the health parameters are obtained to update the data in the health portrait.
It is understood that the health of the target object can be managed in a targeted manner through the information recorded in the health portrait, for example by suggesting disease prevention, suggesting sudden death prevention, and indicating matters requiring attention.
In an embodiment, the first health parameter prediction model is a diabetes prediction model.
The specific implementation that the first health parameter prediction value is input into a trained first health parameter prediction model for processing, and the second health parameter prediction value of the target object is output through the first health parameter prediction model comprises the following steps:
obtaining a heart rate signal of the target object according to the first health parameter predicted value; obtaining a heart rate variability signal of the target object according to the heart rate signal of the target object; and inputting the heart rate signal and the heart rate variability signal of the target object into a diabetes prediction model to obtain a second health parameter prediction value for describing whether diabetes exists.
In this embodiment, a person's skin color changes subtly with the flow of blood. Based on this, after the target skin images are acquired, they may be processed by algorithms such as MRC (Maximum Ratio Combining) and PCA (Principal Component Analysis)/ICA (Independent Component Analysis) to obtain a first health parameter predicted value of the target object, such as a heart rate signal.
Illustratively, the target skin images are multiple, and each target skin image corresponds to a blood flow condition under the skin at a moment, and the change of the heart rate is reflected by the flow condition, and the multiple target skin images are processed by MRC or PCA, so that continuous heart rate change conditions can be obtained, and a heart rate signal can be obtained.
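A minimal, NumPy-only sketch of the PCA step is shown below, operating on per-frame mean R, G, B traces; detrending, band-pass filtering, and the MRC/ICA variants mentioned above are omitted, so this is only an illustration of the principle:

```python
import numpy as np

def pca_pulse(rgb_traces):
    """Extract the dominant joint variation of the per-frame mean R, G, B
    traces (shape (T, 3)) as a pulse-signal candidate, via the principal
    eigenvector of the 3x3 channel covariance matrix."""
    x = rgb_traces - rgb_traces.mean(axis=0)      # center each channel
    cov = np.cov(x, rowvar=False)                 # 3 x 3 channel covariance
    eigvals, eigvecs = np.linalg.eigh(cov)
    principal = eigvecs[:, np.argmax(eigvals)]    # direction of max variance
    return x @ principal                          # (T,) candidate pulse signal
```

Peak detection or spectral analysis on the returned signal would then yield the heart rate value.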
The heart rate signal comprises heart rate values corresponding to all time points in a period of time, and the heart rate variability signal of the target object in the period of time can be obtained according to the heart rate values of all the time points.
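Given heart rate values at successive time points, a heart rate variability signal can be derived as below; SDNN and RMSSD are standard time-domain measures chosen here as an assumption, since the text does not specify which variability measure it uses:

```python
import numpy as np

def hrv_from_heart_rates(bpm_values):
    """Convert per-time-point heart rates (beats/minute) to RR intervals
    in milliseconds and compute SDNN (standard deviation of intervals)
    and RMSSD (root mean square of successive differences)."""
    rr_ms = 60000.0 / np.asarray(bpm_values, dtype=float)
    sdnn = float(rr_ms.std(ddof=1))
    rmssd = float(np.sqrt(np.mean(np.diff(rr_ms) ** 2)))
    return sdnn, rmssd
```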
In this embodiment, a diabetes prediction model is established based on a deep learning method. A plurality of training samples are obtained and the diabetes prediction model is trained on them; the training samples include sample skin images, heart rate sample signals, heart rate variability sample signals, PPG curves, and the corresponding prediction results. Thus, after the heart rate signal is obtained from the target skin images, the final prediction result is obtained from the heart rate signal, the heart rate variability signal, and the diabetes prediction model, the prediction result indicating whether a risk of diabetes exists.
In this embodiment, the prediction result enables effective early warning in the early stage of diabetes onset, so that the target object can be reminded in time and the situation of missing the optimal treatment window due to late discovery is avoided.
As can be seen from the above, the embodiments of the present invention acquire a target skin image of a target object captured by an image acquisition device; obtain a heart rate signal of the target object from the target skin image; obtain a heart rate variability signal of the target object from the heart rate signal; and input the heart rate signal and the heart rate variability signal into the diabetes prediction model. By capturing a video of the target object, the heart rate signal can be obtained from the target skin images contained in the video, and whether the target object is at risk of diabetes can then be predicted without any contact detection, which avoids the infection risk that a blood test poses to the target object and makes the detection process more convenient.
In one embodiment, in order to obtain the health condition of the target object more accurately, a wearable device further collects health parameters of the target object; for example, a wearable smart watch collects the heart rate of the target object.
Fig. 4 shows a block diagram of a device for predicting health parameters according to an embodiment of the present application, which corresponds to the method for predicting health parameters according to the foregoing embodiment, and only shows portions related to the embodiment of the present application for convenience of description.
Referring to fig. 4, the apparatus 100 includes:
an acquisition module 101, configured to acquire a target skin image of a target object;
the first prediction module 102 is configured to obtain a first health parameter prediction value of a target object according to a target skin image;
the second prediction module 103 is used for inputting the first health parameter prediction value into the trained first health parameter prediction model for processing, and outputting a second health parameter prediction value of the target object through the first health parameter prediction model; the first health parameter prediction model is obtained by training a first sample skin image and a corresponding first sample health parameter value as a training set.
In an embodiment, the acquiring module 101 is further configured to acquire a plurality of skin images of the target object; and selecting a skin image containing the characteristics of the target image from the plurality of skin images as the target skin image.
In an embodiment, the obtaining module 101 is further configured to select a first skin image containing a feature of the target image from the plurality of skin images; select a first characteristic pixel point from the image area of the target image feature contained in the first skin image; and determine the first skin image and a second skin image as target skin images, wherein the second skin image is a skin image, among the plurality of skin images, in which a second characteristic pixel point corresponding to the first characteristic pixel point is located by a visual tracking algorithm.
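The tracking step can be sketched with a brute-force patch-matching search: locate, in the next frame, the pixel whose surrounding patch best matches the patch around the first characteristic pixel point. This is a minimal stand-in; the patent's visual tracking algorithm is not specified and could equally be optical flow or a dedicated tracker.

```python
import numpy as np

def track_point(prev_frame, next_frame, point, patch=3, search=5):
    """Find in next_frame the pixel corresponding to `point` in
    prev_frame by minimizing the sum of squared differences (SSD)
    between local patches within a small search window."""
    y, x = point
    ref = prev_frame[y - patch:y + patch + 1, x - patch:x + patch + 1]
    best, best_pt = np.inf, point
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            ny, nx = y + dy, x + dx
            cand = next_frame[ny - patch:ny + patch + 1,
                              nx - patch:nx + patch + 1]
            if cand.shape != ref.shape:
                continue  # window fell off the image border
            ssd = float(np.sum((cand - ref) ** 2))
            if ssd < best:
                best, best_pt = ssd, (ny, nx)
    return best_pt
```

Shifting a frame by a known offset and tracking a point recovers that offset exactly.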
In an embodiment, the acquiring module 101 is further configured to acquire target skin images of the target object captured by one or more camera acquisition devices.
The first prediction module 102 is further configured to obtain a plurality of first health parameter predicted values from the target skin images captured by the one or more camera acquisition devices.
The second prediction module 103 is further configured to determine a weight value of each first health parameter prediction value according to the signal-to-noise ratio of the target skin image corresponding to each first health parameter prediction value; obtaining a third health parameter predicted value according to each first health parameter predicted value and the corresponding weight value; and inputting the third health parameter predicted value into the trained first health parameter prediction model for processing, and outputting a second health parameter predicted value of the target object through the first health parameter prediction model.
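The fusion step above amounts to a weighted average whose weights are the per-camera signal-to-noise ratios; a minimal sketch, assuming the weights are simply the SNRs normalized to sum to one:

```python
import numpy as np

def fuse_predictions(values, snrs):
    """Combine per-camera first health parameter predicted values into
    a single third predicted value, weighting each by the
    signal-to-noise ratio of the skin image it came from."""
    values = np.asarray(values, dtype=float)
    snrs = np.asarray(snrs, dtype=float)
    weights = snrs / snrs.sum()  # normalize so weights sum to 1
    return float(weights @ values)
```

For example, heart rates of 70 and 80 BPM from cameras with SNRs 3 and 1 fuse to 72.5 BPM.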
In an embodiment, the acquiring module 101 is further configured to acquire skin images of a plurality of designated parts of the target object; a specified number of skin images are selected from the skin images of the specified portions as target skin images.
In an embodiment, the first prediction module 102 is further configured to perform a skin region detection process on the target skin image, so as to obtain a skin region and a non-skin region in the target skin image; and obtaining a first health parameter predicted value of the target object according to the skin area.
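The patent does not say how the skin region is detected; one classic approach, sketched here as an assumption, thresholds each pixel's Cr/Cb chrominance (BT.601 conversion) against the commonly used skin range Cr in [133, 173], Cb in [77, 127]:

```python
import numpy as np

def skin_mask(rgb_image):
    """Return a boolean mask of skin pixels using fixed YCrCb
    chrominance thresholds (a common heuristic, not the patent's
    stated method)."""
    rgb = np.asarray(rgb_image, dtype=float)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    # ITU-R BT.601 RGB -> Cr/Cb conversion.
    cr = 128 + 0.5 * r - 0.4187 * g - 0.0813 * b
    cb = 128 - 0.1687 * r - 0.3313 * g + 0.5 * b
    return (cr >= 133) & (cr <= 173) & (cb >= 77) & (cb <= 127)
```

The resulting mask splits the target skin image into skin and non-skin regions, and the first health parameter predicted value is then computed over the skin region only.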
In an embodiment, the apparatus further includes a third prediction module, configured to input the first health parameter predicted value and the target skin image into a trained second health parameter prediction model for processing, and output a fourth health parameter predicted value of the target object through the second health parameter prediction model, where the second health parameter prediction model is obtained by training with a second sample skin image and a corresponding second sample health parameter value as a training set.
In one embodiment, the apparatus further comprises a display module.
The display module is used for acquiring a first health parameter predicted value and a corresponding first historical health parameter predicted value; determining first information to be pushed according to the first health parameter predicted value and the first historical health parameter predicted value; and displaying the first information to be pushed in a set mode.
The display module is also used for acquiring a second historical health parameter predicted value corresponding to the second health parameter predicted value; determining second information to be pushed according to the second health parameter predicted value and the second historical health parameter predicted value; and displaying the second information to be pushed in a set mode.
The apparatus for predicting health parameters provided in this embodiment is used to implement any method for predicting health parameters in the method embodiments, where the functions of each module may refer to corresponding descriptions in the method embodiments, and the implementation principle and technical effect are similar, which are not described herein again.
Fig. 5 is a schematic structural diagram of a terminal device according to an embodiment of the present application. As shown in fig. 5, the terminal device 5 of this embodiment includes: at least one processor 50 (only one processor is shown in fig. 5), a memory 51, and a computer program 52 stored in the memory 51 and executable on the at least one processor 50, the steps of any of the various above-described embodiments of the method of predicting a health parameter being implemented by the processor 50 when the computer program 52 is executed.
The terminal device 5 may be a desktop computer, a notebook, a palm computer, a cloud server, or another computing device. The terminal device may include, but is not limited to, the processor 50 and the memory 51. Those skilled in the art will appreciate that fig. 5 is only an example of the terminal device 5 and does not constitute a limitation on it; the terminal device may include more or fewer components than those shown, or combine some components, or use different components, such as an input/output device, a network access device, and the like.
In some embodiments, the terminal device is connected to at least one camera device for acquiring the captured image from the camera device.
The processor 50 may be a Central Processing Unit (CPU), or another general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 51 may in some embodiments be an internal storage unit of the terminal device 5, such as a hard disk or a memory of the terminal device 5. In other embodiments, the memory 51 may also be an external storage device of the terminal device 5, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a flash card (Flash Card) provided on the terminal device 5. Further, the memory 51 may include both an internal storage unit of the terminal device 5 and an external storage device. The memory 51 is used for storing an operating system, application programs, a boot loader (BootLoader), data, and other programs, such as the program code of the computer program. The memory 51 may also be used to temporarily store data that has been output or is to be output.
It should be noted that, for the information interaction, execution process, and other contents between the above-mentioned devices/units, the specific functions and technical effects thereof are based on the same concept as those of the embodiment of the method of the present application, and specific reference may be made to the part of the embodiment of the method, which is not described herein again.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules, so as to perform all or part of the functions described above. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
An embodiment of the present application further provides a terminal device, where the terminal device includes: at least one processor, a memory, and a computer program stored in the memory and executable on the at least one processor, the processor implementing the steps of any of the various method embodiments described above when executing the computer program.
The embodiments of the present application further provide a computer-readable storage medium, where a computer program is stored, and when the computer program is executed by a processor, the steps in the above-mentioned method embodiments may be implemented.
The embodiments of the present application further provide a computer program product which, when run on a terminal device, causes the terminal device to implement the steps in the above method embodiments.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, all or part of the processes in the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium and, when executed by a processor, implements the steps of the above method embodiments. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, or some intermediate form. The computer-readable medium may include at least: any entity or device capable of carrying the computer program code to the apparatus/terminal device, a recording medium, a computer memory, a Read-Only Memory (ROM), a Random-Access Memory (RAM), an electrical carrier signal, a telecommunications signal, and a software distribution medium, such as a USB flash disk, a removable hard disk, a magnetic disk, or an optical disk.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other ways. For example, the above-described embodiments of the apparatus/terminal device are merely illustrative: the division of modules or units is only a division of logical functions, and other division manners are possible in actual implementation; for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
Units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
The above embodiments are only used to illustrate the technical solutions of the present application, and not to limit the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (10)

1. A method of predicting a health parameter, comprising:
acquiring a target skin image of a target object;
obtaining a first health parameter predicted value of the target object according to the target skin image;
inputting the first health parameter predicted value into a trained first health parameter prediction model for processing, and outputting a second health parameter predicted value of the target object through the first health parameter prediction model; wherein the first health parameter prediction model is obtained by training a first sample skin image and a corresponding first sample health parameter value as a training set.
2. The method of claim 1, wherein said acquiring a target skin image of a target subject comprises:
acquiring a plurality of skin images of the target object;
selecting a skin image containing a target image feature from the plurality of skin images as the target skin image.
3. The method of claim 2, wherein said selecting a skin image containing a target image feature from the plurality of skin images as the target skin image comprises:
selecting a first skin image containing the target image feature from the plurality of skin images;
selecting a first feature pixel point from an image region of the target image feature contained in the first skin image;
and determining the first skin image and a second skin image as the target skin image, wherein the second skin image is a skin image, among the plurality of skin images, in which a second characteristic pixel point corresponding to the first characteristic pixel point is obtained by a visual tracking algorithm.
4. The method according to claim 1, wherein target skin images of a target object captured by one or more camera acquisition devices are acquired, and a plurality of first health parameter predicted values are obtained from the target skin images captured by the different camera acquisition devices;
inputting the first health parameter predicted value into a trained first health parameter prediction model for processing, and outputting a second health parameter predicted value of the target object through the first health parameter prediction model, wherein the method comprises the following steps:
determining a weight value of each first health parameter predicted value according to the signal-to-noise ratio of the target skin image corresponding to each first health parameter predicted value;
obtaining a third health parameter predicted value according to each first health parameter predicted value and the corresponding weight value;
inputting the third health parameter predicted value into a trained first health parameter prediction model for processing, and outputting a second health parameter predicted value of the target object through the first health parameter prediction model.
5. The method of claim 1, wherein the deriving a first health parameter prediction value for the target subject from the target skin image comprises:
performing skin area detection processing on the target skin image to obtain a skin area and a non-skin area in the target skin image;
and obtaining a first health parameter predicted value of the target object according to the skin area.
6. The method of claim 1, further comprising:
inputting the first health parameter predicted value and the target skin image into a trained second health parameter prediction model for processing, and outputting a fourth health parameter predicted value of the target object through the second health parameter prediction model, wherein the second health parameter prediction model is obtained by taking a second sample skin image and a corresponding second sample health parameter value as a training set for training.
7. The method of any one of claims 1-6, further comprising:
acquiring a first health parameter predicted value and a corresponding first historical health parameter predicted value;
determining first information to be pushed according to the first health parameter predicted value and the first historical health parameter predicted value;
displaying the first information to be pushed in a set mode;
or/and acquiring a second historical health parameter predicted value corresponding to the second health parameter predicted value;
determining second information to be pushed according to the second health parameter predicted value and the second historical health parameter predicted value;
and displaying the second information to be pushed in a set mode.
8. An apparatus for predicting a health parameter, comprising:
the acquisition module is used for acquiring a target skin image of a target object;
the first prediction module is used for obtaining a first health parameter prediction value of the target object according to the target skin image;
the second prediction module is used for inputting the first health parameter prediction value into a trained first health parameter prediction model for processing, and outputting a second health parameter prediction value of the target object through the first health parameter prediction model; wherein the first health parameter prediction model is obtained by training a first sample skin image and a corresponding first sample health parameter value as a training set.
9. A terminal device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, the processor implementing the method of any one of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which, when executed by a processor, implements the method of any one of claims 1 to 7.
CN202111438525.XA 2021-11-30 2021-11-30 Method, device, terminal equipment and storage medium for predicting health parameters Pending CN114219772A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111438525.XA CN114219772A (en) 2021-11-30 2021-11-30 Method, device, terminal equipment and storage medium for predicting health parameters


Publications (1)

Publication Number Publication Date
CN114219772A true CN114219772A (en) 2022-03-22

Family

ID=80699121

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111438525.XA Pending CN114219772A (en) 2021-11-30 2021-11-30 Method, device, terminal equipment and storage medium for predicting health parameters

Country Status (1)

Country Link
CN (1) CN114219772A (en)

Similar Documents

Publication Publication Date Title
US20180122073A1 (en) Method and device for determining vital parameters
US10004410B2 (en) System and methods for measuring physiological parameters
Lamonaca et al. Health parameters monitoring by smartphone for quality of life improvement
US20090216092A1 (en) System for analyzing eye responses to accurately detect deception
Zhang et al. Heart rate extraction based on near-infrared camera: Towards driver state monitoring
WO2019140155A1 (en) Systems, devices, and methods for tracking and/or analyzing subject images and/or videos
KR101426750B1 (en) System for mearsuring heart rate using thermal image
WO2019173237A1 (en) Systems, devices, and methods for tracking and analyzing subject motion during a medical imaging scan and/or therapeutic procedure
KR20160115501A (en) Method and Apparatus for acquiring a biometric information
US20180279935A1 (en) Method and system for detecting frequency domain cardiac information by using pupillary response
Nie et al. SPIDERS: Low-cost wireless glasses for continuous in-situ bio-signal acquisition and emotion recognition
US20220218198A1 (en) Method and system for measuring pupillary light reflex with a mobile phone
Colantonio et al. Computer vision for ambient assisted living: Monitoring systems for personalized healthcare and wellness that are robust in the real world and accepted by users, carers, and society
US10070787B2 (en) System and method for detection and monitoring of a physical condition of a user
KR20210140808A (en) A smart inspecting system, method and program for nystagmus using artificial intelligence
Rescio et al. Ambient and wearable system for workers’ stress evaluation
US10631727B2 (en) Method and system for detecting time domain cardiac parameters by using pupillary response
Khanal et al. Physical exercise intensity monitoring through eye-blink and mouth’s shape analysis
KR20140057867A (en) System for mearsuring stress using thermal image
CN114219772A (en) Method, device, terminal equipment and storage medium for predicting health parameters
Gupta et al. A supervised learning approach for robust health monitoring using face videos
Remeseiro et al. Automatic eye blink detection using consumer web cameras
Bevilacqua et al. Proposal for non-contact analysis of multimodal inputs to measure stress level in serious games
CN113011286B (en) Squint discrimination method and system based on deep neural network regression model of video
Malini Non-Contact Heart Rate Monitoring System using Deep Learning Techniques

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination