CN112382390A - Method, system and storage medium for generating health assessment report - Google Patents


Info

Publication number
CN112382390A
CN112382390A (application CN202011239452.7A)
Authority
CN
China
Prior art keywords
user
body part
image
assessment report
generating
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011239452.7A
Other languages
Chinese (zh)
Inventor
邢玉川
于震江
何钟强
赵俊
池京男
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Jingdong Tuoxian Technology Co Ltd
Original Assignee
Beijing Jingdong Century Trading Co Ltd
Beijing Wodong Tianjun Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Jingdong Century Trading Co Ltd, Beijing Wodong Tianjun Information Technology Co Ltd filed Critical Beijing Jingdong Century Trading Co Ltd
Priority to CN202011239452.7A priority Critical patent/CN112382390A/en
Publication of CN112382390A publication Critical patent/CN112382390A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 50/00 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H 50/30 ICT for calculating health indices; for individual health risk assessment
    • G16H 15/00 ICT specially adapted for medical reports, e.g. generation or transmission thereof
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0012 Biomedical image inspection
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30004 Biomedical image processing
    • G06T 2207/30196 Human being; Person
    • G06T 2207/30201 Face

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Epidemiology (AREA)
  • Primary Health Care (AREA)
  • Pathology (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Biomedical Technology (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
  • Measuring And Recording Apparatus For Diagnosis (AREA)

Abstract

The disclosure provides a method, a system and a storage medium for generating a health assessment report, and relates to the field of computer technology. In the present disclosure, sensing data relating to a user's health condition is obtained; an image of a body part of the user is acquired; and a health assessment report for the user is generated from the sensing data and the image of the body part. Because the report combines the user's sensing data with images of the user's body parts, the assessment result is more accurate and comprehensive than one obtained by analyzing the health condition from sensor data alone.

Description

Method, system and storage medium for generating health assessment report
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a method, a system, and a storage medium for generating a health assessment report.
Background
Nowadays, people pay increasing attention to their health, and the traditional way to learn one's health condition is to be examined at a hospital or physical examination institution. Such an examination often takes several days and incurs a high cost.
With the development of intelligent sensor technology, some medical-grade intelligent sensors are applied to intelligent devices such as intelligent watches and intelligent scales, and people can obtain indexes such as electrocardiogram, heart rate and body fat rate without going out.
Disclosure of Invention
The inventor finds that in the related art, it is difficult to accurately and comprehensively evaluate the health condition of the user only based on the sensing data such as the electrocardiographic data, the heart rate data, the body fat data and the like.
To this end, the disclosed embodiments accurately and comprehensively assess the health condition of the user based on the sensing data related to the health condition of the user in combination with the images of the body parts.
According to some embodiments of the present disclosure, there is provided a method of generating a health assessment report, comprising: acquiring sensing data related to the health condition of a user; acquiring an image of a body part of a user; generating a health assessment report for the user from the sensory data and the image of the body part of the user.
In some embodiments, said acquiring an image of a body part of a user comprises: detecting key point coordinates of a body part; adjusting the position of the preview frame according to the key point coordinates of the body part; in a case where the image of the body part is completely displayed in the preview frame, the image of the body part is acquired.
In some embodiments, the adjusting the position of the preview frame according to the key point coordinates of the body part includes: determining a transformation coordinate for transforming the key point coordinate of the body part from a screen coordinate system to a preview coordinate system; and adjusting the position of the preview frame according to the transformed coordinates.
In some embodiments, further comprising: carrying out gray level processing on the image of the body part of the user to obtain a gray level image; performing convolution processing on the gray level image; calculating the variance of the gray level image after convolution processing; in the case that the variance is greater than a preset threshold, performing a step of generating a health assessment report of the user; in case the variance is not larger than a preset threshold, the step of acquiring an image of the body part of the user is repeatedly performed.
In some embodiments, further comprising: one or more of a frame extraction process and a reduction process are performed on the image of the body part of the user before the image of the body part of the user is subjected to the gradation process.
In some embodiments, further comprising: detecting a left rotation angle and a right rotation angle of a body part of a user; performing a step of acquiring an image of a body part of a user in a case where a left rotation angle and a right rotation angle of the body part are within a preset range; and prompting the user to rotate the body part if the left rotation angle or the right rotation angle of the body part is not within a preset range, and repeatedly executing the step of detecting the left rotation angle and the right rotation angle of the body part of the user.
In some embodiments, further comprising: calculating the ratio of the size of an enclosing frame enclosing the body part of the user to the size of a preview frame; performing a step of acquiring an image of a body part of the user in a case where the ratio is within a preset range; and under the condition that the ratio is not in the preset range, prompting the user to approach or leave the camera, and repeatedly executing the step of calculating the ratio of the size of the bounding box to the size of the preview box.
In some embodiments, the body part comprises a human face or a tongue.
In accordance with still further embodiments of the present disclosure, there is provided a system for generating a health assessment report, comprising:
the sensor is used for acquiring sensing data related to the health condition of the user;
a camera for acquiring an image of a body part of a user;
a processor configured to perform the method of generating a wellness assessment report of any of the embodiments.
In some embodiments, the system further comprises one or more of: a power frequency trap for filtering power-frequency interference from the sensing data; a filter for filtering electromyographic-signal interference from the sensing data; a signal amplifier for signal gain amplification; and a signal converter for converting the sensing data from an analog signal to a digital signal.
In some embodiments, the sensor comprises one or more of an electrocardiogram (ECG) sensor, a photoplethysmography (PPG) sensor, and a bioelectrical impedance measurement (BIM) sensor.
In some embodiments, further comprising: and the voice interaction module is configured to send a voice prompt to prompt the user to adjust the pose.
According to still further embodiments of the present disclosure, there is provided a non-transitory computer readable storage medium having stored thereon a computer program that, when executed by a processor, implements the method of generating a health assessment report of any of the embodiments.
Drawings
The drawings that will be used in the description of the embodiments or the related art will be briefly described below. The present disclosure can be understood more clearly from the following detailed description, which proceeds with reference to the accompanying drawings.
It is to be understood that the drawings in the following description are merely exemplary of the disclosure, and that other drawings may be derived from those drawings by one of ordinary skill in the art without undue inventive faculty.
FIG. 1 illustrates a flow diagram of a method of generating a wellness assessment report in accordance with some embodiments of the present disclosure.
Fig. 2 illustrates a flow diagram of a method of acquiring an image of a body part of a user, according to some embodiments of the present disclosure.
FIG. 3a illustrates a schematic view of a short user U1 whose face cannot be displayed normally in the preview box, according to some embodiments of the present disclosure.
Fig. 3b illustrates a schematic diagram of capturing the face of a user U1, according to some embodiments of the present disclosure.
FIG. 4a illustrates a schematic view of a tall user U2 whose face cannot be displayed normally in the preview box, according to some embodiments of the present disclosure.
Fig. 4b shows a schematic diagram of capturing the face of a user U2, according to some embodiments of the present disclosure.
FIG. 5 shows a schematic diagram of a method of generating a wellness assessment report, in accordance with further embodiments of the present disclosure.
FIG. 6 illustrates a flow diagram for determining whether a variance of an image is greater than a preset threshold, according to some embodiments of the present disclosure.
FIG. 7 illustrates a schematic diagram of a method of generating a wellness assessment report, in accordance with further embodiments of the present disclosure.
FIG. 8 illustrates a schematic diagram of a method of generating a wellness assessment report, in accordance with still further embodiments of the present disclosure.
FIG. 9 illustrates a schematic diagram of a method of generating a wellness assessment report, in accordance with yet further embodiments of the present disclosure.
FIG. 10 illustrates a schematic diagram of a system that generates a wellness assessment report, according to some embodiments of the present disclosure.
Detailed Description
The technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the drawings in the embodiments of the present disclosure.
FIG. 1 illustrates a flow diagram of a method of generating a wellness assessment report in accordance with some embodiments of the present disclosure. The method may be performed, for example, by a system that generates a health assessment report.
As shown in fig. 1, the method of this embodiment includes steps 120-160.
At step 120, sensory data relating to the health condition of the user is obtained.
The sensing data may be acquired from medical-grade smart body sensors such as an electrocardiogram (ECG) sensor, a photoplethysmography (PPG) sensor and a bioelectrical impedance measurement (BIM) sensor, but is not limited to these examples. These sensors offer low power consumption, small size, low cost and high precision, and are widely used in smartphones, smart watches, smart bracelets and smart scales, so a system that generates health assessment reports can conveniently acquire the sensing data from such smart devices.
In some embodiments, the sensed data may also be processed by one or more of: the method comprises the steps of filtering power frequency interference of sensing data by using a power frequency trap, filtering electromyographic signal interference of the sensing data by using a filter (such as a Butterworth low-pass filter), performing signal gain amplification by using a signal amplifier, and converting a sensor analog signal into a digital signal by using a signal converter.
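To illustrate the electromyographic-interference filtering step in software, here is a minimal Python sketch using a zero-phase Butterworth low-pass filter via SciPy. The sampling rate, cutoff frequency and synthetic signal are illustrative assumptions; the power-frequency trap, amplifier and analog-to-digital converter are hardware stages not modeled here.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def suppress_emg_interference(ecg, fs=250.0, cutoff=35.0, order=4):
    """Attenuate high-frequency EMG-like interference in an ECG trace
    with a zero-phase Butterworth low-pass filter (illustrative values)."""
    b, a = butter(order, cutoff, btype="low", fs=fs)
    return filtfilt(b, a, ecg)

# Synthetic trace: a slow "cardiac" wave plus high-frequency noise.
fs = 250.0
t = np.arange(0, 2.0, 1.0 / fs)
clean = np.sin(2 * np.pi * 1.2 * t)              # ~72 bpm fundamental
noisy = clean + 0.3 * np.sin(2 * np.pi * 90.0 * t)  # EMG-band interference
filtered = suppress_emg_interference(noisy, fs=fs)
```

A Butterworth design is one common concrete choice for the low-pass filter the text mentions; the disclosure itself names it only as an example.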
At step 140, an image of a body part of a user is acquired.
Capturing an image of a body part of the user includes capturing an image of the face, the tongue or another body part. For example, an industrial high-definition 11-megapixel camera module can be used to capture images of body parts such as the face and the tongue.
At step 160, a health assessment report for the user is generated based on the sensed data and the image of the user's body part.
Detection values of the user's physiological indices, such as heart rate, electrocardiogram, respiratory rate, body fat rate, blood oxygen concentration and blood pressure, are acquired from the sensing data, and the images of the user's body parts are analyzed with expert knowledge to obtain assessment scores for items such as the circulatory, respiratory, digestive, endocrine, immune, skeletal and skin systems and nutritional status. The user's health assessment report may therefore include, for example, the detected values of these physiological indices, the assessment scores of the above systems and of nutritional status, and an overall health score.
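As a hedged sketch of how such a report might be assembled, the following Python fragment combines sensor-derived physiological indices with image-derived system scores. All field names and the simple averaging used for the overall score are assumptions for illustration, not the disclosed method.

```python
def generate_health_report(physio, system_scores):
    """Assemble a health assessment report from sensor-derived
    physiological indices and image-derived system scores.
    Field names and the averaging rule are illustrative assumptions."""
    overall = round(sum(system_scores.values()) / len(system_scores), 1)
    return {
        "physiological_indices": physio,    # e.g. heart rate, body fat rate
        "system_scores": system_scores,     # e.g. circulatory, respiratory
        "overall_health_score": overall,
    }

report = generate_health_report(
    {"heart_rate_bpm": 72, "body_fat_pct": 21.5},
    {"circulatory": 88, "respiratory": 92, "digestive": 85},
)
```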
Furthermore, changes in the user's health condition can be determined and recorded in the health assessment report, not only from the latest sensing data and body-part images but also by combining them with historical sensing data and images.
In addition, sensing data and body-part images can be collected periodically and health assessment reports generated periodically, so that the user learns of his or her health condition in time.
In addition, the health assessment report can be generated and stored electronically, which makes it easy for the user to retrieve, lets an expert analyze changes against the user's health history, and allows the user to be alerted promptly when the health condition deteriorates.
In this embodiment, the user's health assessment report is generated by combining the user's sensing data with images of the user's body parts; compared with analyzing the health condition from sensor data alone, the assessment result is more accurate and comprehensive. The method is also more cost-effective and time-saving than obtaining a health assessment from a conventional physical examination institution or hospital. Because the report is generated automatically, the user can conveniently learn his or her overall health condition and decide, based on the report, whether to seek medical attention in person.
Fig. 2 illustrates a flow diagram of a method of acquiring an image of a body part of a user, according to some embodiments of the present disclosure.
As shown in FIG. 2, the method of this embodiment further elaborates step 140 of the embodiment of FIG. 1 and includes steps 141-143.
At step 141, the coordinates of key points of the body part are detected.
The key point coordinates of the body part may be, for example, the coordinates of a key point of the face or tongue (e.g., the upper-left corner of the face), labeled as D(x_d, y_d). For example, the Multi-task Cascaded Convolutional Network (MTCNN) algorithm may be used to process the preview image acquired by the camera and obtain, in real time, the coordinates of the upper-left corner of the frame in which the user's face is located.
In step 142, the position of the preview frame is adjusted according to the key point coordinates of the body part.
In some embodiments, adjusting the position of the preview box according to the key point coordinates of the body part comprises: determining a transformation coordinate for transforming the key point coordinate of the body part from a screen coordinate system to a preview coordinate system; and adjusting the position of the preview frame according to the transformed coordinates.
In step 143, an image of the body part is acquired with the image of the body part fully displayed in the preview frame.
In some embodiments, after the adjustment is completed, i.e., when the image of the body part is fully displayed in the preview box, the user is prompted by voice that the face is about to be photographed. After that photograph is taken, the user is prompted by voice to extend the tongue; the preview image is processed with a tongue detection model, and when the extended tongue is detected, the user is prompted by voice to remain still while the photograph is completed. The collected face and tongue photographs are then uploaded to a processor for processing, and the health assessment report is generated.
According to this embodiment, the position of the preview box is adjusted so that the image of the user's body part (face or tongue) is displayed in full in the preview box. The user thus knows clearly which body part is being captured and does not have to adjust position repeatedly, which improves acquisition efficiency and the photographing experience of users of different heights. Compared with a camera whose physical shooting angle is fixed, the method of this embodiment lets the body parts of users of different heights be previewed normally in the preview box. Compared with dynamically adjusting the camera angle to achieve a normal preview, which requires an expensive angle-adjustable camera plus additional structural support in the device housing, this method greatly reduces cost.
The method for adjusting the preview box will be described below by taking a user U1 with a low height and a user U2 with a high height as examples.
FIG. 3a illustrates a schematic view of a short user U1 whose face cannot be displayed normally in the preview box, according to some embodiments of the present disclosure.
Because the camera's physical shooting angle is fixed, a conventional camera preview would, for a short user, show the head portrait below the preview area as in FIG. 3a, where it cannot be previewed normally, degrading the user's experience with the system.
Fig. 3b illustrates a schematic diagram of capturing the face of a user U1, according to some embodiments of the present disclosure.
As shown in FIG. 3b, the upper-left corner of the screen is the coordinate origin, marked as point A(0, 0). The camera preview box, whose upper-left corner is marked as point B(x_b, y_b), is shown as a dashed box; its length and width are each 1/2 of the full screen's. The black solid box inside the preview box is a bounding box that completely encloses the user's body part; its upper-left corner is marked as point C(x_c, y_c), and its length and width are each 1/4 of the full screen's. The key-point coordinate of the upper-left corner of user U1's face is obtained as D(x_d, y_d), a coordinate in the preview coordinate system whose origin is point B. Point E(x_e, y_e) is the point that coincides with point C in that preview coordinate system; the coordinates of E may be determined from D, e.g. x_e = x_d - 10, y_e = y_d - 10. Let the pixel density of the screen be m and the pixel density of the preview image be n; for example, m = 100, n = 300, and the upper-left corner of the bounding box is C(400, 300).

The principle is to dynamically adjust the preview box so that the head image of a user of any height always appears in the central hollow area; that is, the coordinates of point B are computed from the acquired coordinates of point D, and the camera preview box is repositioned accordingly. Since E and C nearly coincide, the coordinate transformation gives (m/n)(x_e, y_e) + (x_b, y_b) = (x_c, y_c), from which the relationship between points D and B follows: (x_b, y_b) = (x_c, y_c) - (m/n)(x_e, y_e), which yields the transformed coordinate B.

When user U1 is 150 cm tall, the coordinates of point D are (610, 610), so (x_e, y_e) = (x_d - 10, y_d - 10) = (600, 600).

Substituting into (x_b, y_b) = (x_c, y_c) - (m/n)(x_e, y_e) gives B = (400, 300) - (100/300)(600, 600) = (200, 100). The position of the preview box is adjusted according to the transformed coordinate B(200, 100): marginLeft of the preview box is set to 200 and marginTop to 100, which moves the preview box to a position where the face image of user U1 is displayed in full.
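The transformation can be sketched in a few lines of Python; the function name and tuple interface are assumptions, while the numbers reproduce the worked examples for users U1 and U2 from the text.

```python
def preview_box_origin(c, d, m, n, offset=10):
    """Compute preview-box upper-left corner B from face key point D.

    c: bounding-box upper-left corner C (screen coordinates)
    d: face key point D (preview coordinates, origin at B)
    m, n: pixel densities of the screen and the preview image
    Implements (x_b, y_b) = (x_c, y_c) - (m/n) * (x_e, y_e),
    with E = D - (offset, offset). Multiplying before dividing
    keeps the integer-valued examples exact in floating point."""
    xe, ye = d[0] - offset, d[1] - offset
    return (c[0] - m * xe / n, c[1] - m * ye / n)

# User U1 (height 150 cm): D = (610, 610) -> B = (200, 100)
b1 = preview_box_origin((400, 300), (610, 610), m=100, n=300)
# User U2 (height 180 cm): D = (610, 10) -> B = (200, 300)
b2 = preview_box_origin((400, 300), (610, 10), m=100, n=300)
```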
FIG. 4a illustrates a schematic view of a tall user U2 whose face cannot be displayed normally in the preview box, according to some embodiments of the present disclosure.
When the user is tall, the user's head is pushed against the top of the preview box as shown in FIG. 4a and cannot be previewed normally, degrading the user's experience with the system.
Fig. 4b shows a schematic diagram of capturing the face of a user U2, according to some embodiments of the present disclosure.
When user U2 is 180 cm tall, the coordinates of point D are, for example, (610, 10), so (x_e, y_e) = (x_d - 10, y_d - 10) = (600, 0).

Substituting into (x_b, y_b) = (x_c, y_c) - (m/n)(x_e, y_e) gives B = (400, 300) - (100/300)(600, 0) = (200, 300). The position of the preview box is adjusted according to the transformed coordinate B(200, 300): marginLeft of the preview box is set to 200 and marginTop to 300, which moves the preview box to a position where the face image of user U2 is displayed in full.
FIG. 5 shows a schematic diagram of a method of generating a wellness assessment report, in accordance with further embodiments of the present disclosure.
As shown in fig. 5, the method of this embodiment includes steps 120-160.
The embodiment of fig. 5 differs from the embodiment of fig. 1 only in that step 150 is also included. The differences between fig. 5 and fig. 1 will be described in detail below, and the same parts will not be described again.
At step 120, sensory data relating to the health condition of the user is obtained.
At step 140, an image of a body part of a user is acquired.
In step 150, it is determined whether the variance of the image is greater than a preset threshold.
FIG. 6 illustrates a flow diagram for determining whether a variance of an image is greater than a preset threshold, according to some embodiments of the present disclosure.
As shown in fig. 6, step 150 includes steps 151-157.
In step 151, a frame extraction process is performed on the image of the body part of the user.
Assuming the camera collects 30 preview frames per second, frame extraction is performed on those 30 frames and 10 frames are extracted for the processing of subsequent steps 152-157. Frame extraction speeds up the computation, which helps meet the real-time requirement of image acquisition and improves the user experience.
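The frame extraction step can be sketched as follows; the helper name and the even-spacing policy are assumptions, since the text only specifies keeping 10 of every 30 frames.

```python
def extract_frames(frames, keep=10):
    """Keep `keep` evenly spaced frames out of each batch (e.g. 10 of 30
    preview frames per second) to cut the per-second processing load."""
    step = max(len(frames) // keep, 1)
    return frames[::step][:keep]

batch = list(range(30))          # stand-in for 30 preview frames
subset = extract_frames(batch)   # 10 frames: every 3rd frame
```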
In step 152, the image of the user's body part is reduced (downscaled).
The reduction processing can improve the calculation speed, thereby meeting the real-time requirement of image acquisition and improving the user experience.
In step 153, the image of the user's body part is subjected to grayscale processing to obtain a grayscale image.
The gray processing can reduce the calculation amount, thereby improving the calculation speed, meeting the real-time requirement of image acquisition and improving the user experience.
At step 154, the grayscale image is convolved.
For example, the grayscale image may be convolved using the Laplacian operator as the convolution kernel. Suppose the convolution kernel matrix is expressed as

    W = | W11  W12 |
        | W21  W22 |

and the grayscale image matrix is expressed as

    O = | O11  O12  O13 |
        | O21  O22  O23 |
        | O31  O32  O33 |

First, the convolution kernel matrix is flipped horizontally and vertically to obtain the transformation matrix W' corresponding to the convolution kernel, expressed as

    W' = | W22  W21 |
         | W12  W11 |

The transformation matrix W' is then slid over the grayscale image matrix O to obtain the convolution result D, expressed as

    D = | D11  D12 |
        | D21  D22 |

where

    D11 = O11×W22 + O12×W21 + O21×W12 + O22×W11
    D12 = O12×W22 + O13×W21 + O22×W12 + O23×W11
    D21 = O21×W22 + O22×W21 + O31×W12 + O32×W11
    D22 = O22×W22 + O23×W21 + O32×W12 + O33×W11
in step 155, the variance of the convolved gray image is calculated.
And calculating the variance according to all elements in the gray-scale image matrix after convolution processing.
The variance is compared with a preset threshold. If the variance is greater than the preset threshold, the definition of the image meets the preset requirement; if it is not, the definition does not meet the requirement and the image needs to be acquired again.
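The definition check of steps 153-157 can be sketched in Python. The 2-D "valid" convolution flips the kernel exactly as in the convolution described in this section; the Laplacian kernel values, image contents and threshold are illustrative assumptions.

```python
import numpy as np

def conv2d_valid(image, kernel):
    """'Valid' 2-D convolution: flip the kernel horizontally and
    vertically, then slide it over the image and sum element-wise
    products, as in the matrix formulas in the text."""
    k = np.flip(kernel)                     # W' = flipped kernel
    kh, kw = k.shape
    h = image.shape[0] - kh + 1
    w = image.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * k)
    return out

def is_sharp(gray, kernel, threshold):
    """The image passes the definition check when the variance of the
    convolved grayscale image exceeds the preset threshold."""
    return conv2d_valid(gray.astype(float), kernel).var() > threshold

# A 3x3 Laplacian kernel (one common choice); the threshold is illustrative.
laplacian = np.array([[0.0, 1.0, 0.0],
                      [1.0, -4.0, 1.0],
                      [0.0, 1.0, 0.0]])
flat = np.full((32, 32), 128.0)                      # featureless image
checker = np.indices((32, 32)).sum(0) % 2 * 255.0    # high-contrast image
```

A flat (blur-like) image yields zero response and fails the check, while a high-contrast image yields a large variance and passes it.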
In step 156, in the event that the variance is greater than a preset threshold, step 160 is performed.
In step 157, in case the variance is not greater than the preset threshold, step 140 (acquiring the image of the body part) is repeatedly performed.
At step 160, a health assessment report for the user is generated based on the sensed data and the image of the user's body part.
In this embodiment, whether the acquired image meets the preset definition (sharpness) requirement is determined by calculating the variance of the convolved image of the user's body part, which lays a foundation for generating the health assessment report and improves its accuracy.
FIG. 7 illustrates a schematic diagram of a method of generating a wellness assessment report, in accordance with further embodiments of the present disclosure.
As shown in fig. 7, the method of this embodiment includes steps 120-160.
The embodiment of fig. 7 differs from the embodiment of fig. 1 only in that step 130 is also included. Only the differences between fig. 7 and fig. 1 will be described below, and the same parts will not be described again.
At step 120, sensory data relating to the health condition of the user is obtained.
In step 130, it is detected whether the left and right rotation angles of the body part of the user are within a preset range.
For example, the coordinates of key points of the face, such as the left canthus, right canthus, nose tip, left mouth corner, right mouth corner and lower jaw, and of the tongue, such as the upper-left corner, upper-right corner, center point and the points at 2/3 of the tongue's left and right sides, may be obtained with the key-point detection method of the Open Source Computer Vision Library (OpenCV), and the left and right rotation angles of the face and the tongue may each be calculated with OpenCV. The preset range of the left and right rotation angles of the face and the tongue may be set to 15° or less; that is, if the calculated left-rotation angle of the face or tongue is greater than 15°, it is determined that the user needs to be prompted to rotate the face or tongue to the right, and if the calculated right-rotation angle is greater than 15°, it is determined that the user needs to be prompted to rotate the face or tongue to the left.
In case that the left and right rotation angles of the body part are within the preset range, step 140 is performed.
In case that the left rotation angle or the right rotation angle of the body part is not within the preset range, the user is prompted to rotate the body part, and step 130 is repeatedly performed.
At step 140, an image of the user's body part is acquired.
At step 160, a health assessment report for the user is generated based on the sensed data and the image of the user's body part.
In the above embodiment, by judging whether the left and right rotation angles of the user's body part are within the preset range, the user can be automatically prompted to rotate the body part (for example, the face or the tongue) to the left or right. The user thus knows clearly whether the angle of the body part meets the requirement, which avoids constant adjustment of the angle, improves acquisition efficiency, and also improves the user's photographing experience.
FIG. 8 illustrates a schematic diagram of a method of generating a health assessment report, in accordance with still further embodiments of the present disclosure.
As shown in fig. 8, the method of this embodiment includes steps 120-160.
The embodiment of FIG. 8 differs from that of FIG. 1 only in that step 130' is added. Only the differences between FIG. 8 and FIG. 1 are described below; the common parts are not repeated.
At step 120, sensing data related to the user's health condition is obtained.
In step 130', it is calculated whether the ratio of the size of the bounding box enclosing the user's body part to the size of the preview frame is within a preset range.
For example, an OpenCV object detection algorithm may be used to detect the ratio of the size of the bounding box enclosing the user's body part (the face or the tongue) to the size of the preview frame of the image, with the preset range of the ratio set to 1/4-1/2. When the detected ratio is smaller than the minimum of the preset range, it is determined that the user is too far from the camera, and the user is prompted to move closer; when the detected ratio is larger than the maximum of the preset range, it is determined that the user is too close to the camera, and the user is prompted to move away.
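The ratio check of step 130' reduces to a small pure function. Interpreting "size" as area is an assumption (the patent does not say area or width), and the 1/4-1/2 range is the example range given above:

```python
def distance_prompt(box_w, box_h, preview_w, preview_h,
                    min_ratio=0.25, max_ratio=0.5):
    """
    Compare the bounding box enclosing the body part with the preview
    frame and return the prompt from step 130'. The area interpretation
    of "size" and the default range are assumptions.
    """
    ratio = (box_w * box_h) / float(preview_w * preview_h)
    if ratio < min_ratio:
        return "move closer to the camera"
    if ratio > max_ratio:
        return "move away from the camera"
    return None  # within the preset range; proceed to step 140
```

For a 400 x 400 preview frame, a 100 x 100 face box is too small (user too far) and a 300 x 300 box is too large (user too close), while a 250 x 250 box falls inside the range.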
And in the case that the ratio is not within the preset range, prompting the user to approach or move away from the camera, and repeatedly executing the step 130'.
In the case that the ratio is within the preset range, step 140 is performed.
At step 140, an image of a body part of a user is acquired.
At step 160, a health assessment report for the user is generated based on the sensed data and the image of the user's body part.
In the above embodiment, the user can be automatically prompted to move closer to or away from the camera, so that the user clearly knows whether the distance between the body part and the camera meets the requirement. This avoids the user constantly adjusting position, improves acquisition efficiency, and also improves the user's photographing experience.
FIG. 9 illustrates a schematic diagram of a method of generating a health assessment report, in accordance with yet further embodiments of the present disclosure.
As shown in fig. 9, the method of this embodiment includes steps 120-160.
At step 120, sensing data related to the user's health condition is obtained.
Steps 130 and 130' may be performed in either order. If step 130 is performed first, step 130' is performed when the left and right rotation angles of the body part are within the preset range, and step 140 is performed when the ratio is within the preset range. If step 130' is performed first, step 130 is performed when the ratio is within the preset range, and step 140 is performed when the left and right rotation angles are within the preset range. The case in which step 130 is performed first is described in detail below; the case in which step 130' is performed first can be understood by analogy.
In step 130, it is detected whether the left and right rotation angles of the body part of the user are within a preset range.
In case that the left rotation angle or the right rotation angle of the body part is not within the preset range, the user is prompted to rotate the body part, and step 130 is repeatedly performed.
In case that the left and right rotation angles of the body part are within the preset range, step 130' is performed.
In step 130', it is calculated whether the ratio of the size of the bounding box enclosing the user's body part to the size of the preview frame is within a preset range.
And in the case that the ratio is not within the preset range, prompting the user to approach or move away from the camera, and repeatedly executing the step 130'.
In the case that the ratio is within the preset range, step 140 is performed.
At step 140, an image of a body part of a user is acquired.
In step 150, it is determined whether the variance of the image is greater than a preset threshold.
In case the variance of the image is larger than a preset threshold, step 160 is performed.
At step 160, a health assessment report for the user is generated based on the sensed data and the image of the user's body part.
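The FIG. 9 flow described above amounts to a gating loop over preview frames: a frame must pass the rotation check and the size-ratio check before an image is captured, and a blurry capture loops back for another frame. The helper callables below are hypothetical stand-ins for steps 130, 130', 140, and 150:

```python
def capture_loop(frames, pose_ok, distance_ok, is_sharp, capture):
    """
    Gating loop for the FIG. 9 flow. All helper callables are
    hypothetical stand-ins: pose_ok = step 130, distance_ok = step 130',
    capture = step 140, is_sharp = the step 150 variance check.
    """
    for frame in frames:
        if not pose_ok(frame):       # step 130 failed: prompt to rotate
            continue
        if not distance_ok(frame):   # step 130' failed: prompt to move
            continue
        image = capture(frame)       # step 140
        if is_sharp(image):          # step 150
            return image             # proceed to step 160 (generate report)
    return None
```

In practice the frame source would be the live camera preview; here any iterable works, which also makes the gating logic easy to test in isolation.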
In this embodiment, the health assessment report is generated by combining the user's sensing data with the image of the user's body part; compared with analyzing the user's health condition from sensor data alone, the assessment result is more accurate and comprehensive. Whether the image is sharp can be determined by calculation, so that the acquired image meets the sharpness requirement. Adjusting the position of the preview frame so that the image of the body part (the face or the tongue) is completely displayed within it lets the user clearly see that the body part is being captured, avoids constant position adjustment, improves acquisition efficiency, and improves the photographing experience for users of different heights. In addition, the user can be automatically prompted to rotate the body part to the left or right and to move closer to or farther from the camera, so that the user knows clearly whether the angle of the body part and its distance from the camera meet the requirements. This avoids constant adjustment of angle or position, improving both acquisition efficiency and the user's photographing experience.
FIG. 10 illustrates a schematic diagram of a system that generates a health assessment report, according to some embodiments of the present disclosure.
As shown in FIG. 10, the system 1000 for generating a health assessment report of this embodiment includes a sensor 1010, a camera 1020, and a processor 1030, and in some embodiments further includes a voice interaction module 1040.
In some embodiments, the system is initialized after start-up; initialization may include operations such as a sensor self-test, a network state self-test, server time synchronization, and initialization of the face recognition service. After successful initialization, a login management interface is entered. Login management supports two modes, face recognition login and user two-dimensional-code scanning login, and a unique user identifier is obtained after a successful login.
A sensor 1010 for acquiring sensing data related to the health condition of the user. The sensors include one or more of an electrocardiography (ECG) sensor, a photoplethysmography (PPG) sensor, and a bioelectrical impedance measurement (BIM) sensor.
The sensor 1010 further includes one or more of a power-frequency trap 1011, a filter 1012, a signal amplifier 1013, and a signal converter 1014. The power-frequency trap 1011 filters power-frequency (mains) interference out of the sensing data; the filter 1012, for example a Butterworth low-pass filter, filters electromyographic signal interference out of the sensing data; the signal amplifier 1013 performs signal gain amplification; and the signal converter 1014 converts the analog signal of the sensing data into a digital signal.
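As a minimal sketch of the power-frequency trap 1011, a biquad notch filter (coefficients from the RBJ audio-EQ cookbook) centered on the 50 Hz mains frequency can be implemented with NumPy alone. The 50 Hz center frequency (mains in China; 60 Hz elsewhere) and the Q value are assumptions, as the patent does not specify the filter design:

```python
import math
import numpy as np

def notch_coeffs(f0_hz, fs_hz, q=30.0):
    """Biquad notch coefficients (RBJ audio-EQ cookbook), normalized by a0."""
    w0 = 2.0 * math.pi * f0_hz / fs_hz
    alpha = math.sin(w0) / (2.0 * q)
    b = np.array([1.0, -2.0 * math.cos(w0), 1.0])
    a = np.array([1.0 + alpha, -2.0 * math.cos(w0), 1.0 - alpha])
    return b / a[0], a / a[0]

def biquad_filter(b, a, x):
    """Direct-form-II-transposed filtering loop, NumPy only."""
    y = np.zeros(len(x))
    z1 = z2 = 0.0
    for n, xn in enumerate(x):
        yn = b[0] * xn + z1
        z1 = b[1] * xn - a[1] * yn + z2
        z2 = b[2] * xn - a[2] * yn
        y[n] = yn
    return y
```

A 50 Hz interference tone fed through this trap decays to near zero after the start-up transient, while a low-frequency physiological component (e.g. a few Hz) passes essentially unchanged.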
A camera 1020 for capturing images of a body part of a user.
A processor 1030 configured to generate a health assessment report for the user based on the sensed data acquired by the sensor and the image captured by the camera.
The voice interaction module 1040 is configured to send a voice prompt to prompt the user to adjust the pose, for example, to prompt the user to rotate the body part, or to prompt the user to move closer to or farther away from the camera.
In this embodiment, the health assessment report is generated by combining the user's sensing data with the image of the user's body part, so the assessment result is more accurate and comprehensive. Whether the image is sharp can be determined by calculation, ensuring that the acquired image meets the sharpness requirement. Adjusting the position of the preview frame so that the image of the body part (the face or the tongue) is completely displayed within it lets the user clearly see that the body part is being captured, avoids constant position adjustment, improves acquisition efficiency, and improves the photographing experience for users of different heights. In addition, the user can be automatically prompted to rotate the body part to the left or right and to move closer to or farther from the camera, so that the user knows clearly whether the angle of the body part and its distance from the camera meet the requirements, thereby improving acquisition efficiency and the user's photographing experience.
As will be appreciated by one skilled in the art, embodiments of the present disclosure may be provided as a method, system, or computer program product. Accordingly, the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present disclosure may take the form of a computer program product embodied on one or more non-transitory computer-readable storage media (including, but not limited to, disk storage, CD-ROM, and optical storage) having computer program code embodied therein.
The present disclosure is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the disclosure. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above description is only exemplary of the present disclosure and is not intended to limit the present disclosure, so that any modification, equivalent replacement, or improvement made within the spirit and principle of the present disclosure should be included in the scope of the present disclosure.

Claims (13)

1. A method of generating a health assessment report, comprising:
acquiring sensing data related to the health condition of a user;
acquiring an image of a body part of a user;
generating a health assessment report for the user from the sensory data and the image of the body part of the user.
2. The method of generating a health assessment report of claim 1, wherein the acquiring of the image of the body part of the user comprises:
detecting key point coordinates of a body part;
adjusting the position of the preview frame according to the key point coordinates of the body part;
in a case where the image of the body part is completely displayed in the preview frame, the image of the body part is acquired.
3. The method of generating a health assessment report of claim 2, wherein the adjusting of the position of the preview frame according to the key point coordinates of the body part comprises:
determining a transformation coordinate for transforming the key point coordinate of the body part from a screen coordinate system to a preview coordinate system;
and adjusting the position of the preview frame according to the transformed coordinates.
4. The method of generating a health assessment report according to any of claims 1-3, further comprising:
carrying out gray level processing on the image of the body part of the user to obtain a gray level image;
performing convolution processing on the gray level image;
calculating the variance of the gray level image after convolution processing;
in the case that the variance is greater than a preset threshold, performing a step of generating a health assessment report of the user;
in case the variance is not larger than a preset threshold, the step of acquiring an image of the body part of the user is repeatedly performed.
5. The method of generating a health assessment report of claim 4, further comprising:
performing one or more of a frame extraction process and a reduction process on the image of the body part of the user before the gray level processing is performed on the image.
6. The method of generating a health assessment report according to any of claims 1-3, further comprising:
detecting a left rotation angle and a right rotation angle of a body part of a user;
performing a step of acquiring an image of a body part of a user in a case where a left rotation angle and a right rotation angle of the body part are within a preset range;
and prompting the user to rotate the body part if the left rotation angle or the right rotation angle of the body part is not within a preset range, and repeatedly executing the step of detecting the left rotation angle and the right rotation angle of the body part of the user.
7. The method of generating a health assessment report according to any of claims 1-3, further comprising:
calculating the ratio of the size of an enclosing frame enclosing the body part of the user to the size of a preview frame;
performing a step of acquiring an image of a body part of the user in a case where the ratio is within a preset range;
and under the condition that the ratio is not in the preset range, prompting the user to approach or leave the camera, and repeatedly executing the step of calculating the ratio of the size of the bounding box to the size of the preview box.
8. The method of generating a health assessment report according to any one of claims 1-3, wherein the body part comprises a human face or a tongue.
9. A system for generating a health assessment report, comprising:
the sensor is used for acquiring sensing data related to the health condition of the user;
a camera for acquiring an image of a body part of a user;
a processor configured to perform the method of generating a health assessment report of any of claims 1-8.
10. The system for generating a health assessment report of claim 9, wherein the sensor further comprises one or more of:
the power frequency wave trap is used for filtering power frequency interference of the sensing data;
the filter is used for filtering electromyographic signal interference of the sensing data;
the signal amplifier is used for carrying out signal gain amplification;
and the signal converter is used for converting the sensing data from an analog signal to a digital signal.
11. The system for generating a health assessment report of claim 9, wherein the sensors comprise one or more of an electrocardiography (ECG) sensor, a photoplethysmography (PPG) sensor, and a bioelectrical impedance measurement (BIM) sensor.
12. The system for generating a health assessment report of claim 9, further comprising:
and the voice interaction module is configured to send a voice prompt to prompt the user to adjust the pose.
13. A non-transitory computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the method of generating a health assessment report of any of claims 1-8.
CN202011239452.7A 2020-11-09 2020-11-09 Method, system and storage medium for generating health assessment report Pending CN112382390A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011239452.7A CN112382390A (en) 2020-11-09 2020-11-09 Method, system and storage medium for generating health assessment report


Publications (1)

Publication Number Publication Date
CN112382390A true CN112382390A (en) 2021-02-19

Family

ID=74578118

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011239452.7A Pending CN112382390A (en) 2020-11-09 2020-11-09 Method, system and storage medium for generating health assessment report

Country Status (1)

Country Link
CN (1) CN112382390A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022178934A1 (en) * 2021-02-26 2022-09-01 平安科技(深圳)有限公司 Health testing method and apparatus, and device and storage medium
CN115190538A (en) * 2022-09-09 2022-10-14 朔至美(南通)科技有限公司 Health data transmission system and method based on wireless communication technology

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109935295A (en) * 2019-03-14 2019-06-25 福建乐摩物联科技有限公司 A kind of non-invasive human health screening system
CN110059576A (en) * 2019-03-26 2019-07-26 北京字节跳动网络技术有限公司 Screening technique, device and the electronic equipment of picture
CN110689938A (en) * 2019-11-18 2020-01-14 北京妙佳健康科技有限公司 Health monitoring all-in-one machine and health monitoring management system
US20200176116A1 (en) * 2018-11-30 2020-06-04 National Cheng Kung University Method of an interactive health status assessment and system thereof
CN111370124A (en) * 2020-03-05 2020-07-03 湖南城市学院 Health analysis system and method based on facial recognition and big data



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20210226

Address after: 100176 room 701, 7 / F, building 1, yard 18, Kechuang 11th Street, Beijing Economic and Technological Development Zone, Daxing District, Beijing

Applicant after: Beijing Jingdong tuoxian Technology Co.,Ltd.

Address before: Room A402, 4th floor, building 2, No.18, Kechuang 11th Street, Daxing District, Beijing, 100176

Applicant before: BEIJING WODONG TIANJUN INFORMATION TECHNOLOGY Co.,Ltd.

Applicant before: BEIJING JINGDONG CENTURY TRADING Co.,Ltd.