Detailed Description
The technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the drawings in the embodiments of the present disclosure.
FIG. 1 illustrates a flow diagram of a method of generating a health assessment report in accordance with some embodiments of the present disclosure. The method may be performed, for example, by a system that generates health assessment reports.
As shown in fig. 1, the method of this embodiment includes steps 120-160.
At step 120, sensory data relating to the health condition of the user is obtained.
The sensing data may be acquired from medical-grade smart body sensors such as an Electrocardiogram (ECG) sensor, a Photoplethysmography (PPG) sensor, and a Bioelectrical Impedance Measurement (BIM) sensor, but is not limited to these examples. Such sensors are characterized by low power consumption, small size, low cost, and high precision, and are widely used in smartphones, smart watches, smart bracelets, and smart scales, so that a system for generating health assessment reports can conveniently acquire the sensing data from these smart devices.
In some embodiments, the sensed data may also be processed by one or more of the following: filtering power-frequency interference from the sensing data using a power-frequency trap, filtering electromyographic signal interference from the sensing data using a filter (such as a Butterworth low-pass filter), performing signal gain amplification using a signal amplifier, and converting the analog sensor signal into a digital signal using a signal converter.
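As an illustrative sketch of the preprocessing chain described above (not the embodiment's actual implementation), the four stages can be outlined in Python. The notch and low-pass stages here are deliberately simple stand-ins for a real power-frequency trap and Butterworth filter, and all function names and parameter values are hypothetical:

```python
import math

def notch_50hz(samples, fs, alpha=0.95):
    """Crude stand-in for a 50 Hz power-frequency trap: estimate the
    50 Hz component by projection and subtract it. (A real design
    would use an IIR notch filter.)"""
    est = [math.sin(2 * math.pi * 50 * i / fs) for i in range(len(samples))]
    dot = sum(s * e for s, e in zip(samples, est))
    norm = sum(e * e for e in est) or 1.0
    return [s - alpha * (dot / norm) * e for s, e in zip(samples, est)]

def lowpass(samples, alpha=0.2):
    """First-order low-pass as a stand-in for the Butterworth EMG filter."""
    out, y = [], 0.0
    for s in samples:
        y = alpha * s + (1 - alpha) * y
        out.append(y)
    return out

def amplify(samples, gain=2.0):
    """Signal gain amplification."""
    return [gain * s for s in samples]

def quantize(samples, bits=12, vmax=3.3):
    """Analog-to-digital conversion: map [0, vmax] volts to integer codes."""
    levels = (1 << bits) - 1
    return [max(0, min(levels, round(s / vmax * levels))) for s in samples]
```

In practice the stages would be chained, e.g. `quantize(amplify(lowpass(notch_50hz(raw, fs))))`.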
At step 140, an image of a body part of a user is acquired.
Capturing an image of a body part of a user includes capturing an image of the face, the tongue, or another body part. For example, an industrial high-definition 11-megapixel camera module can be used to capture images of body parts such as the user's face and tongue.
At step 160, a health assessment report for the user is generated based on the sensed data and the image of the user's body part.
According to the sensing data, detected values of physiological indexes of the user, such as heart rate, electrocardiogram, respiratory rate, body fat rate, blood oxygen concentration, and blood pressure, are acquired, and the images of the body parts of the user are analyzed using expert knowledge to obtain assessment scores for items such as the circulatory system, respiratory system, digestive system, endocrine system, immune system, skeletal system, skin system, and nutritional status. Therefore, the health assessment report of the user may include, for example, the detected values of physiological indexes such as heart rate, electrocardiogram, respiratory rate, body fat rate, blood oxygen concentration, and blood pressure; the comprehensive assessment scores of the circulatory, respiratory, digestive, endocrine, immune, skeletal, and skin systems and of nutritional status; and the overall health score of the user.
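A minimal sketch of how such a report might be assembled, assuming a simple dictionary structure and a mean-of-scores aggregation (the disclosure does not specify the aggregation rule or field names; both are assumptions for illustration):

```python
def build_report(indicators, system_scores):
    """Assemble an illustrative health assessment report.

    indicators:    physiological index -> detected value
    system_scores: body system / item  -> assessment score in [0, 100]
    """
    # Overall health score: here simply the mean of the item scores
    # (an assumed aggregation; the disclosure does not pin this down).
    overall = sum(system_scores.values()) / len(system_scores)
    return {
        "physiological_indicators": indicators,
        "system_scores": system_scores,
        "health_score": round(overall, 1),
    }

report = build_report(
    {"heart_rate": 72, "respiratory_rate": 16, "spo2": 0.98},
    {"circulatory": 90, "respiratory": 85, "digestive": 80},
)
```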
Furthermore, the health condition change can be determined and recorded in the health assessment report of the user not only from the latest sensing data and the image of the body part of the user, but also by combining the historical sensing data and the image of the body part of the user.
In addition, sensing data and images of body parts of the user can be collected regularly, and health assessment reports can be generated regularly, so that the user can know the health condition of the user in time.
In addition, the health assessment report can be generated and stored electronically, so that the user can conveniently retrieve it, an expert can conveniently analyze changes in the health condition based on the user's historical health records, and the user can be reminded in time when the health condition deteriorates.
In this embodiment, the health assessment report of the user is generated by combining the sensing data of the user with the image of the user's body part; compared with analyzing the user's health condition only from sensor data, the health assessment result is more accurate and comprehensive. Meanwhile, compared with obtaining a health assessment report from a conventional physical examination institution or hospital, the method of generating the health assessment report in this embodiment saves both cost and time. The method of this embodiment can generate the health assessment report automatically, so that the user can conveniently learn his or her comprehensive health condition and decide, based on the report, whether to seek medical treatment in person.
Fig. 2 illustrates a flow diagram of a method of acquiring an image of a body part of a user, according to some embodiments of the present disclosure.
As shown in FIG. 2, the method of this embodiment further elaborates step 140 of the embodiment of FIG. 1 and includes steps 141 to 143.
At step 141, the coordinates of key points of the body part are detected.
The key point coordinates of the body part may be, for example, the coordinates of a key point of the face or tongue (e.g., the upper-left corner of the face), labeled as D(x_d, y_d). For example, the Multi-Task Cascaded Convolutional Neural Network (MTCNN) algorithm may be used to process the preview image captured by the camera, so as to obtain in real time the coordinates of the upper-left corner point of the frame in which the user's face is located.
In step 142, the position of the preview frame is adjusted according to the key point coordinates of the body part.
In some embodiments, adjusting the position of the preview box according to the key point coordinates of the body part comprises: determining a transformation coordinate for transforming the key point coordinate of the body part from a screen coordinate system to a preview coordinate system; and adjusting the position of the preview frame according to the transformed coordinates.
In step 143, an image of the body part is acquired with the image of the body part fully displayed in the preview frame.
In some embodiments, after the adjustment is completed, i.e., once the image of the body part is completely displayed in the preview frame, the user is prompted by voice that the face is about to be photographed. After the photographing is completed, the user is prompted by voice to extend the tongue, and the preview image is processed with a tongue detection model; when an extended tongue is detected, the user is prompted by voice to remain still while the photograph is taken. The collected face and tongue photographs are then uploaded to a processor for processing, and the health assessment report is further generated.
In this embodiment, the position of the preview frame is adjusted so that the image of the user's body part (the face or the tongue) is completely displayed in the preview frame. The user thus clearly knows that the body part is being captured and does not have to adjust his or her position continuously, which improves the acquisition efficiency and the photographing experience of users of different heights. Compared with a camera with a fixed physical shooting angle, the method of this embodiment allows the body parts of users of different heights to be previewed normally in the preview frame. Compared with dynamically adjusting the camera angle to achieve a normal preview, which requires an expensive angle-adjustable camera and additional structural support in the device mold, this method greatly reduces cost.
The method of adjusting the preview box will be described below, taking a short user U1 and a tall user U2 as examples.
FIG. 3a illustrates a schematic view of a case in which the face of the short user U1 cannot be normally displayed in the preview box, according to some embodiments of the present disclosure.
Because the physical shooting angle of the camera is fixed, if the conventional camera preview mode is used, the head image of a short user appears below the preview area, as shown in FIG. 3a, and cannot be previewed normally, which degrades the experience of users of the system.
Fig. 3b illustrates a schematic diagram of capturing the face of a user U1, according to some embodiments of the present disclosure.
As shown in FIG. 3b, the upper-left corner of the screen is the origin of coordinates, labeled as point A(0, 0). The camera preview box, whose upper-left corner is labeled B(x_b, y_b), is shown as a dashed box; the length and width of the preview box are each 1/2 of the length and width of the full screen. The black solid box inside the preview box is a bounding box that completely encloses the body part of the user, with its upper-left corner labeled C(x_c, y_c); the length and width of the bounding box are each 1/4 of the length and width of the full screen. The key point coordinate D(x_d, y_d) of the upper-left corner of the face of user U1 is obtained, where the D coordinates are expressed in a preview coordinate system with point B as the origin. Point E(x_e, y_e) is the point that coincides with point C in the preview coordinate system with point B as the origin; the coordinates of point E may be determined from point D, for example as x_e = x_d - 10, y_e = y_d - 10. Assume that the pixel density of the screen is m and the pixel density of the preview image is n; for example, the screen pixel density m is 100, the preview image pixel density n is 300, and the coordinate of point C at the upper-left corner of the bounding box is (400, 300).
The principle is that the position of the preview box is dynamically adjusted for users of different heights so that the head image is always displayed in the central cutout (the bounding box); that is, the coordinates of point B are calculated from the acquired coordinates of point D in order to adjust the position of the camera preview box. Since E and C nearly coincide, the coordinate transformation gives m/n * (x_e, y_e) + (x_b, y_b) = (x_c, y_c), so the coordinate relationship from point D to point B can be obtained, i.e., the value of the transformed coordinate B is given by (x_b, y_b) = (x_c, y_c) - m/n * (x_e, y_e).
When the height of user U1 is 150 cm, the coordinates of point D are (610, 610); then (x_e, y_e) = (x_d - 10, y_d - 10) = (600, 600).
The coordinates of point B are calculated by the formula (x_b, y_b) = (x_c, y_c) - m/n * (x_e, y_e) = (400, 300) - 100/300 * (600, 600) = (200, 100). The position of the preview box is then adjusted according to the transformed coordinate B(200, 100), i.e., the marginLeft of the preview box is set to 200 and the marginTop to 100, so that the preview box is moved to a position where the face image of user U1 is displayed in its entirety.
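The transform can be checked with a few lines of Python, using the example values from the text (C = (400, 300), m = 100, n = 300, and an offset of 10 between D and E); this is an illustrative sketch, and the function name is hypothetical:

```python
def preview_box_origin(d, c=(400, 300), m=100, n=300, offset=10):
    """Compute the preview-box upper-left corner B from the face key
    point D (in preview coordinates), via
    (x_b, y_b) = (x_c, y_c) - m/n * (x_e, y_e), where E = D - offset."""
    xe, ye = d[0] - offset, d[1] - offset
    xb = c[0] - m / n * xe
    yb = c[1] - m / n * ye
    return (round(xb), round(yb))

# Short user U1: D = (610, 610) gives B = (200, 100);
# tall  user U2: D = (610, 10)  gives B = (200, 300).
```

The returned pair would be applied as the preview box's marginLeft and marginTop, as in the worked example above.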
FIG. 4a illustrates a schematic view of a case in which the face of the tall user U2 cannot be normally displayed in the preview box, according to some embodiments of the present disclosure.
When the user is tall, the head image is pushed against the top of the preview box, as shown in FIG. 4a, and cannot be previewed normally, which degrades the experience of users of the system.
Fig. 4b shows a schematic diagram of capturing the face of a user U2, according to some embodiments of the present disclosure.
When the height of user U2 is, for example, 180 cm, the coordinates of point D are (610, 10); then (x_e, y_e) = (x_d - 10, y_d - 10) = (600, 0).
The coordinates of point B are calculated by the formula (x_b, y_b) = (x_c, y_c) - m/n * (x_e, y_e) = (400, 300) - 100/300 * (600, 0) = (200, 300). The position of the preview box is then adjusted according to the transformed coordinate B(200, 300), i.e., the marginLeft of the preview box is set to 200 and the marginTop to 300, so that the preview box is moved to a position where the face image of user U2 is displayed in its entirety.
FIG. 5 shows a schematic diagram of a method of generating a health assessment report, in accordance with further embodiments of the present disclosure.
As shown in fig. 5, the method of this embodiment includes steps 120-160.
The embodiment of FIG. 5 differs from the embodiment of FIG. 1 only in that step 150 is also included. Only the differences between FIG. 5 and FIG. 1 will be described below; the same parts will not be described again.
At step 120, sensory data relating to the health condition of the user is obtained.
At step 140, an image of a body part of a user is acquired.
In step 150, it is determined whether the variance of the image is greater than a preset threshold.
FIG. 6 illustrates a flow diagram for determining whether a variance of an image is greater than a preset threshold, according to some embodiments of the present disclosure.
As shown in FIG. 6, step 150 includes steps 151 to 157.
In step 151, a frame extraction process is performed on the image of the body part of the user.
Assuming that the camera captures 30 preview frames per second, frame extraction is performed on these 30 frames and 10 frames are extracted for the processing of the subsequent steps 152 to 157. Frame extraction increases the computation speed, thereby meeting the real-time requirement of image acquisition and improving the user experience.
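A minimal sketch of such frame extraction, assuming evenly spaced sampling (the disclosure does not specify which 10 of the 30 frames are kept, so the sampling pattern here is an assumption):

```python
def decimate_frames(frames, keep=10):
    """Evenly extract `keep` frames from a one-second batch of preview
    frames (e.g. 10 out of 30) to cut the per-second processing load."""
    step = len(frames) / keep
    return [frames[int(i * step)] for i in range(keep)]

second_of_frames = list(range(30))           # stand-in for 30 camera frames
sampled = decimate_frames(second_of_frames)  # every 3rd frame is kept
```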
In step 152, image reduction processing is performed on the image of the user's body part.
The reduction processing can improve the calculation speed, thereby meeting the real-time requirement of image acquisition and improving the user experience.
In step 153, the image of the user's body part is subjected to grayscale processing to obtain a grayscale image.
The gray processing can reduce the calculation amount, thereby improving the calculation speed, meeting the real-time requirement of image acquisition and improving the user experience.
At step 154, the grayscale image is convolved.
For example, the grayscale image may be convolved using the Laplacian operator as the convolution kernel. Suppose the convolution kernel matrix is represented as

W = [ W11  W12
      W21  W22 ]

and the grayscale image matrix is represented as

O = [ O11  O12  O13
      O21  O22  O23
      O31  O32  O33 ]

First, the convolution kernel matrix is flipped horizontally and vertically to obtain the transformation matrix W' corresponding to the convolution kernel, expressed as

W' = [ W22  W21
       W12  W11 ]

The transformation matrix W' is then convolved with the grayscale image matrix O, sliding it over O, to obtain the convolution result D, expressed as

D = [ D11  D12
      D21  D22 ]

where
D11=O11×W22+O12×W21+O21×W12+O22×W11,
D12=O12×W22+O13×W21+O22×W12+O23×W11,
D21=O21×W22+O22×W21+O31×W12+O32×W11,
D22=O22×W22+O23×W21+O32×W12+O33×W11。
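The element-wise formulas above can be reproduced by a small "valid" convolution routine that flips the kernel and slides it over the image matrix; this is an illustrative sketch, not the embodiment's implementation:

```python
def convolve2d_flipped(o, w):
    """'Valid' 2-D convolution of image matrix o with kernel w:
    flip w horizontally and vertically to get W', then slide W' over o,
    exactly as in the element-wise formulas for D11..D22 above."""
    kh, kw = len(w), len(w[0])
    wf = [row[::-1] for row in w[::-1]]  # flip both directions: W'
    oh, ow = len(o), len(o[0])
    out = []
    for i in range(oh - kh + 1):
        row = []
        for j in range(ow - kw + 1):
            row.append(sum(wf[a][b] * o[i + a][j + b]
                           for a in range(kh) for b in range(kw)))
        out.append(row)
    return out
```

For a 2x2 kernel on a 3x3 image this produces the 2x2 result D, with out[0][0] = O11*W22 + O12*W21 + O21*W12 + O22*W11, matching D11 above.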
In step 155, the variance of the convolved grayscale image is calculated. The variance is calculated over all elements of the convolved grayscale image matrix. The variance is then compared with a preset threshold: if the variance is greater than the preset threshold, the sharpness of the image meets the preset requirement; if not, the sharpness of the image does not meet the preset requirement and the image needs to be acquired again.
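A sketch of this variance-based sharpness test, assuming the variance is taken over all elements of the convolved matrix as described above (function names are illustrative):

```python
def variance(matrix):
    """Variance over all elements of the convolved grayscale matrix."""
    vals = [v for row in matrix for v in row]
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)

def is_sharp(convolved, threshold):
    """Variance-of-Laplacian sharpness test: True means the image
    meets the preset sharpness requirement."""
    return variance(convolved) > threshold
```

A uniform (blurred) response gives variance 0 and fails the test, while a response with strong edge activations passes.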
In step 156, in the event that the variance is greater than a preset threshold, step 160 is performed.
In step 157, in the case where the variance is not greater than the preset threshold, the image is acquired again, i.e., the process returns to step 140.
At step 160, a health assessment report for the user is generated based on the sensed data and the image of the user's body part.
In this embodiment, whether the acquired image meets the preset sharpness requirement is determined by calculating the variance of the image of the user's body part, which lays a foundation for generating the health assessment report and can improve the accuracy of the report.
FIG. 7 illustrates a schematic diagram of a method of generating a health assessment report, in accordance with further embodiments of the present disclosure.
As shown in fig. 7, the method of this embodiment includes steps 120-160.
The embodiment of fig. 7 differs from the embodiment of fig. 1 only in that step 130 is also included. Only the differences between fig. 7 and fig. 1 will be described below, and the same parts will not be described again.
At step 120, sensory data relating to the health condition of the user is obtained.
In step 130, it is detected whether the left and right rotation angles of the body part of the user are within a preset range.
For example, a key point detection method of the Open Source Computer Vision Library (OpenCV) may be used to obtain the coordinates of key points of the face, such as the left canthus, right canthus, nose tip, left mouth corner, right mouth corner, and lower jaw, and of the tongue, such as the upper-left corner, upper-right corner, center point, left 2/3 point, and right 2/3 point. The left and right rotation angles of the face and the tongue are then calculated respectively using OpenCV. The preset range of the left and right rotation angles of the face and tongue may be set to 15° or less; that is, if the calculated left rotation angle of the face or tongue is greater than 15°, it is determined that the user needs to be prompted to rotate the face or tongue to the right, and if the calculated right rotation angle of the face or tongue is greater than 15°, it is determined that the user needs to be prompted to rotate the face or tongue to the left.
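The thresholding logic of step 130 can be sketched as follows; the computation of the rotation angles themselves (from the OpenCV key points) is omitted, and the function name and prompt strings are hypothetical:

```python
def rotation_prompt(angle_left_deg, angle_right_deg, limit=15.0):
    """Decide the voice prompt from the measured left/right rotation
    angles of the face or tongue (15-degree threshold, as above).
    Returns None when the pose is within the preset range."""
    if angle_left_deg > limit:
        return "please rotate to the right"
    if angle_right_deg > limit:
        return "please rotate to the left"
    return None
```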
In case that the left and right rotation angles of the body part are within the preset range, step 140 is performed.
In case that the left rotation angle or the right rotation angle of the body part is not within the preset range, the user is prompted to rotate the body part, and step 130 is repeatedly performed.
At step 140, acquiring an image of a body part of a user;
at step 160, a health assessment report for the user is generated based on the sensed data and the image of the user's body part.
In the above embodiment, by judging whether the left and right rotation angles of the user's body part are within the preset range, the user can be automatically prompted to rotate the body part (for example, the face or tongue) to the left or right. The user thus clearly knows whether the angle of the body part meets the requirement and does not have to adjust the angle continuously, which improves the acquisition efficiency and the user's photographing experience.
FIG. 8 illustrates a schematic diagram of a method of generating a health assessment report, in accordance with still further embodiments of the present disclosure.
As shown in fig. 8, the method of this embodiment includes steps 120-160.
The embodiment of fig. 8 differs from the embodiment of fig. 1 only in that step 130' is also included. Only the differences between fig. 8 and fig. 1 will be described below, and the same parts will not be described again.
At step 120, sensory data relating to the health condition of the user is obtained.
In step 130', it is calculated whether the ratio of the size of the bounding box enclosing the body part of the user to the size of the preview box is within a preset range.
For example, an OpenCV object detection algorithm is used to detect the ratio of the size of the bounding box enclosing the user's face or tongue to the size of the preview box of the image. The preset range of the ratio may be set to 1/4 to 1/2; that is, when the detected ratio is smaller than the minimum value of the preset range, it is determined that the user is too far from the camera, and the user is prompted to move closer to the camera; when the detected ratio is larger than the maximum value of the preset range, it is determined that the user is too close to the camera, and the user is prompted to move farther away from the camera.
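A sketch of the ratio check in step 130'; how the "size ratio" of the two boxes is defined is not pinned down in the text, so taking the larger of the width and height ratios is an assumption for illustration, and the function name and prompt strings are hypothetical:

```python
def distance_prompt(bbox_w, bbox_h, preview_w, preview_h,
                    lo=0.25, hi=0.5):
    """Compare the bounding-box / preview-box size ratio with the
    preset range [1/4, 1/2] and choose a prompt; None means the
    distance to the camera is acceptable."""
    ratio = max(bbox_w / preview_w, bbox_h / preview_h)
    if ratio < lo:
        return "please move closer to the camera"
    if ratio > hi:
        return "please move farther from the camera"
    return None
```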
And in the case that the ratio is not within the preset range, prompting the user to approach or move away from the camera, and repeatedly executing the step 130'.
In the case that the ratio is within the preset range, step 140 is performed.
At step 140, an image of a body part of a user is acquired.
At step 160, a health assessment report for the user is generated based on the sensed data and the image of the user's body part.
In the above embodiment, the user can be automatically prompted to move closer to or farther from the camera, so that the user clearly knows whether the distance between the body part and the camera meets the requirement and does not have to adjust the position continuously, which improves the acquisition efficiency and the user's photographing experience.
FIG. 9 illustrates a schematic diagram of a method of generating a health assessment report, in accordance with yet further embodiments of the present disclosure.
As shown in fig. 9, the method of this embodiment includes steps 120-160.
At step 120, sensory data relating to the health condition of the user is obtained.
Steps 130 and 130' may be performed in either order. The only difference is the following: if step 130 is performed first, step 130' is performed when the left and right rotation angles of the body part are within the preset range, and step 140 is performed when the ratio is within the preset range; if step 130' is performed first, step 130 is performed when the ratio is within the preset range, and step 140 is performed when the left and right rotation angles of the body part are within the preset range. The case of performing step 130 first is described in detail below; an embodiment performing step 130' first may be understood with reference to this embodiment.
In step 130, it is detected whether the left and right rotation angles of the body part of the user are within a preset range.
In case that the left rotation angle or the right rotation angle of the body part is not within the preset range, the user is prompted to rotate the body part, and step 130 is repeatedly performed.
In case that the left and right rotation angles of the body part are within the preset range, step 130' is performed.
In step 130', it is calculated whether the ratio of the size of the bounding box enclosing the body part of the user to the size of the preview box is within a preset range.
And in the case that the ratio is not within the preset range, prompting the user to approach or move away from the camera, and repeatedly executing the step 130'.
In the case that the ratio is within the preset range, step 140 is performed.
At step 140, an image of a body part of a user is acquired.
In step 150, it is determined whether the variance of the image is greater than a preset threshold.
In case the variance of the image is larger than a preset threshold, step 160 is performed.
At step 160, a health assessment report for the user is generated based on the sensed data and the image of the user's body part.
In this embodiment, the health assessment report of the user is generated by combining the sensing data of the user with the image of the user's body part; compared with analyzing the user's health condition only from sensor data, the health assessment result is more accurate and comprehensive. Whether the image is sufficiently sharp can be determined by calculation, so that the acquired image meets the sharpness requirement. Meanwhile, the position of the preview frame is adjusted so that the image of the body part (the face or the tongue) is completely displayed in the preview frame; the user clearly knows that the body part is being captured and does not have to adjust his or her position continuously, which improves the acquisition efficiency and the photographing experience of users of different heights. In addition, the user can be automatically prompted to rotate the body part to the left or right and to move closer to or farther from the camera, so that the user clearly knows whether the angle of the body part and the distance from the camera meet the requirements, avoiding continuous adjustment of angle or position; this improves the acquisition efficiency and the user's photographing experience.
FIG. 10 illustrates a schematic diagram of a system that generates a health assessment report, according to some embodiments of the present disclosure.
As shown in FIG. 10, the system 1000 for generating a health assessment report of this embodiment includes a sensor 1010, a camera 1020, and a processor 1030, and in some embodiments further includes a voice interaction module 1040.
In some embodiments, the system is initialized after startup; the initialization may complete operations such as sensor self-test, network state self-test, server time synchronization, and face recognition service initialization. After successful initialization, a login management interface is entered. Login management supports two modes, face recognition login and user two-dimensional-code scanning login; after a successful login, the unique identifier of the user is obtained.
A sensor 1010 for acquiring sensory data related to the health condition of the user. The sensors include one or more of an Electrocardiography (ECG) sensor, a photoplethysmography (PPG) sensor, and a Bioelectrical Impedance Measurement (BIM) sensor.
The sensor 1010 further includes one or more of: a power-frequency trap 1011, a filter 1012, a signal amplifier 1013, and a signal converter 1014. The power-frequency trap 1011 is used to filter power-frequency interference from the sensing data; the filter 1012, such as a Butterworth low-pass filter, is used to filter electromyographic signal interference from the sensing data; the signal amplifier 1013 is used to perform signal gain amplification; and the signal converter 1014 is used to convert the analog signal of the sensing data into a digital signal.
A camera 1020 for capturing images of a body part of a user.
A processor 1030 configured to generate a health assessment report for the user based on the sensed data acquired by the sensor and the image captured by the camera.
The voice interaction module 1040 is configured to send a voice prompt to prompt the user to adjust the pose, for example, to prompt the user to rotate the body part, or to prompt the user to move closer to or farther away from the camera.
In this embodiment, the health assessment report of the user is generated by combining the sensing data of the user with the image of the user's body part, so the health assessment result is more accurate and comprehensive. Whether the image is sufficiently sharp can be determined by calculation, so that the acquired image meets the sharpness requirement. Meanwhile, the position of the preview frame is adjusted so that the image of the body part (the face or the tongue) is completely displayed in the preview frame; the user clearly knows that the body part is being captured and does not have to adjust his or her position continuously, which improves the acquisition efficiency and the photographing experience of users of different heights. In addition, the user can be automatically prompted to rotate the body part to the left or right and to move closer to or farther from the camera, so that the user clearly knows whether the angle of the body part and the distance from the camera meet the requirements, thereby improving the acquisition efficiency and the user's photographing experience.
As will be appreciated by one skilled in the art, embodiments of the present disclosure may be provided as a method, system, or computer program product. Accordingly, the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present disclosure may take the form of a computer program product embodied on one or more non-transitory computer-readable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and so forth) having computer program code embodied therein.
The present disclosure is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the disclosure. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above description is only exemplary of the present disclosure and is not intended to limit the present disclosure, so that any modification, equivalent replacement, or improvement made within the spirit and principle of the present disclosure should be included in the scope of the present disclosure.