CN116503892A - Gesture judging method based on image and electronic equipment - Google Patents

Gesture judging method based on image and electronic equipment

Info

Publication number
CN116503892A
Authority
CN
China
Prior art keywords
point
user
electronic device
human body
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210073959.2A
Other languages
Chinese (zh)
Inventor
马春晖
黄磊
陈霄汉
赵杰
魏鹏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority: CN202210073959.2A
Priority: PCT/CN2023/070938 (published as WO2023138406A1)
Publication: CN116503892A
Legal status: Pending


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/40: Extraction of image or video features
    • G06V10/44: Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20: Movements or behaviour, e.g. gesture recognition

Abstract

The application provides an image-based gesture judging method and electronic equipment. The electronic device may determine the human body key points and the human body contour of a first object in an image. By combining the human body key points and the human body contour, the electronic device can judge whether the gesture of the user is a preset gesture. The method can improve the accuracy of gesture judgment.

Description

Gesture judging method based on image and electronic equipment
Technical Field
The application relates to the technical field of terminals, in particular to an image-based gesture judging method and electronic equipment.
Background
In recent years, due to the demands of work and study, more and more office workers and students sit for long periods and lack exercise. This can easily lead to muscle tension in certain areas and reduced joint mobility, resulting in posture problems such as high and low shoulders, X-shaped and/or O-shaped legs, humpback, scoliosis, and the like. Poor posture affects not only personal image but also physical health. Judging the posture of the user can help the user learn whether such posture problems exist and adjust the posture in time.
Disclosure of Invention
The application provides an image-based gesture judging method and electronic equipment. The method can judge the gesture of the user by identifying the key points and the outline of the human body of the user in the image, and improves the accuracy of gesture judgment.
In a first aspect, the present application provides an image-based pose determination method. The method is applicable to electronic devices. The electronic device comprises a camera. The electronic device may acquire the first image through the camera. The electronic device may identify a first object from the first image and determine a first set of human keypoints for the first object. The electronic device may obtain a second image according to the first image and the first object, where a pixel value of a pixel of an area of the second image where the first object is located is different from a pixel value of other pixels in the second image. According to the first group of human body key points and the second image, the electronic equipment can judge whether the gesture of the first object is the first gesture.
It can be seen that the electronic device may perform binarization processing on the first image to obtain a second image, so as to distinguish an area where the first object is located from an area which does not include the first object in the first image. The second image may reflect an outline of the first object. The first object remains in a different pose and the contour of the first object is also different. Compared with the method for judging the gesture of the user only according to the human body key points, the electronic equipment can judge the gesture of the user by combining the first group of human body key points and the second image, so that the accuracy of gesture judgment can be improved.
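The following is a minimal sketch of how the "second image" described above could be produced, assuming a person-segmentation step (not specified by the application) already yields a boolean mask of the first object; the pixel values 255 and 0 are illustrative choices rather than values fixed by the application.

```python
import numpy as np

def make_second_image(first_image: np.ndarray, subject_mask: np.ndarray) -> np.ndarray:
    """Build the second image: pixels in the region where the first object is
    located get one pixel value, all other pixels get a different value.

    `subject_mask` is a boolean array marking the first object's region; how it
    is obtained (e.g. by a portrait-segmentation network) is an assumption of
    this sketch, not something the application specifies.
    """
    second_image = np.zeros(first_image.shape[:2], dtype=np.uint8)
    second_image[subject_mask] = 255  # first pixel value: the subject region
    return second_image               # background keeps 0: the second pixel value
```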
In combination with the first aspect, in some embodiments, the specific method for determining whether the gesture of the first object is the first gesture according to the first set of human keypoints and the second image may be: the electronic device may determine a first human body contour according to the second image, where the first human body contour includes the pixels at the junction between the pixels having a first pixel value and the pixels having a second pixel value on the second image, the pixels having the first pixel value are the pixels of the area in the second image where the first object is located, and the pixels having the second pixel value are the other pixels in the second image. According to the first set of human body key points and the first human body contour, the electronic device may determine whether the pose of the first object is the first pose.
The electronic device can determine the first human body outline from the second image, and determine whether the gesture of the user is the first gesture according to the first group of human body key points and the first human body outline. The location of each edge point on the first human contour is uniquely determined. The method can improve the accuracy of gesture judgment.
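As one possible reading of the contour definition above, the sketch below collects the subject pixels that border the background in the binarized second image; it assumes the 255/0 values from the previous sketch and ignores wrap-around effects at the image border for brevity.

```python
import numpy as np

def extract_contour(second_image: np.ndarray) -> np.ndarray:
    """Return an (N, 2) array of (x, y) coordinates of pixels lying on the
    junction between the subject region (value 255) and the background (0)."""
    subject = second_image == 255
    # A subject pixel belongs to the contour if any of its four neighbours is background.
    up    = np.roll(subject, -1, axis=0)
    down  = np.roll(subject,  1, axis=0)
    left  = np.roll(subject, -1, axis=1)
    right = np.roll(subject,  1, axis=1)
    boundary = subject & ~(up & down & left & right)
    ys, xs = np.nonzero(boundary)
    return np.stack([xs, ys], axis=1)
```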
In combination with the first aspect, in some embodiments, the method for determining whether the gesture of the first object is the first gesture according to the first set of human body keypoints and the first human body contour may be: the electronic device may determine a first set of edge points on the first human contour according to the first set of human key points, and determine whether the gesture of the first object is a first gesture according to the first set of edge points.
In combination with the first aspect, in some embodiments, before the first image is acquired by the camera, the electronic device may further display a first interface, where the first interface includes first prompt information, and the first prompt information is used to prompt the first object to keep the first posture.
In combination with the first aspect, in some embodiments, after determining whether the gesture of the first object is the first gesture according to the first set of human body keypoints and the second image, the electronic device may further display a second interface, where the second interface includes second prompt information, where the second prompt information is determined according to a first difference between the gesture of the first object and the first gesture, and the second prompt information is used to prompt the first object to eliminate the first difference.
As can be seen from the above embodiments, the electronic device may display related prompt information on the interface to prompt the first object to maintain the first posture. When the gesture of the first object is judged not to be the first gesture, the electronic device can guide the user to complete the action corresponding to the first gesture according to the difference between the gesture of the first object and the first gesture, so that the first gesture is maintained. The method can help the user maintain the first posture.
Optionally, in addition to displaying the related prompt information on the interface, the electronic device may prompt the user to adjust the gesture by means of a voice prompt or the like, so that the user completes the action corresponding to the first gesture.
In combination with the first aspect, in some embodiments, the electronic device may further display a third interface prior to displaying the first interface, the third interface including the first option. The electronic device receives a first operation on the first option. The first interface may be displayed by the electronic device based on the first operation.
It can be seen that the electronic device may prompt the user to maintain the first gesture based on the user's selection of the first option.
For example, the first option described above may be an option for evaluating one or more of high and low shoulders, X-shaped legs, O-shaped legs, XO-shaped legs, scoliosis, or other posture problems. In response to an operation on the first option, the electronic device may display the first interface to prompt the user to maintain the first gesture. The first gesture may, for example, be a posture in which the body faces the camera, the arms are placed on both sides of the body at a distance from it, and the legs stand naturally and are brought together.
As another example, the first option described above may be an option for assessing humpback. In response to operation of the first option, the electronic device may display the first interface to prompt the user to maintain the first gesture. The first posture may be, for example, a posture in which the body side faces the camera, standing naturally.
That is, the user may select an item for posture evaluation. The electronic device may instruct the user to perform the corresponding action based on the user's selection. Thus, when the posture problem the user has selected for evaluation requires only one gesture to be maintained, the user only needs to complete the action corresponding to that gesture and does not need to complete the actions required for evaluating other posture problems.
In combination with the first aspect, in some embodiments, in a case where the gesture of the first object is determined to be the first gesture, the electronic device may determine whether the first object has the first body problem according to the first set of human body key points and the second image.
The electronic device may determine a first human body contour of the first object according to the second image. According to the first group of human body key points and the first human body outline, the electronic equipment can judge whether the first object has the first body state problem.
In some embodiments, the electronic device may determine a first set of edge points on the first human contour according to the first set of human key points, and determine whether the first object has a first body problem according to the first set of edge points.
It will be appreciated that the locations of the edge points on the first human body contour are uniquely determined. By using the first set of edge points to judge whether the user's gesture is a gesture exhibiting a posture problem, the electronic device can more accurately judge whether the user has the posture problem. This may help the user better understand his or her posture and correct posture problems in time.
In combination with the first aspect, in some embodiments, the electronic device may further display a fourth interface, where the fourth interface includes third prompting information, and the third prompting information is used to prompt the first object to maintain the second gesture.
Wherein the third interface further includes a second option. The electronic device also receives a second operation on the second option before displaying the fourth interface. The fourth interface may be displayed by the electronic device based on the second operation.
Optionally, in the case that the gesture of the first object is determined to be the first gesture, the electronic device may directly display the fourth interface.
In combination with the first aspect, in some embodiments, the electronic device may further acquire a third image via the camera after displaying the fourth interface. The electronic device may identify the first object from the third image and determine a second set of human keypoints for the first object. According to the second set of human body key points, the electronic device can determine whether the gesture of the first object is a second gesture. The second pose is different from the first pose.
In combination with the first aspect, in some embodiments, the electronic device may display a fifth interface if it is determined that the gesture of the first object is not the second gesture. The fifth interface may include fourth prompt information. The fourth prompt information may be determined based on a second difference between the gesture of the first object and the second gesture, and is used to prompt the first object to eliminate the second difference.
In combination with the first aspect, in some embodiments, in a case where it is determined that the pose of the first object is the second pose, the electronic device may obtain a fourth image according to the third image and the first object, where a pixel value of a pixel in an area where the first object is located in the fourth image is different from a pixel value of other pixels in the fourth image. According to the second group of human body key points and the fourth image, the electronic equipment can judge whether the first object has a second state problem, and the second state problem is different from the first state problem.
It can be seen that the electronic device may determine the pose of the first object by combining the second set of human body key points and the fourth image, thereby determining whether the first object has a posture problem. This can improve the accuracy of the posture evaluation result and thereby help the user better understand his or her posture and correct posture problems in time.
With reference to the first aspect, in some embodiments, the electronic device may determine the second human body contour according to the fourth image. The second human body contour includes pixels at intersections of pixels having the first pixel values and pixels having the second pixel values on the fourth image. The pixels in the fourth image having the first pixel values are pixels of an area in the fourth image where the first object is located. The pixels in the fourth image having the second pixel values are other pixels in the fourth image (i.e., pixels of the fourth image that do not include the region of the first object). The electronic device may determine whether the first object has a second morphological problem according to the second set of human body keypoints and the second human body contour.
In combination with the first aspect, in some embodiments, the electronic device may determine a second set of edge points on the second human body contour according to the second set of human body key points, and determine whether the first object has a second morphological problem according to the second set of edge points.
With reference to the first aspect, in some embodiments, the first gesture includes: the arms are placed on both sides of the body and are spaced from the waist by a distance within a first distance range. The first set of human keypoints comprises: a first left wrist point and a first right wrist point. The specific method for judging whether the gesture of the first object is the first gesture according to the first group of human body key points and the first human body outline may be: the electronic device may determine whether the first straight line where the first left wrist point and the first right wrist point are located has 6 intersection points with the first human contour. Wherein 6 points of intersection of the first straight line and the first human body outline indicate that the user's arms are positioned on both sides of the body and the distance to the waist is within a first distance range.
With reference to the first aspect, in some embodiments, the first gesture includes: the legs are naturally upright and closed. The first set of human keypoints comprises: a first left hip point, a first left knee point, a first left ankle point, a first right hip point, a first right knee point, and a first right ankle point. The specific method for judging whether the gesture of the first object is the first gesture according to the first group of human body key points and the first human body outline may be: the electronic device may determine whether a first distance obtained by subtracting the third left leg distance from the first left leg distance plus the second left leg distance is less than a first threshold according to a first left leg distance between the first left hip point and the first left knee point, a second left leg distance between the first left knee point and the first left ankle point, and a third left leg distance between the first left hip point and the first left ankle point. The electronic device may determine whether a second distance obtained by subtracting the third right leg distance from the first right leg distance plus the second right leg distance is less than a first threshold according to a first right leg distance between the first right hip point and the first right knee point, a second right leg distance between the first right knee point and the first right ankle point, and a third right leg distance between the first right hip point and the first right ankle point. Wherein the first distance and the second distance being smaller than the first threshold value indicate that the legs of the user are naturally upright.
Further, the electronic device may determine whether the first line segment between the first left knee point and the first right knee point has 2 intersections with the first human contour. When the first line segment has 2 intersection points with the first human body contour, the electronic device may determine whether a third distance between the first line segment and the 2 intersection points of the first human body contour is smaller than a second threshold. The electronic device may determine whether the second line segment between the first left ankle point and the first right ankle point has 2 intersections with the first human contour. When the second line segment has 2 intersection points with the first human body contour, the electronic device may determine whether a fourth distance between the second line segment and the 2 intersection points of the first human body contour is smaller than a third threshold. The intersection point of the first line segment and the first human body contour is less than 2, or the third distance is less than a second threshold, or the intersection point of the second line segment and the first human body contour is less than 2, or the fourth distance is less than a third threshold, which can indicate that the legs of the user are folded.
In some embodiments, when it is determined that the number of intersections between the first line segment and the first human body contour is less than 2, or that the number of intersections is 2 and the third distance between the 2 intersections is less than the second threshold, the electronic device may determine that the legs of the user are closed. For example, when the legs of a user with a normal leg type, an XO-type leg, or an X-type leg are closed, the condition that the number of intersections between the first line segment and the first human body contour is less than 2 or the third distance is less than the second threshold is satisfied. In this case, the electronic device need not judge whether the second line segment intersects the first human body contour, which can improve the efficiency of the judgment.
When the third distance is determined to be greater than or equal to the second threshold, the electronic device may determine whether the legs of the user are folded by using the first left ankle point and the first right ankle point. For example, when the legs are closed, the user with the O-shaped legs does not satisfy the condition that the number of intersections of the first line segment and the first human body contour is less than 2 or the third distance is less than the second threshold, but satisfies the condition that the number of intersections of the second line segment and the first human body contour is less than 2 or the fourth distance is less than the third threshold.
In some embodiments, the electronic device may first determine whether there are 2 intersections between the second line segment and the first human body contour. If there are 2 intersections, the electronic device may determine whether the fourth distance between the two intersections is smaller than the third threshold. If the fourth distance is greater than or equal to the third threshold, the electronic device may determine whether there are 2 intersections between the first line segment and the first human body contour. If there are 2 intersections, the electronic device may determine whether the third distance between the two intersections is smaller than the second threshold.
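The next sketch combines the straight-leg test (the triangle-inequality check on the hip, knee and ankle points) with the closed-leg test based on how the knee-to-knee and ankle-to-ankle segments intersect the contour. All threshold values are illustrative assumptions, as is the pixel tolerance used when matching contour pixels to a segment.

```python
import numpy as np

def dist(a, b):
    return float(np.linalg.norm(np.asarray(a, float) - np.asarray(b, float)))

def leg_is_straight(hip, knee, ankle, first_threshold=10.0):
    # Collinear keypoints: hip-knee plus knee-ankle barely exceeds hip-ankle.
    return (dist(hip, knee) + dist(knee, ankle) - dist(hip, ankle)) < first_threshold

def segment_contour_gap(p1, p2, contour_pts, tol=1.0, merge_px=3.0):
    """Distance between the two crossings of segment p1-p2 with the contour,
    or None if the segment crosses the contour fewer than two times."""
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    pts = np.asarray(contour_pts, float)
    d = p2 - p1
    length = float(np.linalg.norm(d))
    u = d / length
    n = np.array([-u[1], u[0]])
    rel = pts - p1
    along = rel @ u
    on_seg = (np.abs(rel @ n) <= tol) & (along >= 0.0) & (along <= length)
    hits = np.sort(along[on_seg])
    if len(hits) == 0:
        return None
    groups = np.split(hits, np.where(np.diff(hits) > merge_px)[0] + 1)
    if len(groups) < 2:
        return None
    return float(groups[-1].mean() - groups[0].mean())

def legs_closed(left_knee, right_knee, left_ankle, right_ankle,
                contour_pts, second_threshold=15.0, third_threshold=15.0):
    knee_gap = segment_contour_gap(left_knee, right_knee, contour_pts)
    if knee_gap is None or knee_gap < second_threshold:
        return True  # knees touch, so the ankle segment need not be tested
    ankle_gap = segment_contour_gap(left_ankle, right_ankle, contour_pts)
    return ankle_gap is None or ankle_gap < third_threshold
```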
In some embodiments, the electronic device may further determine whether the body of the user faces the camera according to the first set of human body key points. In particular, the first set of human keypoints may comprise a first left hip point and a first right hip point. The electronic device may determine whether a distance between the first left hip point and the first right hip point is greater than a preset distance threshold. The distance between the first left hip point and the first right hip point being greater than a preset distance threshold may indicate that the body of the user is facing the camera.
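A small sketch of the facing-the-camera check described above: in a frontal view the two hip points project far apart, while in a side view the projected distance collapses. The pixel threshold here is purely illustrative.

```python
import numpy as np

def body_faces_camera(left_hip, right_hip, preset_distance_threshold=40.0):
    gap = np.linalg.norm(np.asarray(left_hip, float) - np.asarray(right_hip, float))
    return gap > preset_distance_threshold  # large projected hip distance: frontal view
```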
With reference to the first aspect, in some embodiments, the first state problem may include one or more of: high and low shoulders, X-shaped legs, O-shaped legs, XO-shaped legs and scoliosis.
In combination with the first aspect, in some embodiments, the first set of human keypoints includes a first left shoulder point and a first right shoulder point, the first set of edge points includes a first left shoulder edge point and a first right shoulder edge point, the first left shoulder edge point is determined from a left shoulder region of the first left shoulder point on the first human contour, and the first right shoulder edge point is determined from a right shoulder region of the first right shoulder point on the first human contour.
In one possible implementation, the electronic device may determine the locations of the first set of human keypoints and edge points (i.e., pixels) on the first human contour in the same pixel coordinate system. The electronic device may determine, on a straight line a passing through the first left shoulder point and perpendicular to the horizontal line, an edge point on the first human contour that is greater in value on the longitudinal axis (i.e., the y-axis) of the pixel coordinate system than the first left shoulder point and closest to the first left shoulder point. The electronic device may determine the one edge point as a first left shoulder edge point. The direction of the straight line a may be the y-axis direction. Similarly, the electronic device determines, on a straight line B passing through the first right shoulder point and perpendicular to the horizontal line, an edge point on the first human contour, where the value on the y-axis of the pixel coordinate system is greater than the value on the y-axis of the first right shoulder point, and is closest to the first right shoulder point. The electronic device may determine the one edge point as a first right shoulder edge point.
The specific method for judging whether the first object has the first body state problem according to the first group of human body key points and the second image may be as follows: the electronic device determines a first included angle between the straight line on which the first left shoulder edge point and the first right shoulder edge point are located and a horizontal line, and judges whether the first included angle is greater than a first angle threshold. The first included angle being greater than the first angle threshold may indicate that the user has high and low shoulders.
According to this embodiment, the electronic device can determine edge points on the shoulders of the human body contour according to the human body key points, and use the edge points to compare the direction of the shoulders of the user under evaluation with the direction of the shoulders of a human body with normal posture, so as to obtain the high-low shoulder evaluation result for the user. Because the positions of the edge points are fixed, the method can reduce the influence of floating detected positions of human body key points on the high-low shoulder evaluation result and improve its accuracy.
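Below is a sketch of the shoulder-edge-point search and the included-angle test. It follows the description above (picking, on the vertical line through each shoulder keypoint, the nearest contour pixel with a larger y value); the angle threshold and pixel tolerance are illustrative assumptions.

```python
import numpy as np

def shoulder_edge_point(shoulder_kp, contour_pts, x_tol=1.0):
    """Nearest contour pixel on the vertical line through the shoulder keypoint
    whose y value is greater than the keypoint's, as described above."""
    sx, sy = float(shoulder_kp[0]), float(shoulder_kp[1])
    pts = np.asarray(contour_pts, float)
    cand = pts[(np.abs(pts[:, 0] - sx) <= x_tol) & (pts[:, 1] > sy)]
    if len(cand) == 0:
        return None
    return cand[np.argmin(cand[:, 1] - sy)]

def has_high_low_shoulders(left_shoulder_kp, right_shoulder_kp, contour_pts,
                           first_angle_threshold_deg=3.0):
    left_edge = shoulder_edge_point(left_shoulder_kp, contour_pts)
    right_edge = shoulder_edge_point(right_shoulder_kp, contour_pts)
    if left_edge is None or right_edge is None:
        return False
    # First included angle between the shoulder-edge line and the horizontal.
    angle = np.degrees(np.arctan2(abs(left_edge[1] - right_edge[1]),
                                  abs(left_edge[0] - right_edge[0])))
    return angle > first_angle_threshold_deg
```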
In combination with the first aspect, in some embodiments, the first set of human body keypoints comprises a first left knee point, a first right knee point, a first left ankle point, and a first right ankle point, the first set of edge points comprises a first left knee inner edge point, a first right knee inner edge point, a first left foot inner edge point, a first right foot inner edge point, and M pairs of calf inner edge points; the first left knee inner edge point and the first right knee inner edge point are intersection points of a line segment between the first left knee point and the first right knee point and the first human body contour, and the first left foot inner edge point and the first right foot inner edge point are intersection points of a line segment between the first left ankle point and the first right ankle point and the first human body contour; a pair of inner leg edge points of the M pairs of inner leg edge points includes a left inner leg edge point and a right inner leg edge point located at the same height, the left inner leg edge point being a pixel between a first left knee inner edge point and a first left foot inner edge point on the first human body contour, the right inner leg edge point being a pixel between a first right knee inner edge point and a first right foot inner edge point on the first human body contour, and M being a positive integer.
The specific method for judging whether the first object has the first body state problem according to the first group of human body key points and the second image may be as follows: a fifth distance between the first left knee inner edge point and the first left foot inner edge point and a sixth distance between the first left knee inner edge point and the first right knee inner edge point are determined, and it is judged whether a first ratio of the sixth distance to the fifth distance is greater than a fourth threshold. The first ratio being greater than the fourth threshold may indicate that the user has O-shaped legs.
If the first ratio is less than or equal to the fourth threshold, a seventh distance between the first left foot inner edge point and the first right foot inner edge point is determined, and it is judged whether a second ratio of the seventh distance to the sixth distance is greater than a fifth threshold. The second ratio being greater than the fifth threshold may indicate that the user has X-shaped legs.
If the second ratio is less than or equal to the fifth threshold, it is judged whether the ratio of the distance between any pair of inner-calf edge points in the M pairs of inner-calf edge points to the sixth distance is greater than a sixth threshold. Such a ratio being greater than the sixth threshold may indicate that the user has XO-shaped legs. Otherwise, the electronic device may determine that the user's leg type is a normal leg type.
It is understood that if the line segment between the first left knee point and the first right knee point has no intersection with the first human body contour, the value of the sixth distance may be 0. If there is no intersection point between the line segment between the first left ankle point and the first right ankle point and the first human body contour, the value of the seventh distance may be 0.
As can be seen from the above embodiments, in evaluating the user's leg shape, the electronic device may normalize the distance between edge points inside the knee, the distance between edge points of the ankle inside the feet, and the distance between edge points inside the left and right lower legs. I.e. the user's leg shape is determined by the ratio of the distance to the sixth distance. The normalization process described above may enable the method of assessing leg types described above to be adapted to different users. In general, if the legs of two users are O-shaped legs of the same degree, the longer the legs, the greater the distance between the knees of the user. If the legs of two users are of the same degree of X-shaped legs, the longer the legs, the greater the distance between the ankles in the feet of the user. If the legs of two users are similar XO-type legs, the longer the legs are, the greater the distance between the left and right calf is. The method can evaluate the leg type of the user more accurately.
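The following sketch classifies the leg type from the normalized ratios described above. The divisors follow the literal wording of the embodiments (the first ratio over the fifth distance, the other two over the sixth distance); the threshold values and the small epsilon used to guard the zero-gap case are assumptions of this sketch.

```python
def classify_leg_type(fifth_distance, sixth_distance, seventh_distance, calf_gaps,
                      fourth_threshold=0.20, fifth_threshold=0.30,
                      sixth_threshold=0.10, eps=1e-6):
    """fifth_distance: left-knee-inner to left-foot-inner edge point (lower-leg length);
    sixth_distance: gap between the inner-knee edge points (0 if no intersection);
    seventh_distance: gap between the inner-foot edge points (0 if no intersection);
    calf_gaps: distances of the M pairs of inner-calf edge points."""
    first_ratio = sixth_distance / max(fifth_distance, eps)
    if first_ratio > fourth_threshold:
        return "O-shaped legs"
    second_ratio = seventh_distance / max(sixth_distance, eps)
    if second_ratio > fifth_threshold:
        return "X-shaped legs"
    if any(gap / max(sixth_distance, eps) > sixth_threshold for gap in calf_gaps):
        return "XO-shaped legs"
    return "normal leg type"
```

Note that when the knees touch, the sixth distance is zero; an implementation might normalise the later ratios by the fifth distance instead, but this sketch keeps the wording of the embodiments and relies on the epsilon guard.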
In combination with the first aspect, in some embodiments, the first set of human keypoints comprises a first left shoulder point, a first right shoulder point, a first left hip point, and a first right hip point, the first set of edge points comprises a first left shoulder edge point, a first right shoulder edge point, a first left hip edge point, and a first right hip edge point, the first left shoulder edge point is determined from a left shoulder region of the first left shoulder point on the first human contour, the first right shoulder edge point is determined from a right shoulder region of the first right shoulder point on the first human contour, the first left hip edge point is determined from a left waist region of the first left hip point on the first human contour, and the first right hip edge point is determined from a right waist region of the first right hip point on the first human contour. The specific method for judging whether the first object has the first body state problem according to the first group of human body key points and the second image may be as follows: an eighth distance between the first left shoulder edge point and the first left hip edge point is determined, and a ninth distance between the first right shoulder edge point and the first right hip edge point is determined. It is determined whether a third ratio of a smaller one of the eighth distance and the ninth distance to a larger one is greater than a seventh threshold. The third ratio being greater than a seventh threshold may indicate that the user has scoliosis.
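A brief sketch of the torso-side comparison used for scoliosis: the eighth and ninth distances are measured between the shoulder and waist edge points on each side, and their smaller-to-larger ratio (the third ratio) is compared against the seventh threshold. The threshold value is illustrative, and the direction of the comparison follows the claim text as translated.

```python
import numpy as np

def third_ratio(left_shoulder_edge, left_hip_edge, right_shoulder_edge, right_hip_edge):
    """Smaller of the two torso-side lengths divided by the larger one."""
    eighth = np.linalg.norm(np.asarray(left_shoulder_edge, float)
                            - np.asarray(left_hip_edge, float))    # left side length
    ninth = np.linalg.norm(np.asarray(right_shoulder_edge, float)
                           - np.asarray(right_hip_edge, float))    # right side length
    return min(eighth, ninth) / max(eighth, ninth)

def may_have_scoliosis(left_shoulder_edge, left_hip_edge,
                       right_shoulder_edge, right_hip_edge, seventh_threshold=0.9):
    ratio = third_ratio(left_shoulder_edge, left_hip_edge,
                        right_shoulder_edge, right_hip_edge)
    return ratio > seventh_threshold  # comparison direction follows the claim text
```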
With reference to the first aspect, in some embodiments, the first state problem includes: humpback.
With reference to the first aspect, in some embodiments, the first set of edge points includes the edge points that make up a first curve segment of the upper back region of the first human body contour. The specific method for judging whether the first object has the first body state problem according to the first group of human body key points and the second image may be as follows: determining a first area of the closed region formed by the first curve segment and the first straight line segment between the two end points of the first curve segment, determining a second area of a circle taking the first straight line segment as its diameter, and judging whether a fourth ratio of the first area to the second area is greater than an eighth threshold. The fourth ratio being greater than the eighth threshold may indicate that the user has a humpback. Alternatively, the first curve segment is smoothed to obtain a first smooth curve segment, and it is judged whether the maximum curvature of the first smooth curve segment is greater than a ninth threshold. A maximum curvature of the first smooth curve segment above the ninth threshold may indicate that the user has a humpback.
With reference to the first aspect, in some embodiments, in determining the first curve segment, the electronic device may determine the orientation of the body of the first object. For example, the electronic device may determine the orientation of the body of the first object according to the values of the first left ankle point and the first right ankle point on the y-axis of the pixel coordinate system. If the y value of the left ankle point is smaller than the y value of the right ankle point, the electronic device may determine that the body of the user faces the negative direction of the x-axis in the pixel coordinate system. If the y value of the left ankle point is greater than the y value of the right ankle point, the electronic device may determine that the body of the user faces the positive direction of the x-axis in the pixel coordinate system. Further, the electronic device may determine the first curve segment in the back region of the first human body contour based on the first set of key points.
As can be seen from the above embodiments, humpback is mainly reflected in a bulge of the user's back. The electronic device can distinguish the chest region and the back region in the human body contour, and then select edge points in the back region to judge whether the user has a humpback. This avoids confusing the edge points of the chest region with those of the back region during the judgment and improves the accuracy of the humpback detection result. Moreover, since the positions of the back-region edge points on the human body contour are fixed, using these edge points allows the electronic device to judge more accurately whether the user has a humpback and how severe it is.
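The sketch below illustrates both humpback tests on the first curve segment (an ordered list of back-region contour pixels): the area ratio against the circle built on the chord between its end points, and the maximum curvature of a smoothed copy of the curve. The smoothing window, the thresholds and the shoelace-based area computation are assumptions of this sketch.

```python
import numpy as np

def humpback_by_area(curve_pts, eighth_threshold=0.35):
    """Fourth-ratio test: area enclosed between the curve and the chord joining
    its end points, divided by the area of the circle whose diameter is that chord."""
    pts = np.asarray(curve_pts, float)
    x, y = pts[:, 0], pts[:, 1]
    # Shoelace formula over the polygon that closes the curve with the chord.
    first_area = 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))
    chord_len = np.linalg.norm(pts[-1] - pts[0])
    second_area = np.pi * (chord_len / 2.0) ** 2
    return (first_area / second_area) > eighth_threshold

def humpback_by_curvature(curve_pts, ninth_threshold=0.02, window=9):
    """Maximum-curvature test on a moving-average-smoothed copy of the curve."""
    pts = np.asarray(curve_pts, float)
    kernel = np.ones(window) / window
    xs = np.convolve(pts[:, 0], kernel, mode="valid")
    ys = np.convolve(pts[:, 1], kernel, mode="valid")
    dx, dy = np.gradient(xs), np.gradient(ys)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    curvature = np.abs(dx * ddy - dy * ddx) / np.maximum((dx**2 + dy**2) ** 1.5, 1e-9)
    return float(curvature.max()) > ninth_threshold
```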
In a second aspect, the present application provides an electronic device, which may comprise a camera, a memory and a processor, wherein the camera may be used for capturing images, the memory may be used for storing a computer program, and the processor may be used for invoking the computer program to cause the electronic device to perform the method as possible in any of the first aspects.
In a third aspect, the present application provides a computer readable storage medium comprising instructions which, when run on an electronic device, cause the electronic device to perform any one of the possible implementation methods as in the first aspect.
In a fourth aspect, the present application provides a computer program product which may contain computer instructions which, when run on an electronic device, cause the electronic device to perform any one of the possible implementations of the first aspect.
In a fifth aspect, the present application provides a chip for application to an electronic device, the chip comprising one or more processors for invoking computer instructions to cause the electronic device to perform any of the possible implementations of the first aspect.
It will be appreciated that the electronic device provided in the second aspect, the computer readable storage medium provided in the third aspect, the computer program product provided in the fourth aspect, and the chip provided in the fifth aspect are all configured to perform the method provided by the embodiments of the present application. Therefore, for the advantageous effects that they can achieve, reference may be made to the advantageous effects of the corresponding method, and details are not repeated here.
Drawings
Fig. 1 is a schematic diagram of different leg types provided in an embodiment of the present application;
Fig. 2 is a schematic diagram of human body key points provided in an embodiment of the present application;
Fig. 3A and Fig. 3B are schematic diagrams of scenarios for determining high and low shoulders according to an embodiment of the present application;
Fig. 4 is a schematic structural diagram of an electronic device 100 according to an embodiment of the present application;
Fig. 5 is a software block diagram of an electronic device 100 according to an embodiment of the present application;
Fig. 6A is a flowchart of a method for the electronic device 100 to acquire a posture assessment image according to an embodiment of the present application;
Fig. 6B is a schematic diagram of image processing performed by the electronic device 100 according to an embodiment of the present application;
Fig. 6C is a schematic diagram of a binarized image according to an embodiment of the present application;
Fig. 6D is a schematic diagram of determining whether a gesture of a user is a posture assessment gesture according to an embodiment of the present application;
Figs. 7A to 7D are schematic diagrams of scenarios in which the electronic device 100 prompts a user to adjust a gesture according to embodiments of the present application;
Figs. 8A to 8I are schematic diagrams of scenarios for performing posture assessment on a user according to embodiments of the present application;
Fig. 9 is a schematic diagram of another embodiment of determining whether a gesture of a user is a posture assessment gesture;
Figs. 10A to 10C are schematic diagrams of further scenarios in which the electronic device 100 prompts a user to adjust a gesture according to embodiments of the present application;
Figs. 11A and 11B are schematic diagrams of further scenarios for performing posture assessment on a user according to embodiments of the present application;
Figs. 12A to 12E are schematic diagrams of user interfaces for some posture evaluation results provided in embodiments of the present application.
Detailed Description
The technical solutions in the embodiments of the present application are described below with reference to the drawings in the embodiments of the present application. In the description of the embodiments of the present application, the terminology used in the embodiments below is for the purpose of describing particular embodiments only and is not intended to limit the application. As used in the specification of this application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms (e.g., "one or more") as well, unless the context clearly indicates otherwise. It should also be understood that in the embodiments below, "at least one" and "one or more" mean one, two, or more than two. The term "and/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may represent: A alone, both A and B, or B alone, where A and B may be singular or plural. The character "/" generally indicates that the associated objects are in an "or" relationship.
Reference in the specification to "one embodiment" or "some embodiments" or the like means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," and the like in the specification are not necessarily all referring to the same embodiment, but mean "one or more but not all embodiments" unless expressly specified otherwise. The terms "comprising," "including," "having," and variations thereof mean "including but not limited to," unless expressly specified otherwise. The term "coupled" includes both direct and indirect connections, unless stated otherwise. The terms "first," "second," and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated.
In the embodiments of the present application, words such as "exemplary" or "such as" are used to mean serving as examples, illustrations, or descriptions. Any embodiment or design described herein as "exemplary" or "for example" should not be construed as preferred or advantageous over other embodiments or designs. Rather, the use of words such as "exemplary" or "such as" is intended to present related concepts in a concrete fashion.
Due to the demands of work, study, and the like, more and more people sit for long periods, adopt improper sitting postures, and lack exercise. These conditions easily lead to problems such as high and low shoulders, X-shaped and/or O-shaped legs, humpback, scoliosis, and the like. The embodiment of the application provides an image-based gesture judging method for helping a user learn about his or her own posture problems. In this way, the user can correct such problems in time and maintain good physical health.
For ease of understanding, some of the physical problems to which this application relates are first described herein.
1. High-low shoulder
High and low shoulders refer to the phenomenon that the left shoulder and the right shoulder of the human body are at different heights. The shoulders of a user with normal posture are generally at the same height. In a natural standing state, if the user's left shoulder is higher than the right shoulder, or the right shoulder is higher than the left shoulder, the user has the posture problem of high and low shoulders. An incorrect backpack posture or long-term use of a single-shoulder bag may result in high and low shoulders.
2. X-shaped leg, O-shaped leg and XO-shaped leg
Referring to fig. 1, fig. 1 is a schematic diagram of a normal leg type, an X-type leg, an O-type leg, and an XO-type leg.
As shown in fig. 1, when the legs are naturally upright and brought together, the line of the legs can reflect the leg type of the user. If the user has a normal leg type, with the legs naturally upright and brought together, the user's knees can touch, the inner ankles can touch, the lines of the two legs are upright and close to each other, and parts of the inner sides of the left and right calves can also touch.
If the user has X-shaped legs, with the legs naturally upright and brought together, the user's knees can touch, but the inner ankles remain apart and cannot be brought together, and the lines of the left and right legs form an X shape. The X-shaped leg is also known as "knee eversion" (knock knees). Generally, an inner-ankle spacing of more than 1.5 cm can be regarded as an X-shaped leg. Here, the knees touching may include the inner sides of the knees being in contact, or the distance between the inner sides of the knees being particularly small (e.g., less than a preset distance threshold).
If the user has O-shaped legs, with the legs naturally upright and brought together, the user's inner ankles can touch, but the knees remain apart and cannot be brought together, and the lines of the left and right legs form an O shape. The O-shaped leg is also known as "knee varus".
If the user has XO-shaped legs, with the legs naturally upright and brought together, the user's knees can touch and the inner ankles can touch, but the left and right calves remain apart and cannot be brought together, with the calves in an everted form.
Improper sitting postures (e.g., crossing the legs), improper walking postures (e.g., walking with the toes pointed outward or inward), long-term wearing of high-heeled shoes, and the like may cause deformities of the legs, resulting in X-shaped legs, O-shaped legs, or XO-shaped legs.
3. Scoliosis (scoliosis)
Scoliosis refers to a lateral (e.g., left or right) curvature of a person's spine; the curved shapes include S shapes and C shapes. Viewed from the front or the back of the user, the spine of a user with normal posture is generally upright.
4. Humpback
Humpback refers to a morphological change caused by a backward protrusion of the thoracic spine and is a form of spinal deformity. It is characterized by an upper back that bulges and curves excessively backward.
The posture problems are not limited to high and low shoulders, X-shaped legs, O-shaped legs, XO-shaped legs, scoliosis, and humpback; they may also include forward head posture, anterior pelvic tilt, unequal leg length, knee hyperextension, and the like.
In one possible implementation, the electronic device may determine the gesture of the user by identifying human keypoints. The key points of the human body can also be called human skeleton nodes, human skeleton points and the like. Fig. 2 schematically illustrates a position distribution diagram of human body key points provided in an embodiment of the present application. As shown in fig. 2, the human body key points may include: head point, neck point, left shoulder point, right elbow point, left elbow point, right wrist point, left wrist point, abdomen point, right hip point, left and right hip intermediate point, right knee point, left knee point, right ankle point, left ankle point. The method is not limited to the above-mentioned human body key points, and other human body key points may be included in the embodiments of the present application.
The human body key points may be two-dimensional (2D) key points or three-dimensional (3D) key points. The 2D key points are key points distributed on a 2D plane. The 2D plane may be the image plane of the image on which human body key point recognition is performed. The 3D key points are key points distributed in 3D space. Compared with the 2D key points, the 3D key points also include depth information. The depth information reflects the distance of the human body key points from the camera that captures the image.
The implementation method for identifying the key points of the human body by the electronic equipment is not limited. For example, the electronic device may utilize a human keypoint detection model to identify human keypoints. The human body key point detection model may be a neural network-based model. An image containing a portrait is input into a human body key point detection model, and the human body key point detection model can output the position information of the human body key points of the portrait on the image.
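As an illustration only, the sketch below wraps a generic 2D keypoint detector behind a small helper; the model interface, the function name `detect_keypoints`, and the keypoint ordering are assumptions of this sketch and are not specified by the application.

```python
import numpy as np

KEYPOINT_NAMES = [
    "head", "neck", "left_shoulder", "right_shoulder", "left_elbow", "right_elbow",
    "left_wrist", "right_wrist", "abdomen", "left_hip", "right_hip",
    "left_knee", "right_knee", "left_ankle", "right_ankle",
]

def detect_keypoints(model, image: np.ndarray) -> dict:
    """Run a human keypoint detection model on an image containing a person and
    return a mapping from keypoint name to (x, y) pixel coordinates.

    `model` stands in for any neural-network-based detector that returns a
    (K, 2) array of pixel coordinates in the order of KEYPOINT_NAMES.
    """
    raw = np.asarray(model(image), dtype=float)
    return {name: (float(x), float(y)) for name, (x, y) in zip(KEYPOINT_NAMES, raw)}
```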
The human body key points may reflect the posture of the human body. When a user has one or more of the posture problems in the above embodiments, the posture of the user is generally different from that of a human body in a normal posture. The normal human body may be a human body without such problems as high and low shoulders, X-shaped legs, O-shaped legs, XO-shaped legs, scoliosis, humpback and the like.
Then, the electronic device can capture an image of the user standing naturally with the legs together, and identify the user's human body key points from the image. According to the human body key points, the electronic device can judge whether the gesture of the user is a gesture exhibiting a specific posture problem. The images used for identifying the human body key points may include a front view and a side view of the user. The front view may be an image acquired by the camera when the user faces the camera. The side view may be an image acquired by the camera when the angle between the plane of the user's body and the optical axis of the camera is smaller than a preset angle (for example, 5°). The value of the preset angle is not limited in the embodiments of the application.
For example, the electronic device may determine whether the user has a high or low shoulder according to the left shoulder point and the right shoulder point of the user. If the height difference between the left shoulder point and the right shoulder point is greater than a preset high-low shoulder threshold (e.g., 0.5 cm, etc.), the electronic device may determine that the user has a high-low shoulder. The embodiment of the application does not limit the high-low shoulder threshold.
The electronic device can judge whether the user has the X-shaped leg according to the distance between the left ankle point and the right ankle point. If the distance between the left ankle point and the right ankle point is greater than the preset X-leg threshold (e.g., 1.5 cm, etc.), the electronic device may determine that the user has an X-leg. The embodiment of the application does not limit the above-mentioned X-leg threshold.
The electronic device can judge whether the user has the O-shaped leg according to the distance between the left knee point and the right knee point. If the distance between the left knee point and the right knee point is greater than a preset O-leg threshold (e.g., 2 cm, etc.), the electronic device may determine that the user has an O-leg. The embodiments of the present application do not limit the above-mentioned O-leg threshold.
If the distance between the left ankle point and the right ankle point of the user is smaller than or equal to the preset X-type leg threshold, and the distance between the left knee point and the right knee point of the user is smaller than or equal to the preset O-type leg threshold, the electronic device may determine whether the user has an XO-type leg according to the distance between the line segment obtained by connecting the left knee point with the left ankle point and the line segment obtained by connecting the right knee point with the right ankle point. If the minimum distance between the line segment obtained by connecting the left knee point and the left ankle point and the line segment obtained by connecting the right knee point and the right ankle point is greater than a preset XO-type leg threshold (e.g., 2 cm, etc.), the electronic device may determine that the user has an XO-type leg. The above XO-leg threshold is not limited in the embodiments of the present application.
The electronic equipment can judge whether the user has scoliosis according to the connecting line of the neck point and the abdomen point and the connecting line of the abdomen point and the middle points of the left hip and the right hip. If the included angle between the connection line of the neck point and the abdomen point and the connection line of the abdomen point and the middle point of the left hip and the right hip is smaller than a preset scoliosis threshold (such as 175 degrees), the electronic equipment can determine that the user has scoliosis. The scoliosis threshold is not limited in the embodiments of the present application.
The electronic device can judge whether the user humpbacks or not according to the human body key points of the user on the side view. Specifically, the electronic device may identify a neck point and a back point of the user from the side view of the user. The electronic device may determine whether an included angle between a line connecting the neck point and the back point and a straight line in a vertical direction is greater than a preset humpback threshold (e.g., 10 °), etc. If the included angle between the line between the neck point and the back point and the straight line in the vertical direction is greater than the humpback threshold, the electronic device may determine that the user humpbacks. The humpback threshold is not limited in the embodiments of the present application.
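For comparison with the contour-based approach introduced later, the sketch below shows a few of the keypoint-only checks described above. The conversion from pixels to centimetres (`cm_per_px`) and the exact threshold values are assumptions of this sketch.

```python
import numpy as np

def keypoint_only_checks(kp, cm_per_px, high_low_shoulder_cm=0.5,
                         x_leg_cm=1.5, o_leg_cm=2.0):
    """kp: mapping from keypoint name to (x, y) pixel coordinates."""
    results = {}
    # High and low shoulders: height difference between the shoulder points.
    shoulder_diff = abs(kp["left_shoulder"][1] - kp["right_shoulder"][1]) * cm_per_px
    results["high_low_shoulders"] = shoulder_diff > high_low_shoulder_cm
    # X-shaped legs: distance between the ankle points.
    ankle_gap = np.linalg.norm(np.subtract(kp["left_ankle"], kp["right_ankle"])) * cm_per_px
    results["x_shaped_legs"] = ankle_gap > x_leg_cm
    # O-shaped legs: distance between the knee points.
    knee_gap = np.linalg.norm(np.subtract(kp["left_knee"], kp["right_knee"])) * cm_per_px
    results["o_shaped_legs"] = knee_gap > o_leg_cm
    return results
```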
However, the various parts of the human body are not exact points. When human body key points are used to represent body parts, the position of each key point can float within a certain region. This causes a large error when the posture of the user is determined using the human body key points alone. Consequently, judging whether the user has a posture problem (i.e., performing posture evaluation on the user) according to a posture determined from the key points also has a large error.
For example, as shown in fig. 3A, the electronic device detects an image including the human body 1 by using a human body key point detection model, and obtains a left shoulder point and a right shoulder point of the human body 1. Wherein the left shoulder point may represent a portion of the left shoulder of the human body 1. The right shoulder point may represent a portion of the right shoulder of the human body 1. Any one point in the region 1 of the human body 1 on the image can be used for representing a left shoulder point, and any one point in the region 2 can be used for representing a right shoulder point. I.e. the position of the left shoulder point identified by the electronic device may float in zone 1 and the position of the right shoulder point may float in zone 2.
For example, the positions of the left shoulder point and the right shoulder point of the human body 1 identified by the electronic device are shown in fig. 3A. In fig. 3A, the left shoulder point is the same height as the right shoulder point. The electronic device can determine that the person 1 does not have a high or low shoulder.
For another example, the positions of the left shoulder point and the right shoulder point of the human body 1 recognized by the electronic device are shown in fig. 3B. In fig. 3B, the left shoulder point and the right shoulder point are at different heights: the right shoulder point is higher than the left shoulder point, and the included angle between the line connecting the right shoulder point and the left shoulder point and the horizontal line is θ1. The electronic device can determine that the human body 1 has high and low shoulders and that the right shoulder is higher than the left shoulder.
As can be seen from the above examples, the electronic device may obtain different results when performing posture evaluation on the same user using only human body key points. A user may have a posture problem, yet because the positions of the human body key points float, the electronic device may determine from the key points that the user has no posture problem. Conversely, a user may have no posture problem, yet the electronic device may determine from the key points that the user has one. The errors introduced by such a gesture judging method make it difficult for users to accurately understand their own posture and correct posture problems in time.
The application provides an image-based gesture judging method. In this method, the electronic device can capture an image of the user and determine the user's human body key points and human body contour from the image. By combining the human body key points and the human body contour, the electronic device can judge whether the gesture of the user is a preset gesture. Compared with judging the gesture using only human body key points, this method can improve the accuracy of gesture judgment.
The preset gesture may be a specific posture evaluation gesture. The electronic device can determine the user's gesture according to the human body key points and the edge points on the human body contour, and guide the user to complete the action corresponding to the posture evaluation gesture according to the difference between the user's gesture and the posture evaluation gesture. The electronic device can perform image recognition on an image in which the user maintains the posture evaluation gesture, to obtain the human body key points and human body contour of the user in that gesture. Using these human body key points and the human body contour, the electronic device can judge whether the user's posture exhibits a posture problem, thereby realizing posture evaluation of the user.
Although the positions of the key points of the human body are floating, the positions of the edge points on the contour of the human body are uniquely determined. The user's body contour may reflect the user's pose. If the user has a posture problem, the posture of the user is usually different from that of a human body in a normal posture. Then, the method for judging the posture by combining the human body key points and the human body contours can improve the accuracy of the posture evaluation result. Thereby helping the user to better understand his posture and correct the posture problem in time.
It can be understood that determining the key points and the contours of the human body of the user based on the image to determine whether the posture of the user is a posture with posture problems (such as a posture with high and low shoulders, a posture with X-shaped legs, etc.), is equivalent to performing posture evaluation on the user. Namely, the image-based posture judging method provided by the application comprises a posture evaluating method. In the subsequent embodiments of the present application, implementation procedures of the posture evaluation method will be specifically described.
The following describes an electronic device according to an embodiment of the present application.
Referring to fig. 4, fig. 4 shows a schematic structural diagram of the electronic device 100.
The electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (universal serial bus, USB) interface 130, a charge management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, keys 190, a motor 191, an indicator 192, a camera 193, a display 194, and a subscriber identity module (subscriber identification module, SIM) card interface 195, etc. The sensor module 180 may include a pressure sensor 180A, a gyro sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
It is to be understood that the structure illustrated in the embodiments of the present application does not constitute a specific limitation on the electronic device 100. In other embodiments of the present application, electronic device 100 may include more or fewer components than shown, or certain components may be combined, or certain components may be split, or different arrangements of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 110 may include one or more processing units, such as: the processor 110 may include an application processor (application processor, AP), a modem processor, a graphics processor (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), a controller, a memory, a video codec, a digital signal processor (digital signal processor, DSP), a baseband processor, and/or a neural network processor (neural-network processing unit, NPU), etc. Wherein the different processing units may be separate devices or may be integrated in one or more processors.
The controller may be a neural hub and a command center of the electronic device 100, among others. The controller can generate operation control signals according to the instruction operation codes and the time sequence signals to finish the control of instruction fetching and instruction execution.
The USB interface 130 is an interface conforming to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type C interface, or the like. The USB interface 130 may be used to connect a charger to charge the electronic device 100, and may also be used to transfer data between the electronic device 100 and a peripheral device.
The charge management module 140 is configured to receive a charge input from a charger. The charging management module 140 may also supply power to the electronic device through the power management module 141 while charging the battery 142.
The power management module 141 is used for connecting the battery 142, and the charge management module 140 and the processor 110. The power management module 141 receives input from the battery 142 and/or the charge management module 140 and provides power to the processor 110, the internal memory 121, the external memory, the display 194, the camera 193, the wireless communication module 160, and the like.
The wireless communication function of the electronic device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the electronic device 100 may be used to cover a single or multiple communication bands.
The mobile communication module 150 may provide a solution for wireless communication including 2G/3G/4G/5G, etc., applied to the electronic device 100.
The wireless communication module 160 may provide solutions for wireless communication including wireless local area network (wireless local area networks, WLAN) (e.g., wireless fidelity (wireless fidelity, wi-Fi) network), bluetooth (BT), global navigation satellite system (global navigation satellite system, GNSS), frequency modulation (frequency modulation, FM), near field wireless communication technology (near field communication, NFC), infrared technology (IR), etc., as applied to the electronic device 100. The wireless communication module 160 may be one or more devices that integrate at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via the antenna 2, modulates the electromagnetic wave signals, filters the electromagnetic wave signals, and transmits the processed signals to the processor 110. The wireless communication module 160 may also receive a signal to be transmitted from the processor 110, frequency modulate it, amplify it, and convert it to electromagnetic waves for radiation via the antenna 2.
The electronic device 100 implements display functions through a GPU, a display screen 194, an application processor, and the like. The GPU is a microprocessor for image processing, and is connected to the display 194 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
The display screen 194 is used to display images, videos, and the like. The display 194 includes a display panel. The display panel may employ a liquid crystal display (liquid crystal display, LCD), an organic light-emitting diode (organic light-emitting diode, OLED), an active-matrix organic light-emitting diode (active-matrix organic light emitting diode, AMOLED), a flexible light-emitting diode (flexible light-emitting diode, FLED), a Mini LED, a Micro LED, a Micro-OLED, a quantum dot light-emitting diode (quantum dot light emitting diodes, QLED), or the like. In some embodiments, the electronic device 100 may include 1 or N display screens 194, N being a positive integer greater than 1.
The electronic device 100 may implement photographing functions through an ISP, a camera 193, a video codec, a GPU, a display screen 194, an application processor, and the like.
The ISP is used to process data fed back by the camera 193. For example, when photographing, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, the optical signal is converted into an electric signal, and the camera photosensitive element transmits the electric signal to the ISP for processing and is converted into an image visible to naked eyes. ISP can also optimize the noise, brightness and skin color of the image. The ISP can also optimize parameters such as exposure, color temperature and the like of a shooting scene. In some embodiments, the ISP may be provided in the camera 193.
The camera 193 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image onto the photosensitive element. The photosensitive element may be a charge coupled device (charge coupled device, CCD) or a Complementary Metal Oxide Semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal, which is then transferred to the ISP to be converted into a digital image signal. The ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into an image signal in a standard RGB, YUV, or the like format. In some embodiments, electronic device 100 may include 1 or N cameras 193, N being a positive integer greater than 1.
The digital signal processor is used for processing digital signals, and can process other digital signals besides digital image signals. For example, when the electronic device 100 selects a frequency bin, the digital signal processor is used to fourier transform the frequency bin energy, or the like.
Video codecs are used to compress or decompress digital video. The electronic device 100 may support one or more video codecs.
The NPU is a neural-network (NN) computing processor, and can rapidly process input information by referencing a biological neural network structure, for example, referencing a transmission mode between human brain neurons, and can also continuously perform self-learning. Applications such as intelligent awareness of the electronic device 100 may be implemented through the NPU, for example: image recognition, human body key point detection, portrait segmentation, voice recognition, text understanding and the like.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to enable expansion of the memory capabilities of the electronic device 100.
The internal memory 121 may be used to store computer executable program code including instructions. The processor 110 executes various functional applications of the electronic device 100 and data processing by executing instructions stored in the internal memory 121. The internal memory 121 may include a storage program area and a storage data area. The storage program area may store an application program (such as a sound playing function, an image playing function, etc.) required for at least one function of the operating system, etc. The storage data area may store data created during use of the electronic device 100 (e.g., audio data, phonebook, etc.), and so on. In addition, the internal memory 121 may include a high-speed random access memory, and may further include a nonvolatile memory such as at least one magnetic disk storage device, a flash memory device, a universal flash memory (universal flash storage, UFS), and the like.
The electronic device 100 may implement audio functions through an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, an application processor, and the like. Such as music playing, recording, etc.
The audio module 170 is used to convert digital audio information into an analog audio signal output and also to convert an analog audio input into a digital audio signal. The audio module 170 may also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be disposed in the processor 110, or a portion of the functional modules of the audio module 170 may be disposed in the processor 110.
The speaker 170A, also referred to as a "horn," is used to convert audio electrical signals into sound signals.
A receiver 170B, also referred to as a "earpiece", is used to convert the audio electrical signal into a sound signal.
Microphone 170C, also referred to as a "microphone" or "microphone", is used to convert sound signals into electrical signals.
The earphone interface 170D is used to connect a wired earphone.
The pressure sensor 180A is used to sense a pressure signal, and may convert the pressure signal into an electrical signal. The gyro sensor 180B may be used to determine a motion gesture of the electronic device 100. The air pressure sensor 180C is used to measure air pressure. The magnetic sensor 180D includes a hall sensor. The acceleration sensor 180E may detect the magnitude of acceleration of the electronic device 100 in various directions (typically three axes). A distance sensor 180F for measuring a distance. The electronic device 100 may measure the distance by infrared or laser. In some embodiments, the electronic device 100 may range using the distance sensor 180F to achieve quick focus.
The proximity light sensor 180G may include, for example, a Light Emitting Diode (LED) and a light detector, such as a photodiode. The light emitting diode may be an infrared light emitting diode. The electronic device 100 emits infrared light outward through the light emitting diode. The electronic device 100 detects infrared reflected light from nearby objects using a photodiode. When sufficient reflected light is detected, it may be determined that there is an object in the vicinity of the electronic device 100. When insufficient reflected light is detected, the electronic device 100 may determine that there is no object in the vicinity of the electronic device 100.
The ambient light sensor 180L is used to sense ambient light level. The fingerprint sensor 180H is used to collect a fingerprint. The temperature sensor 180J is for detecting temperature.
The touch sensor 180K, also referred to as a "touch panel". The touch sensor 180K may be disposed on the display screen 194, and the touch sensor 180K and the display screen 194 form a touch screen, which is also called a "touch screen". The touch sensor 180K is for detecting a touch operation acting thereon or thereabout. The touch sensor may communicate the detected touch operation to the application processor to determine the touch event type. Visual output related to touch operations may be provided through the display 194. In other embodiments, the touch sensor 180K may also be disposed on the surface of the electronic device 100 at a different location than the display 194.
The bone conduction sensor 180M may acquire a vibration signal.
The keys 190 include a power-on key, a volume key, etc. The keys 190 may be mechanical keys. Or may be a touch key. The electronic device 100 may receive key inputs, generating key signal inputs related to user settings and function controls of the electronic device 100.
The motor 191 may generate a vibration cue.
The indicator 192 may be an indicator light, may be used to indicate a state of charge, a change in charge, a message indicating a missed call, a notification, etc.
The SIM card interface 195 is used to connect a SIM card. The SIM card may be inserted into the SIM card interface 195, or removed from the SIM card interface 195 to enable contact and separation with the electronic device 100. The electronic device 100 may support 1 or N SIM card interfaces, N being a positive integer greater than 1.
The electronic device 100 may be a mobile phone, a television, a tablet computer, a notebook computer, a desktop computer, a netbook, a handheld computer, a personal digital assistant (personal digital assistant, PDA), an artificial intelligence (artificial intelligence, AI) device, or the like. The embodiment of the application does not particularly limit the specific type of the electronic device.
The software system of the electronic device 100 may employ a layered architecture, an event driven architecture, a microkernel architecture, a microservice architecture, or a cloud architecture. In this embodiment, taking an Android system with a layered architecture as an example, a software structure of the electronic device 100 is illustrated.
Fig. 5 is a software configuration block diagram of the electronic device 100 of the embodiment of the present application.
The layered architecture divides the software into several layers, each with a clear role and division of labor. The layers communicate with each other through software interfaces. In some embodiments, the Android system is divided into four layers, which are, from top to bottom, the application layer, the application framework layer, the Android runtime (Android runtime) and system libraries, and the kernel layer.
The application layer may include a series of application packages.
As shown in fig. 5, the application package may include applications for cameras, gallery, calendar, phone calls, maps, navigation, WLAN, bluetooth, music, video, short messages, etc.
The application framework layer provides an application programming interface (application programming interface, API) and programming framework for application programs of the application layer. The application framework layer includes a number of predefined functions.
As shown in fig. 5, the application framework layer may include a window manager, a content provider, a view system, a telephony manager, a resource manager, a notification manager, and the like.
The window manager is used for managing window programs. The window manager can acquire the size of the display screen, judge whether a status bar exists, lock the screen, intercept the screen and the like.
The content provider is used to store and retrieve data and make such data accessible to applications. The data may include video, images, audio, calls made and received, browsing history and bookmarks, phonebooks, etc.
The view system includes visual controls, such as controls to display text, controls to display pictures, and the like. The view system may be used to build applications. The display interface may be composed of one or more views. For example, a display interface including a text message notification icon may include a view displaying text and a view displaying a picture.
The telephony manager is used to provide the communication functions of the electronic device 100. Such as the management of call status (including on, hung-up, etc.).
The resource manager provides various resources for the application program, such as localization strings, icons, pictures, layout files, video files, and the like.
The notification manager allows the application to display notification information in a status bar, can be used to communicate notification type messages, can automatically disappear after a short dwell, and does not require user interaction. Such as notification manager is used to inform that the download is complete, message alerts, etc. The notification manager may also be a notification in the form of a chart or scroll bar text that appears on the system top status bar, such as a notification of a background running application, or a notification that appears on the screen in the form of a dialog window. For example, a text message is prompted in a status bar, a prompt tone is emitted, the electronic device vibrates, and an indicator light blinks, etc.
The Android runtime includes a core library and a virtual machine. The Android runtime is responsible for scheduling and management of the Android system.
The core library consists of two parts: one part is the functions that need to be called by the Java language, and the other part is the core library of Android.
The application layer and the application framework layer run in a virtual machine. The virtual machine executes java files of the application program layer and the application program framework layer as binary files. The virtual machine is used for executing the functions of object life cycle management, stack management, thread management, security and exception management, garbage collection and the like.
The system library may include a plurality of functional modules. For example: surface manager (surface manager), media Libraries (Media Libraries), three-dimensional graphics processing Libraries (e.g., openGL ES), 2D graphics engines (e.g., SGL), etc.
The surface manager is used to manage the display subsystem and provides a fusion of 2D and 3D layers for multiple applications.
Media libraries support a variety of commonly used audio, video format playback and recording, still image files, and the like. The media library may support a variety of audio and video encoding formats, such as MPEG4, h.264, MP3, AAC, AMR, JPG, PNG, etc.
The three-dimensional graphic processing library is used for realizing three-dimensional graphic drawing, image rendering, synthesis, layer processing and the like.
The 2D graphics engine is a drawing engine for 2D drawing.
The kernel layer is a layer between hardware and software. The kernel layer includes at least a display driver, a camera driver, an audio driver, and a sensor driver.
When an electronic device evaluates a user's posture by capturing images, the user is usually required to stand naturally with the legs together. An image captured while the user maintains this posture better reflects the user's posture problems. For example, if the user intentionally tilts the shoulders or intentionally bends the legs, it is difficult for the electronic device to determine the user's actual posture from the acquired images.
Then, in the process of performing the posture assessment, the electronic device may first guide the user to complete the action corresponding to the preset posture assessment posture, and collect and obtain the posture assessment image when the user maintains the posture assessment posture. The electronic device may perform a posture assessment for the user using the posture assessment image.
Referring to fig. 6A, fig. 6A is a flowchart illustrating a method for acquiring a posture evaluation image by an electronic device according to an embodiment of the present application.
The method may include steps S610 to S640. Wherein:
S610, collecting an image, and identifying human body key points and human body contours of a user in the image.
In some embodiments, electronic device 100 may receive a user operation to perform a posture assessment. Upon receiving the above-described user operation, the electronic device 100 may collect an image and recognize key points and contours of the user's human body in the image.
(1) And determining the user performing the morphological evaluation in the image.
For example, as shown in fig. 6B, when an image is acquired, the electronic device 100 may identify the user in the image for morphological evaluation. The electronic device 100 may identify the portrait contained in the image using a portrait detection model, and mark the region of the portrait in the image with a portrait selection box. The portrait selection box may be the smallest region in the image that contains the portrait. The shape of the portrait selection box may include a rectangle, a circle, and the like. The shape of the portrait selection box is not limited in the embodiment of the present application. The portrait detection model may be a neural-network-based model. An image is input into the portrait detection model, and the portrait detection model can output an image containing the portrait selection box. The training method of the portrait detection model is not limited in the embodiment of the present application.
If one image contains a plurality of portraits, the portrait detection model can determine the region of each portrait in the image and mark different portraits with different portrait selection boxes. In one possible implementation, the electronic device 100 may select the portrait identified by the portrait selection box located at the center of the image for morphological evaluation, as sketched below. In another possible implementation, the electronic device 100 may ask the user which portrait in the image the morphological evaluation should be performed on. In response to the user's operation of selecting one of the portraits in the image, the electronic device 100 may perform the morphological evaluation on the portrait selected by the user.
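By way of illustration only, the following is a minimal sketch of the "pick the portrait closest to the image center" rule. The (x, y, w, h) box representation and the example numbers are assumptions made for this sketch, not the patent's actual data structures.

```python
# Sketch: choosing the portrait selection box closest to the image center.
# Boxes are assumed to be (x, y, w, h) tuples in pixel coordinates.

def pick_center_box(boxes, image_width, image_height):
    cx, cy = image_width / 2, image_height / 2
    def distance_to_center(box):
        x, y, w, h = box
        bx, by = x + w / 2, y + h / 2           # box center
        return (bx - cx) ** 2 + (by - cy) ** 2  # squared distance is enough for ranking
    return min(boxes, key=distance_to_center)

# Example: three detected portraits in a 1920x1080 frame.
boxes = [(100, 200, 300, 700), (800, 150, 320, 800), (1500, 250, 280, 650)]
print(pick_center_box(boxes, 1920, 1080))  # -> (800, 150, 320, 800)
```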
Not limited to the above-described portrait detection model, the electronic device 100 may also determine the region of the image where the user undergoing the morphological evaluation is located by other methods. Determining this region allows interfering objects in the image (such as background clutter and passers-by) to be excluded, which reduces their influence on the posture evaluation and improves the accuracy of the posture evaluation result.
(2) Human body key points are identified, and human body key points of a user are determined.
When determining the region of the image where the user performing the posture evaluation is located, the electronic device 100 may perform human body key point recognition to determine the human body key point of the user. The electronic device 100 may perform human body key point recognition only on an area where the user performing the morphological evaluation is located in the image (i.e., an area where the portrait selection box is determined in the image). The method can reduce the complexity of operation in the human body key point identification process and improve the calculation efficiency. In one possible implementation, the electronic device 100 may utilize a human keypoint detection model to identify human keypoints. The human body key point detection model may refer to the description of the foregoing embodiments. The method for identifying the key points of the human body by the electronic device 100 is not limited in the embodiments of the present application.
Through the above-mentioned human body key point recognition, the electronic device 100 may obtain the positions of the respective human body key points of the user in the image. The position of the human body key point in the image may specifically be the position of the human body key point in the pixel coordinate system of the image. The pixel coordinate system of an image is described herein with the shape of the image as a rectangle. The pixel coordinate system of the image may be a two-dimensional rectangular coordinate system. The origin of the pixel coordinate system of the image may be located at any one of four vertices of the image, and the x-axis and the y-axis are respectively parallel to two sides of the image plane. The units of the coordinate axes in the pixel coordinate system of the image are pixels. For example, point (1, 1) in the pixel coordinate system may represent a pixel of a first row and a first column in the image.
In one possible implementation, the electronic device 100 may perform human keypoint detection on an acquired image using a human keypoint detection model to obtain a set of coordinates. The set of coordinates may be coordinates in which all human body keypoints (human body keypoints shown in fig. 2) of the user are arranged in a specified order. The set of coordinates may be coordinates of key points of the human body in a pixel coordinate system. For example, the first coordinate in the set of coordinates is the coordinate of the head point in the pixel coordinate system, and the second coordinate is the coordinate of the neck point in the pixel coordinate system. The arrangement sequence of the key points of the human body to which the coordinates belong in the group of coordinates is not limited in the embodiment of the application.
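As an illustrative aid only, the following sketch shows one way such an ordered coordinate list could be mapped to named key points. The point names and their order here are assumptions for the sketch; the actual order is defined by the detection model used.

```python
# Sketch: interpreting the keypoint detector output as an ordered list of
# (x, y) pixel coordinates and giving each entry a name.

KEYPOINT_ORDER = [
    "head", "neck", "right_shoulder", "right_elbow", "right_wrist",
    "left_shoulder", "left_elbow", "left_wrist", "right_hip", "right_knee",
    "right_ankle", "left_hip", "left_knee", "left_ankle",
]

def to_named_keypoints(coords):
    """coords: list of (x, y) pixel coordinates in the image's pixel coordinate system."""
    return dict(zip(KEYPOINT_ORDER, coords))

coords = [(320, 40), (320, 90), (270, 100), (250, 180), (240, 260),
          (370, 100), (390, 180), (400, 260), (290, 300), (285, 420),
          (283, 540), (350, 300), (353, 420), (356, 540)]
points = to_named_keypoints(coords)
print(points["right_shoulder"])  # -> (270, 100)
```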
(3) Portrait segmentation, recognizing the human body outline of the user.
When determining the region of the image where the user performing the morphological evaluation is located, the electronic device 100 may further perform portrait segmentation to identify the human body contour of the user. In one possible implementation, the electronic device 100 may utilize a portrait segmentation model for portrait segmentation. The portrait segmentation model may be a model based on a neural network. An image including a person is input into a person segmentation model, which can output a binarized image. For example, in the binarized image, the pixel values of the areas where the portrait is located are 255, and the pixel values of the areas where the non-portrait is located are 0. Or the pixel values of the areas where the figures are located are all 0, and the pixel values of the areas where the non-figures are located are all 255. The embodiment of the application does not limit the training method of the image segmentation model.
The electronic device 100 may perform portrait segmentation only on an area where the user performing the morphological evaluation in the image is located (i.e., an area where the portrait selection box is determined in the image). Specifically, the electronic device 100 may perform image segmentation on the area determined by the image selection frame in the image by using the image segmentation model, to obtain a binarized pixel value of the area. The electronic device 100 may determine the pixel value of the region outside the portrait selection box in the image as the pixel value of the region where the non-human body is located (e.g., 0). The method can reduce the complexity of operation in the human image segmentation process and improve the calculation efficiency.
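A minimal sketch of restricting segmentation to the portrait selection box is given below. The `segment_region` callable is a placeholder standing in for the portrait segmentation model; the box format and values are assumptions.

```python
import numpy as np

# Sketch: run segmentation only inside the portrait selection box and treat
# everything outside the box as non-portrait (pixel value 0).

def segment_within_box(image, box, segment_region):
    """image: HxWx3 array; box: (x, y, w, h) in pixels; segment_region: callable
    returning a binarized (0/255) mask for the cropped region."""
    h, w = image.shape[:2]
    mask = np.zeros((h, w), dtype=np.uint8)       # outside the box counts as non-portrait
    x, y, bw, bh = box
    crop = image[y:y + bh, x:x + bw]
    mask[y:y + bh, x:x + bw] = segment_region(crop)
    return mask

# Toy usage with a stand-in segmenter that marks the whole crop as portrait.
img = np.zeros((8, 8, 3), dtype=np.uint8)
mask = segment_within_box(img, (2, 2, 3, 4), lambda crop: np.full(crop.shape[:2], 255, np.uint8))
print(mask[3, 3], mask[0, 0])  # -> 255 0
```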
Through the above-described portrait segmentation, the electronic device 100 may change the acquired original image into a binarized image. In the binarized image, all pixel values of the area where the portrait is located are the same, and all pixel values of the area where the non-portrait is located are the same. But the pixel values of the areas where the portrait is located are different from the pixel values of the areas where the non-portrait is located. Then, the position where the pixel value changes in the binarized image is the edge point of the human body. All edge points of the human body in the image can form the outline of the human body.
Illustratively, fig. 6C shows pixel values of a binarized image portion area. The coordinate system shown in fig. 6C may represent a pixel coordinate system of an image. The region with the pixel value of 0 may represent the region where the non-portrait is located. The region with a pixel value of 255 may represent the region in which the portrait is located. From coordinate (1, 1) to coordinate (2, 1), the pixel value changes from 0 to 255. The electronic device 100 may determine the location between the coordinates (1, 1) and the coordinates (2, 1) as the demarcation of the area where the non-portrait is located and the area where the portrait is located. Likewise, the electronic device 100 may determine that the position between the coordinates (2, 2) and the coordinates (3, 2), and the position between the coordinates (3, 3) and the coordinates (4, 3) are all boundaries between the area where the non-portrait is located and the area where the portrait is located. The electronic device 100 may determine the pixels belonging to the region where the portrait is located at the boundary as the edge points of the human body. For example, the above (2, 1), (3, 2) and (4, 3) are all edge points. All edge points of the human body in the image are sequentially connected to form the human body outline. Alternatively, the electronic device 100 may determine the pixels belonging to the area where the non-portrait is located at the boundary as the edge points of the human body. For example, the above (1, 1), (2, 2) and (3, 3) are all edge points.
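The following sketch illustrates the row-wise scan described above: wherever the pixel value changes along a row, the portrait-side pixel is kept as an edge point. Treating coordinates as 0-based (x, y) array indices is an assumption made for the sketch.

```python
import numpy as np

# Sketch: extracting edge points from a binarized mask by scanning each row
# for value changes, keeping the portrait-side pixel at each change.

def edge_points(mask):
    """mask: 2D array with 255 for portrait pixels and 0 for non-portrait pixels."""
    points = []
    h, w = mask.shape
    for y in range(h):
        for x in range(1, w):
            left, right = mask[y, x - 1], mask[y, x]
            if left != right:                       # boundary between background and portrait
                points.append((x, y) if right == 255 else (x - 1, y))
    return list(dict.fromkeys(points))              # drop duplicates, keep order

mask = np.array([[0, 255, 255, 0],
                 [0, 0, 255, 0]], dtype=np.uint8)
print(edge_points(mask))  # -> [(1, 0), (2, 0), (2, 1)]
```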
In some embodiments, the electronic device 100 may obtain a data set through the above-described portrait segmentation. This data set contains pixel values corresponding to pixels at various locations in an image. This data set may be used to generate the binarized image described above.
In some embodiments, the portrait detection model, the human keypoint detection model, and the portrait segmentation model may be one model. That is, one image is input into one model, which can detect the region where the person image is located in the image, then perform the detection of the key points of the human body and the division of the person image, and output the position of the key points of the human body in the image and the binarized image (or the data set corresponding to the binarized image).
In some embodiments, the step of determining the user in the image for morphological evaluation is optional. That is, the electronic device 100 may not use the portrait detection model to determine the user who performs the morphological evaluation in the image. The electronic device 100 obtains one image, and can directly perform human body key point recognition and human image segmentation on the one image.
S620, judging whether the gesture of the user is a preset gesture evaluation gesture according to the human body key points and the human body outline.
The posture assessment gesture described above may refer to a gesture that facilitates posture assessment of a user by electronic device 100. In some embodiments, detection of some posture problems (e.g., high-low shoulders, X-legs, O-legs, etc.) requires a frontal view of the user. The preset posture evaluation gesture may be: facing the camera, the two arms keep a certain distance from the two sides of the body, and the two legs are naturally upright and are closed. The above-mentioned preset posture evaluation posture may also be: facing towards the camera, the two arms are straightened, the two arms keep a certain distance from the two sides of the body, and the two legs are naturally upright and are closed. In other embodiments, detection of some physical problems (e.g., humpback, neck extension, etc.) requires a side view of the user. The preset posture evaluation gesture may be: the side of the body faces the camera, and stands naturally. Not limited to the above-listed posture evaluation pose, the posture evaluation pose may be other.
In the posture evaluation posture described above, the natural standing of the legs may be expressed as: the knee joints of the two knees are naturally straightened, and the included angle between the lower leg and the thigh of the single leg in the two legs is smaller than a preset included angle threshold value. The natural standing may mean: the legs are naturally upright, the trunk is straight, and the relaxed state is maintained.
It can be seen that the posture assessment image for performing the posture assessment may comprise a front view and a side view of the user.
In this application, an implementation in which a front view of the user is collected and used for posture evaluation is described first, and an implementation in which a side view of the user is used for posture evaluation is described afterwards.
In some embodiments, the electronic device 100 may determine whether the gesture of the user is a preset posture evaluation gesture by using the binarized image and the human key points, instead of determining the human body contour from the binarized image. In the embodiment of the application, the gesture of the user is specifically determined by using the human body contour and the human body key points as an example.
In determining whether the gesture of the user is a posture evaluation gesture for collecting a front view, the contents of the determination by the electronic device 100 may include: whether the arms are straightened, whether the arms are kept at a certain distance from the two sides of the body, whether the legs are upright, and whether the legs are closed. The method of determining whether the posture of the user is the posture evaluation posture is specifically described below in connection with the human body key points and the human body contours shown in fig. 6D. Wherein:
(1) Whether the arms are straightened.
In one possible implementation, the electronic device 100 may determine whether the user's arms are straightened based on the key points of the human body on the user's arms. The following description will be given by taking an example of judging whether the right arm is straightened.
As shown in fig. 6D, the electronic device 100 can determine whether the right arm of the user is straightened by determining whether the right shoulder point A1, the right elbow point A2, and the right wrist point A3 are on a straight line. The electronic device 100 may calculate a line segment A1A2 between the right shoulder point and the right elbow point, a line segment A2A3 between the right elbow point and the right wrist point, and a line segment A1A3 between the right shoulder point and the right wrist point. In some embodiments, whether A1, A2, and A3 are on a straight line may be determined as follows: if the right shoulder point A1, the right elbow point A2, and the right wrist point A3 are on the same straight line, the sum of the length of the line segment A1A2 (Length(A1A2)) and the length of the line segment A2A3 (Length(A2A3)) equals the length of the line segment A1A3 (Length(A1A3)). That is, Length(A1A2) + Length(A2A3) = Length(A1A3). Then, the electronic device 100 may determine whether the value of Length(A1A2) + Length(A2A3) - Length(A1A3) is smaller than a preset threshold W1. The value of the threshold W1 may be, for example, 5 pixels, 6 pixels, or the like. The value of the threshold W1 is not limited in the embodiment of the present application. If Length(A1A2) + Length(A2A3) - Length(A1A3) is less than the threshold W1, the electronic device 100 may determine that the right arm is straightened. If Length(A1A2) + Length(A2A3) - Length(A1A3) is greater than or equal to the threshold W1, the electronic device 100 may prompt the user to straighten the arm.
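A minimal numeric sketch of this length-difference test follows. The coordinates and the value W1 = 5 pixels are assumptions chosen for illustration.

```python
import math

# Sketch: the right arm counts as straightened when
# Length(A1A2) + Length(A2A3) - Length(A1A3) < W1.

def arm_straightened(shoulder, elbow, wrist, w1=5.0):
    deviation = (math.dist(shoulder, elbow) + math.dist(elbow, wrist)
                 - math.dist(shoulder, wrist))
    return deviation < w1

A1, A2, A3 = (270, 100), (260, 180), (252, 260)   # right shoulder, elbow, wrist
print(arm_straightened(A1, A2, A3))  # -> True for these nearly collinear points
```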
Optionally, in other embodiments, the electronic device 100 may determine whether the difference between the angle formed at the right elbow point A2 by the right shoulder point A1, the right elbow point A2, and the right wrist point A3, that is, the angle A1A2A3 (hereinafter referred to as ∠A2), and 180° is smaller than a preset threshold W2. The threshold W2 may be, for example, 5°, 6°, or the like. The value of the threshold W2 is not limited in the embodiment of the present application. The magnitude of ∠A2 may be denoted α1. If the difference between ∠A2 and 180° is less than the threshold W2, the electronic device 100 may determine that the right arm is straightened. If the difference between ∠A2 and 180° is greater than or equal to the threshold W2, the electronic device 100 may prompt the user to straighten the arm. Not limited to the above-listed methods, the electronic device 100 may also determine whether the user's right arm is straightened by other methods.
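For illustration, a sketch of the angle-based variant is given below; the coordinates and W2 = 5° are assumed values.

```python
import math

# Sketch: measure the elbow angle ∠A1A2A3 and compare its deviation from 180°
# with a threshold W2.

def angle_at(vertex, p, q):
    v1 = (p[0] - vertex[0], p[1] - vertex[1])
    v2 = (q[0] - vertex[0], q[1] - vertex[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    norm = math.hypot(*v1) * math.hypot(*v2)
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

A1, A2, A3 = (270, 100), (260, 180), (252, 260)   # right shoulder, elbow, wrist
elbow_angle = angle_at(A2, A1, A3)
print(abs(elbow_angle - 180) < 5)  # -> True: the arm counts as straightened
```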
Optionally, the electronic device 100 may further determine whether the right arm of the user is straightened according to the curve of the right arm on the contour of the human body. For example, the electronic device 100 may determine the curve of the right arm on the contour of the human body based on the right shoulder point, the right elbow point, and the right wrist point. The electronic device 100 may calculate the curvature of the curve of the right arm described above. If the maximum curvature of the right arm curve is smaller than the preset curvature threshold, the electronic device 100 may determine that the right arm of the user is straightened. Otherwise, the electronic device 100 may determine that the right arm of the user is not straightened.
It will be appreciated that the method of determining whether the left arm straightens may be the same as the method of determining whether the right arm straightens. The method for judging whether the left arm is straightened is not expanded.
(2) Whether the arms are at a distance from both sides of the body.
In one possible implementation, the electronic device 100 may determine whether the arms are kept at a certain distance from both sides of the body according to the angle between the straight line where the arms are located and the median line of the trunk. The following description will be given by taking an example of determining whether the right arm is kept at a certain distance from the right side of the body.
As shown in fig. 6D, the straight line where the right arm lies may be the straight line A3A4 through the neck point A4 and the right wrist point A3. The midline of the torso may be the straight line A4A5 through the neck point A4 and the midpoint A5 of the left and right hip points. The angle between the straight line A3A4 and the straight line A4A5 is α2. The electronic device 100 may determine whether α2 is within a preset threshold range W3. The threshold range W3 may be, for example, [30°, 50°], [35°, 50°], or the like. The value of the threshold range W3 is not limited in the embodiment of the present application. If α2 is within the preset threshold range W3, the electronic device 100 may determine that the right arm is kept a certain distance from the right side of the body. If α2 is not within the preset threshold range W3 and is smaller than the minimum value of the range, the electronic device 100 may prompt the user to raise the right arm. If α2 is not within the preset threshold range W3 and is greater than the maximum value of the range, the electronic device 100 may prompt the user to lower the arm.
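A minimal sketch of this arm-to-torso angle test follows. The coordinates and the range W3 = [30°, 50°] are illustrative assumptions.

```python
import math

# Sketch: α2 is the angle between the line through the right wrist A3 and the
# neck A4, and the torso midline through A4 and the hip midpoint A5.

def angle_between(p1, p2, q1, q2):
    v1 = (p2[0] - p1[0], p2[1] - p1[1])
    v2 = (q2[0] - q1[0], q2[1] - q1[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    norm = math.hypot(*v1) * math.hypot(*v2)
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

A3 = (200, 230)   # right wrist
A4 = (320, 90)    # neck
A5 = (320, 300)   # midpoint of left and right hip points
alpha2 = angle_between(A4, A3, A4, A5)
if 30 <= alpha2 <= 50:
    print("right arm kept at a distance from the body")
elif alpha2 < 30:
    print("prompt: raise the right arm")
else:
    print("prompt: lower the right arm")
```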
Alternatively, the electronic device 100 may simply determine whether α2 is greater than a preset included angle (e.g., 30 °, 35 °, etc.). If α2 is greater than the preset angle, the electronic device 100 may determine that the right arm is at a certain distance from both sides of the body. Otherwise, the electronic device 100 may prompt the user to raise the right arm.
Alternatively, the straight line of the right arm may be a straight line A1A3 of the right shoulder point A1 and the right wrist point A3. The electronic device 100 can determine whether the right arm is kept a certain distance from the right side of the body using the straight line A1A3 and the straight line A4 A5. Reference may be made to the description of the previous embodiments for specific methods.
It will be appreciated that the method of determining whether the left arm is at a distance from the left side of the body may be the same as the method of determining whether the right arm is at a distance from the right side of the body. The method of whether the left arm is at a distance from the left side of the body is not described in detail.
In another possible implementation, the electronic device 100 may determine whether the user's arms are kept a certain distance from both sides of the body according to the key points of the human body on the user's arms and the edge points on the contour of the human body.
As shown in fig. 6D, the electronic device 100 may calculate the number of intersections between the straight line A3A12 through the right wrist point A3 and the left wrist point A12 and the contour of the human body. These intersections are also edge points on the human body contour. If the user's arms are kept at a certain distance from the two sides of the body, the straight line A3A12 has 6 intersection points with the human body contour. These 6 intersections include: two edge points on the left arm contour, two edge points on the waist contour, and two edge points on the right arm contour. If the user's arms are pressed against the two sides of the body, there are no edge points between the arms and the waist, and the number of intersections between the straight line A3A12 and the human body contour is less than 6. The electronic device 100 may determine whether the number of intersections is 6. If the number of intersections is 6, the electronic device 100 may determine that the user's arms are kept at a certain distance from both sides of the body. If the number of intersections is less than 6, the electronic device 100 may prompt the user to raise the arms.
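The sketch below illustrates the same idea on a single row of the binarized mask: with the arms held away from the body, the wrist-to-wrist row crosses three separate portrait segments (right arm, torso, left arm), i.e. six contour intersection points. Working directly on one mask row is a simplifying assumption compared with intersecting the line with the contour.

```python
import numpy as np

# Sketch: count how many separate portrait segments the wrist-to-wrist row crosses.

def portrait_segments_on_row(mask, y):
    row = (np.asarray(mask[y]) > 0).astype(np.int8)
    changes = np.diff(np.concatenate(([0], row, [0])))
    return int(np.sum(changes == 1))              # number of background -> portrait transitions

row_arms_apart = [0, 255, 255, 0, 0, 255, 255, 255, 0, 0, 255, 255, 0]
row_arms_close = [0, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 0, 0]
print(portrait_segments_on_row([row_arms_apart], 0))  # -> 3 (arms away from the body)
print(portrait_segments_on_row([row_arms_close], 0))  # -> 1 (arms against the body)
```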
In the posture evaluation, keeping the arms at a certain distance from both sides of the body can enable the electronic device 100 to separate the limbs and trunk of the human body when the human image is divided, and obtain the edge points on both sides of the waist, so that the posture evaluation can be better performed.
(3) Whether the legs are upright.
In one possible implementation, the electronic device 100 may determine whether the user's legs are straightened based on key points on the user's legs. The following description will be given by taking an example of judging whether the right leg is straightened.
As shown in fig. 6D, the electronic device 100 may calculate a line segment A6A7 between the right hip point A6 and the right knee point A7, a line segment A7A8 between the right knee point A7 and the right ankle point A8, and a line segment A6A8 between the right hip point A6 and the right ankle point A8. The electronic device 100 may determine whether the difference obtained by subtracting the length of the line segment A6A8 (Length(A6A8)) from the sum of the length of the line segment A6A7 (Length(A6A7)) and the length of the line segment A7A8 (Length(A7A8)), i.e. Length(A6A7) + Length(A7A8) - Length(A6A8), is smaller than a preset threshold W4. If it is smaller than the threshold W4, the electronic device 100 may determine that the right leg is upright; otherwise, the electronic device 100 may prompt the user to straighten the leg. The value of the threshold W4 may be, for example, 5 pixels, 6 pixels, or the like.
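This is the same length-difference test used for the arms, now applied to the right hip A6, right knee A7 and right ankle A8. A short self-contained sketch, with assumed coordinates and W4 = 5 pixels:

```python
import math

# Sketch: the right leg counts as upright when
# Length(A6A7) + Length(A7A8) - Length(A6A8) < W4.

def collinear(p, mid, q, threshold=5.0):
    return math.dist(p, mid) + math.dist(mid, q) - math.dist(p, q) < threshold

A6, A7, A8 = (290, 300), (287, 420), (285, 540)   # right hip, knee, ankle
print(collinear(A6, A7, A8))  # -> True: the right leg counts as upright
```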
It will be appreciated that the method of determining whether the left leg is upright may be the same as the method of determining whether the right leg is upright. The method for judging whether the left leg is upright is not expanded.
(4) Whether the legs are closed.
In one possible implementation, the electronic device 100 may determine whether the legs of the user are close according to the key points of the human body and the edge points on the contour of the human body.
As shown in fig. 6D, the electronic device 100 may determine an intersection point of a straight line A7A9 where the right knee point A7 and the left knee point A9 are located and the inner sides of both legs on the contour of the human body. For example, the intersection point of the straight line A7A9 with the inner side of the left leg on the human body contour is B1, and the intersection point with the inner side of the right leg on the human body contour is B2. The electronic device 100 may determine an intersection point of the straight line A8A10 where the right ankle point A8 and the left ankle point A10 are located and the inner sides of both legs on the contour of the human body. For example, the intersection point of the straight line A8A10 with the inner side of the left leg on the human body contour is B3, and the intersection point with the inner side of the right leg on the human body contour is B4. The intersection points B1, B2, B3, and B4 are edge points on the human body contour.
As can be seen from the above description of different leg shapes in the closed-leg state, when the two legs are closed, at least one of the knee pair and the ankle pair can be brought together. Then, the electronic device 100 may determine the length of the line segment B1B2 between the intersections B1 and B2 and of the line segment B3B4 between the intersections B3 and B4. If the length of at least one of the line segments B1B2 and B3B4 is smaller than a preset threshold W5, the electronic device 100 may determine that the legs of the user are closed together. The threshold W5 may be, for example, 5 pixels, 6 pixels, or the like. The value of the threshold W5 is not limited in the embodiment of the present application. If the lengths of both line segments B1B2 and B3B4 are greater than or equal to the threshold W5, the electronic device 100 may prompt the user to close the legs together.
Here, the knees (or ankles) being close together may include the knees (or ankles) touching. For example, when the knees touch, the human body contour obtained by portrait segmentation may have no edge points at the knee joints on the inner sides of the legs, i.e. the straight line A7A9 has no intersection with the inner sides of the legs on the human body contour. Similarly, when the ankles touch, the straight line A8A10 has no intersection with the inner sides of the legs on the human body contour. Then, if at least one of the straight lines A7A9 and A8A10 has no intersection with the inner sides of the legs on the human body contour, the electronic device 100 may determine that the legs of the user are closed.
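A compact sketch of this legs-closed test follows, covering both the "pair closer than W5" case and the "no intersection because the legs touch" case. The example points and W5 = 5 pixels are assumptions.

```python
import math

# Sketch: B1/B2 are the inner-leg contour points on the knee line, B3/B4 those on
# the ankle line. A pair given as None means the line had no intersection there
# (knees or ankles touch), which also counts as closed.

def legs_closed(knee_pair, ankle_pair, w5=5.0):
    for pair in (knee_pair, ankle_pair):
        if pair is None:                     # no intersection: knees or ankles touch
            return True
        if math.dist(*pair) < w5:
            return True
    return False

print(legs_closed(((318, 420), (322, 420)), ((316, 540), (324, 540))))  # -> True (knees nearly touch)
print(legs_closed(((300, 420), (345, 420)), ((295, 540), (350, 540))))  # -> False: prompt user to close legs
```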
In some embodiments, the electronic device 100 may also determine whether the acquired image is a front view of the user. For example, the electronic device 100 may determine whether the user is facing the camera. In one possible implementation, the electronic device 100 may determine whether the user is facing the camera by detecting a face region in the image. If the face of the user performing the morphological evaluation in the image contains face key points such as left eye, right eye, lips and the like, the electronic device can determine that the user faces towards the camera. Otherwise, the electronic device 100 may prompt the user to adjust the orientation of the standing, guiding the user to stand facing the camera. For example, when the face region contains the left eye and not the right eye, the electronic device 100 may prompt the user to turn left. Optionally, the electronic device 100 may further determine whether a distance between the left eye and the right eye on the face of the user performing the morphological evaluation in the image is less than an inter-eye distance threshold. If the distance between the left and right eyes is less than the interocular distance threshold, electronic device 100 may determine that the user is not standing facing the camera. The electronic device 100 may prompt the user to adjust the orientation of the standing, guiding the user to stand facing the camera. Optionally, the electronic device 100 may further determine whether the user stands facing the camera according to the left shoulder point and the right shoulder point of the user. For example, the electronic device 100 may determine whether the distance between the left shoulder point and the right shoulder point is less than a shoulder width threshold. If the distance between the left shoulder point and the right shoulder point is less than the shoulder width threshold, the electronic device 100 may determine that the user is not standing facing the camera, thereby prompting the user to adjust the standing orientation and guiding the user to stand facing the camera. The method for judging whether the user faces the camera is not limited.
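As a small illustration of one of these orientation checks, the sketch below compares the left-to-right shoulder distance with a shoulder-width threshold. The threshold of 80 pixels and the coordinates are assumptions; in practice such a threshold would likely be scaled to the person's size in the image.

```python
import math

# Sketch: the user is treated as facing the camera when the shoulder points are
# at least a shoulder-width threshold apart.

def facing_camera(left_shoulder, right_shoulder, shoulder_width_threshold=80.0):
    return math.dist(left_shoulder, right_shoulder) >= shoulder_width_threshold

print(facing_camera((370, 100), (270, 100)))  # -> True: shoulders span 100 px
print(facing_camera((330, 100), (290, 100)))  # -> False: prompt user to face the camera
```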
The above determination of whether the user's arms are straightened is optional, that is, in some embodiments, in determining whether the user's posture is a preset posture evaluation posture, the electronic device 100 may determine only whether the user's arms are kept a certain distance from both sides of the body, whether the legs are upright, and whether the legs are close together.
It will be appreciated that, to detect certain posture problems of the user (e.g., high-low shoulders, X-legs, O-legs, etc.), the electronic device 100 may determine whether the user's posture satisfies the following: the user faces the camera, the arms are kept at a certain distance from both sides of the body, and the legs are upright and closed together. The gesture determination method provided by the embodiment of the application is not limited to determining whether the user's gesture is the preset posture evaluation gesture as a whole. For example, the electronic device 100 may use the human body key points and human body contour to determine only whether the user's arms are kept at a certain distance from both sides of the body. For another example, the electronic device 100 may use the human body key points and human body contour to determine only whether the user's legs are upright and closed. The gesture judging method combining the human body key points and the human body contour can improve the accuracy of gesture judgment.
S630, if the posture of the user is the preset posture evaluation posture, taking an image in which the user maintains the posture evaluation posture as a posture evaluation image, where the posture evaluation image is used for posture evaluation.
In one possible implementation, when determining that the posture of the user is a preset posture evaluation posture, the electronic device 100 may prompt the user to maintain the posture evaluation posture, and collect an image with the camera. The image acquired after prompting the user to maintain the posture of the posture assessment may be the posture assessment image.
In another possible implementation manner, if the electronic device 100 determines that the posture of the user is the preset posture assessment posture by using the image acquired in step S610, the electronic device 100 may use the image determined that the posture of the user is the posture assessment posture as the posture assessment image. In this way, after determining that the posture of the user is the preset posture evaluation posture, the electronic device 100 may not acquire the image again.
S640, if the posture of the user is not the preset posture assessment posture, prompting the user to adjust the posture according to the difference between the posture of the user and the posture assessment posture.
Fig. 7A to 7D schematically illustrate a scenario in which the electronic device 100 prompts the user to adjust the gesture.
In some embodiments, upon receiving a user operation to perform a posture assessment, the electronic device 100 may display the user interface 710 shown in fig. 7A. The user interface 710 may include a display area 711 and a display area 712. Wherein:
The display area 711 may be used to display an example diagram 711A of the posture assessment pose, as well as an action prompt 711B. For example, the content of the action prompt 711B may be: please straighten the arms, let the arms hang down while keeping a certain distance from the two sides of the body, and bring the legs together. In this way, the user can adjust his or her posture according to the example diagram 711A and the action prompt 711B to reach the preset posture evaluation posture.
The display area 712 may be used to display images captured by the electronic device 100 via the camera 193. The image may include a user performing a morphological assessment.
As can be seen from the display area 712 shown in fig. 7A, the posture of the user is different from the posture evaluation posture. Wherein the user's arms are not straightened and the legs are not closed. The electronic apparatus 100 may determine the difference of the posture of the user from the posture evaluation posture according to the method in step S620 described above, and display the action prompt 711B in the display area 711. The electronic device 100 may also voice broadcast the content in the action prompt 711B to guide the user to adjust the posture to the posture assessment posture.
The electronic device 100 may continuously acquire an image of the user and determine whether the posture of the user is the posture evaluation posture before the posture evaluation image is obtained. The electronic device 100 may perform image capturing on the user at preset time intervals (such as 1 second, 2 seconds, etc.). If the image acquired for the first time is used for judging that the posture of the user is not the posture assessment posture, the electronic equipment can acquire the image of the user again (namely for the second time) and judge whether the posture of the user is the posture assessment posture. That is, the electronic apparatus 100 may circularly perform step S610, step S620, and step S640 until it is determined that the posture of the user is the posture evaluation posture.
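The capture-judge-prompt loop can be summarized with the following sketch. The `capture_image`, `judge_posture` and `prompt` callables are placeholders standing in for the steps described above, not real device APIs; the 1-second interval is one of the example values mentioned.

```python
import time

# Sketch of the S610/S620/S640 loop: capture at a fixed interval, judge the
# posture, prompt on differences, and stop once the posture evaluation posture
# is reached.

def acquire_posture_assessment_image(capture_image, judge_posture, prompt, interval_s=1.0):
    while True:
        image = capture_image()                       # S610: capture and analyse
        ok, differences = judge_posture(image)        # S620: compare with the target posture
        if ok:
            prompt("Please hold this pose; a photo is about to be taken.")
            return capture_image()                    # the posture assessment image
        prompt(differences)                           # S640: e.g. "please close the legs together"
        time.sleep(interval_s)
```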
Illustratively, the user adjusts the posture according to the action prompt 711B shown in fig. 7A. The electronic device 100 may display the user interface 710 shown in fig. 7B. As can be seen from the display area 712 shown in fig. 7B, the user adjusts the posture from the bent arms shown in fig. 7A to straightened arms held close to both sides of the body. The electronic device 100 determines, according to the method in step S620, that the user's arms are not kept at a distance from both sides of the body and the legs are not closed. That is, the posture of the user is still not the posture assessment posture. The electronic device 100 may display the action prompt 711C shown in fig. 7B. The content of the action prompt 711C may be: please keep the arms at a certain distance from the two sides of the body and close the legs. The electronic device 100 may also voice broadcast the content of the action prompt 711C.
Illustratively, the user adjusts the posture according to the action prompt 711C. The electronic device 100 may display the user interface 710 shown in fig. 7C. As can be seen from the display area 712 shown in fig. 7C, the user adjusts the posture from the arms held close to the body shown in fig. 7B to straightened arms kept at a distance from the body. However, the user's legs are still not closed. That is, the posture of the user is still not the posture assessment posture. The electronic device 100 may display the action prompt 711D shown in fig. 7C. The content of the action prompt 711D may be: please close the legs together. The electronic device 100 may also voice broadcast the content of the action prompt 711D.
Illustratively, the user adjusts the posture according to the action prompt 711D. The electronic device 100 may display the user interface 710 shown in fig. 7D. As can be seen from the display area 712 shown in fig. 7D, the user adjusts the posture from the open legs shown in fig. 7C to closed legs. The electronic device 100 determines, according to the method in step S620 described above, that the posture of the user is the posture assessment posture. The electronic device 100 may display the action prompt 711E shown in fig. 7D to prompt the user to maintain the posture assessment posture. The content of the action prompt 711E may be: please hold the pose; the photo is about to be taken. The electronic device 100 may also voice broadcast the content of the action prompt 711E. After prompting the user to maintain the posture assessment posture, the electronic device 100 may capture an image and use the captured image as the posture assessment image.
As can be seen from the embodiments shown in fig. 7A to 7D, the prompting content with which the electronic device 100 prompts the user to adjust the posture may change along with the change of the user's posture. The electronic device 100 may prompt the user about where the user's posture differs from the posture assessment posture. This lets the user know which part of his or her posture differs from the posture assessment posture, thereby better guiding the user to complete the action corresponding to the posture assessment posture.
The user interface 710 shown in fig. 7A to 7D is merely exemplary, and should not be construed as limiting the present application. In some embodiments, the electronic device 100 may also display only the display area 711 on the display screen. I.e., the electronic device 100 may not display the user's image on the display screen.
In some embodiments, the posture assessment image may also be acquired by other electronic devices, such as electronic device 200. A communication connection is established between the electronic device 200 and the electronic device 100. After the electronic device 200 obtains the posture assessment image, the posture assessment image may be transmitted to the electronic device 100. The electronic device 100 may utilize the posture assessment image for posture assessment.
In the posture assessment method provided in the present application, the electronic device 100 may determine one or more edge points on the human body contour using the human body key points. The one or more edge points may be used to determine whether the user's posture is a preset posture assessment posture, and may also be used to perform a posture assessment.
The method for determining one or more edge points on a human body contour using human body key points provided in the present application is described in detail below.
As can be seen from the foregoing embodiment of human body key point recognition, the electronic device 100 can obtain the coordinates of each human body key point of the user in the image in the pixel coordinate system. As can be seen from the foregoing embodiments of portrait segmentation, the electronic device 100 may obtain a data set. This data set may include pixel values corresponding to pixels at various locations on the image in the pixel coordinate system. The pixel values of the region where the portrait of the user is located in the image are all one value (e.g., 255). The pixel values of the areas in the image where the user's portrait is not located are all another value (e.g., 0). That is, the pixel value of the region in which the portrait of the user is located differs from the pixel value of the regions in which the portrait is not located.
In one possible implementation, the electronic device 100 may determine all edge points on the human body contour from the data set. Reference may be made to the description of the embodiment shown in fig. 6C. The electronic device 100 may determine one or more edge points on the body contour using the straight line on which a human body key point is located. The human body key point A is taken as an example for illustration; the key point A may be any one of the key points shown in fig. 2. Specifically, the electronic device 100 determines the expression of the straight line L1 on which the human body key point A is located in the pixel coordinate system according to the position of the human body key point A in the pixel coordinate system. Then, the electronic device 100 can determine which pixels on the line L1 are edge points on the human body contour. In this way, the electronic device 100 can obtain the position of the intersection of the straight line L1 and the human body contour in the pixel coordinate system. The intersection point is an edge point determined by the electronic device 100 on the human body contour using the human body key point A.
In another possible implementation, the electronic device 100 may determine the expression of the straight line L1 on which the human body key point A is located in the pixel coordinate system according to the position of the human body key point A in the pixel coordinate system. Further, the electronic device 100 may determine at which pixels on the line L1 the pixel values in the data set change (e.g., from 0 to 255, or from 255 to 0). The electronic device 100 may determine a pixel on the straight line L1 at which the pixel value in the data set changes as an edge point. Such an edge point is equivalent to an intersection point of the straight line L1 and the human body contour, i.e., an edge point determined on the human body contour by the human body key point A. In this way, the electronic device 100 may obtain the position, in the pixel coordinate system, of the edge point determined on the human body contour by the human body key point A.
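As an illustration of the second implementation, the following sketch scans one row of a binary segmentation mask for pixel-value changes. It assumes the data set is a NumPy array with 255 inside the portrait and 0 elsewhere, and that the straight line through the key point happens to be horizontal; the function name and mask layout are assumptions, not the actual data structures of the electronic device 100.

```python
import numpy as np

def edge_points_on_horizontal_line(mask: np.ndarray, keypoint_y: int):
    """Return (x, y) positions where the mask value changes along the row of a key point.

    mask: H x W array with 255 inside the portrait and 0 outside.
    keypoint_y: y coordinate (row index) of the human body key point.
    Each returned position marks where the row crosses the human body contour.
    """
    row = mask[keypoint_y, :]
    # positions where consecutive pixel values differ (0 -> 255 or 255 -> 0)
    changes = np.nonzero(row[1:] != row[:-1])[0] + 1
    return [(int(x), int(keypoint_y)) for x in changes]

# Example: a 1 x 10 "image" whose portrait spans columns 3..6
mask = np.zeros((1, 10), dtype=np.uint8)
mask[0, 3:7] = 255
print(edge_points_on_horizontal_line(mask, 0))  # [(3, 0), (7, 0)]
```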
The embodiment of the present application does not limit the implementation method by which the electronic device determines the edge points on the human body contour using the human body key points.
In the embodiments of the present application, the position of the intersection point of the straight line on which a human body key point lies and the human body contour can be determined by the above implementation methods. This will not be described in detail again in the following embodiments.
The following describes in detail a method for implementing the electronic device 100 using a posture evaluation image to evaluate whether a user has a high or low shoulder, an X-leg, an O-leg, an XO-leg, and a scoliosis.
(1) High-low shoulder
In one possible implementation, the electronic device 100 may determine whether the user has a high or low shoulder using edge points determined on the human body contour by the left shoulder point and the right shoulder point.
For example, as shown in fig. 8A, the electronic device 100 may determine a straight line L2 passing through the right shoulder point and perpendicular to the x-axis of the pixel coordinate system. The intersection point of the straight line L2 and the right shoulder part on the human body contour is C1. The intersection point C1 is an edge point on the straight line L2 whose y value in the pixel coordinate system is greater than the y value of the right shoulder point and which is closest to the right shoulder point. The electronic device 100 may determine a straight line L3 passing through the left shoulder point and perpendicular to the x-axis of the pixel coordinate system. The intersection point of the straight line L3 and the left shoulder part on the human body contour is C2. The intersection point C2 is an edge point on the straight line L3 whose y value in the pixel coordinate system is greater than the y value of the left shoulder point and which is closest to the left shoulder point.
The electronic device 100 may determine whether an included angle between a straight line C1C2 where the intersection point C1 and the intersection point C2 are located and a straight line in a horizontal direction (i.e., a straight line perpendicular to a y-axis of the pixel coordinate system) is smaller than a preset threshold W6. The threshold W6 may be, for example, 5 °, 6 °, or the like. The value of the threshold W6 is not limited in the embodiment of the present application. The angle between the straight line C1C2 and the straight line in the horizontal direction is θ2. If the position of the intersection C1 in the pixel coordinate system is (Xc 1, yc 1) and the position of the intersection C2 in the pixel coordinate system is (Xc 2, yc 2), θ2=arctan [ (Yc 2-Yc 1)/(Xc 2-Xc 1) ]. If the absolute value of θ2 is less than the threshold W6, the electronic device 100 can determine that the user has no high or low shoulders. If the absolute value of θ2 is greater than or equal to the threshold W6, the electronic device 100 may determine that the user has a high shoulder or a low shoulder, and may determine whether the user is high on the left shoulder or high on the right shoulder according to the positive or negative value of θ2. For example, θ2 being negative may indicate that the user's right shoulder is higher than the left shoulder. θ2 being positive may indicate that the user's left shoulder is higher than the right shoulder. The electronic device 100 may also determine the severity of the user's high and low shoulders based on the magnitude of the absolute value of θ2. The greater the absolute value of θ2, the more serious the physical problem of the user's high and low shoulders.
Alternatively, the electronic device 100 may compare the y-axis values of the positions of the intersection C1 and the intersection C2 in the pixel coordinate system. For example, the position of the intersection C1 in the pixel coordinate system is (Xc1, Yc1), and the position of the intersection C2 in the pixel coordinate system is (Xc2, Yc2). The electronic device 100 may determine whether |Yc1-Yc2| is less than a preset threshold W7, where |Yc1-Yc2| denotes the absolute value of Yc1-Yc2. The threshold W7 may be, for example, 5 pixels, 6 pixels, or the like. The value of the threshold W7 is not limited in the embodiment of the present application. If |Yc1-Yc2| is less than the threshold W7, the electronic device 100 may determine that the user has no high or low shoulders. If |Yc1-Yc2| is greater than or equal to the threshold W7, the electronic device 100 may determine that the user has high and low shoulders. The electronic device 100 may determine whether the user's left shoulder or right shoulder is higher according to which of Yc1 and Yc2 is larger. For example, if Yc1 is greater than Yc2, the user's right shoulder is higher than the left shoulder; otherwise, the user's left shoulder is higher than the right shoulder. The electronic device 100 may also determine the severity of the user's high and low shoulders from the magnitude of |Yc1-Yc2|. The greater |Yc1-Yc2| is, the more serious the physical problem of the user's high and low shoulders.
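Both checks can be written compactly as follows. The thresholds stand in for W6 and W7, the sign conventions follow the description above, and the axis orientation of the pixel coordinate system is an assumption; this is only an illustrative sketch.

```python
import math

ANGLE_THRESHOLD_DEG = 5.0   # threshold W6 (illustrative)
Y_DIFF_THRESHOLD_PX = 5     # threshold W7 (illustrative)

def shoulder_angle_check(c1, c2):
    """Return (has_issue, higher_side) using the angle between line C1C2 and the horizontal."""
    (x1, y1), (x2, y2) = c1, c2
    theta2 = math.degrees(math.atan2(y2 - y1, x2 - x1))
    if abs(theta2) < ANGLE_THRESHOLD_DEG:
        return False, None
    # sign convention follows the description: negative theta2 -> right shoulder higher
    return True, "right" if theta2 < 0 else "left"

def shoulder_ydiff_check(c1, c2):
    """Return (has_issue, higher_side) using |Yc1 - Yc2| directly."""
    (_, y1), (_, y2) = c1, c2
    if abs(y1 - y2) < Y_DIFF_THRESHOLD_PX:
        return False, None
    # convention follows the description: Yc1 > Yc2 -> right shoulder higher
    return True, "right" if y1 > y2 else "left"
```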
In some embodiments, the electronic device 100 may display the high-low shoulder evaluation result schematic diagram shown in fig. 8B in the evaluation results of the high-low shoulder. In the high-low shoulder evaluation result diagram, the electronic device 100 may display the intersection C1, the intersection C2, and the straight line C1C2 on the human body image of the user. The straight line C1C2 may represent the direction of the user's shoulders. The electronic device 100 may also display a straight line in a horizontal direction in the human body image of the user. The straight line in the horizontal direction can represent the direction of the shoulders of the human body in a normal posture (i.e., without high and low shoulders). The straight line C1C2 and the straight line in the horizontal direction can facilitate the user to visually check whether the user has high and low shoulders or not and the severity of the high and low shoulders.
As can be seen from the above embodiments, the electronic device 100 may determine the edge points at the shoulders of the human body contour according to the key points of the human body, and compare the directions of the shoulders of the user performing the posture evaluation with the directions of the shoulders of the human body in the normal posture by using the edge points, so as to obtain the evaluation result of the shoulders of the user. Because the positions of the edge points are determined, the method can reduce the influence of the detected position floating of the key points of the human body on the high-low shoulder evaluation result and improve the accuracy of the high-low shoulder evaluation result.
(2) X-shaped leg, O-shaped leg and XO-shaped leg
In one possible implementation, the electronic device 100 may determine the user's leg shape using edge points determined on the body contour by key points of the leg.
For example, as shown in fig. 8C, the electronic device 100 may determine a straight line L4 in which the right knee point and the left knee point are located. The intersection point of the straight line L4 and the inner side of the left leg on the human body contour is B1, and the intersection point of the straight line L4 and the inner side of the right leg on the human body contour is B2. The intersection B1 is an edge point on the straight line L4, where the x value in the pixel coordinate system is smaller than the x value of the left knee point, larger than the x value of the right knee point, and closest to the left knee point. The intersection B2 is an edge point on the straight line L4, where the x value in the pixel coordinate system is smaller than the x value of the left knee point, larger than the x value of the right knee point, and closest to the right knee point. The electronic device 100 may determine a straight line L5 where the right ankle point and the left ankle point are located. The intersection point of the straight line L5 and the inner side of the left leg on the human body contour is B3, and the intersection point of the straight line L5 and the inner side of the right leg on the human body contour is B4. The intersection B3 is an edge point on the straight line L5, where the x value in the pixel coordinate system is smaller than the x value of the left ankle point, larger than the x value of the right ankle point, and closest to the left ankle point. The intersection B4 is an edge point on the straight line L5, where the x value in the pixel coordinate system is smaller than the x value of the left ankle point, larger than the x value of the right ankle point, and closest to the right ankle point.
The electronic device 100 may calculate the length len(B1B2) of the line segment B1B2 between the intersection B1 and the intersection B2, the length len(B3B4) of the line segment B3B4 between the intersection B3 and the intersection B4, and the length len(B1B3) of the line segment B1B3 between the intersection B1 and the intersection B3.
In some embodiments, if the user's knees touch each other so that the straight line L4 has no intersection with the inner sides of the legs on the human body contour in the posture assessment image, the value of len(B1B2) is 0. If the user's inner ankles touch each other so that the straight line L5 has no intersection with the inner sides of the legs on the human body contour, the value of len(B3B4) is 0.
Electronic device 100 may determine whether the value of len(B1B2)/len(B1B3) is greater than a preset threshold W8. The threshold W8 may be, for example, 0.1, 0.15, or the like. The value of the threshold W8 is not limited in the embodiment of the present application. If len(B1B2)/len(B1B3) is greater than the threshold W8, the electronic device 100 may determine that the user's leg shape is an O-type leg. It will be appreciated that len(B1B2)/len(B1B3) being greater than the threshold W8 may indicate that the user's knees are separated and cannot come close together when the legs stand naturally upright and closed, i.e. the user's leg shape is an O-type leg. The greater len(B1B2)/len(B1B3) is, the more severe the user's O-type leg is. len(B1B2)/len(B1B3) being less than or equal to the threshold W8 may indicate that the user's knees can come close together when the legs stand naturally upright and closed, i.e. the user's leg shape is not an O-type leg.
If len(B1B2)/len(B1B3) is less than or equal to the threshold W8, the electronic device 100 may determine whether len(B3B4)/len(B1B3) is greater than a threshold W9. The threshold W9 may be, for example, 0.1, 0.15, or the like. The value of the threshold W9 is not limited in the embodiment of the present application. If len(B3B4)/len(B1B3) is greater than the threshold W9, the electronic device 100 may determine that the user's leg type is an X-type leg. It will be appreciated that len(B3B4)/len(B1B3) being greater than the threshold W9 may indicate that the user's inner ankles are separated and cannot come close together when the legs stand naturally upright and closed, i.e. the user's leg shape is an X-type leg. The greater len(B3B4)/len(B1B3) is, the more severe the user's X-type leg is. len(B3B4)/len(B1B3) being less than or equal to the threshold W9 may indicate that the user's inner ankles can come close together when the legs stand naturally upright and closed, i.e. the user's leg type is not an X-type leg.
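The two ratio tests can be sketched as follows; the thresholds stand in for W8 and W9 and the function name is illustrative. The case in which both ratios are small is refined by the XO-leg check described next.

```python
O_LEG_THRESHOLD = 0.1   # threshold W8 (illustrative)
X_LEG_THRESHOLD = 0.1   # threshold W9 (illustrative)

def classify_o_or_x_leg(len_b1b2, len_b3b4, len_b1b3):
    """Classify O-leg / X-leg from the normalized knee and ankle gaps.

    len_b1b2: distance between the inner-knee edge points B1 and B2 (0 if the knees touch)
    len_b3b4: distance between the inner-ankle edge points B3 and B4 (0 if the ankles touch)
    len_b1b3: distance between B1 and B3, used for normalization
    """
    if len_b1b2 / len_b1b3 > O_LEG_THRESHOLD:
        return "O-type leg"
    if len_b3b4 / len_b1b3 > X_LEG_THRESHOLD:
        return "X-type leg"
    return "knees and ankles close"  # normal or XO-type leg; refined by the calf check below
```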
With the legs naturally upright and closed, if the user's knees can come close together and the inner ankles can come close together, the user's legs may be normal or XO-type. Further, if len(B3B4)/len(B1B3) is less than or equal to the threshold W9, the electronic device 100 may determine whether the user's left and right lower legs are separated and cannot come close together. Specifically, the electronic device 100 may trisect the line segment between the left knee point and the left ankle point and determine the trisection points of the line segment (trisection point B5 and trisection point B7 as shown in fig. 8C). The electronic device 100 may trisect the line segment between the right knee point and the right ankle point and determine the trisection points of the line segment (trisection point B6 and trisection point B8 shown in fig. 8C). The electronic device 100 may determine a straight line L6 on which the trisection point B5 and the trisection point B7 are located, and a straight line L7 on which the trisection point B6 and the trisection point B8 are located.
The electronic device 100 may determine the set of edge points a on the inner side of the left leg on the human contour and the set of edge points B on the inner side of the right leg on the human contour between the straight line L6 and the straight line L7. The edge point group a and the edge point group B may each include a plurality of edge points. The electronic device 100 may select one or more pairs of edge points having the same y value in the pixel coordinate system from the edge point group a and the edge point group B. Wherein a pair of edge points comprises one edge point from edge point set a and one edge point from edge point set B. The y values of two edge points of a pair of edge points in the pixel coordinate system are the same. The electronic device 100 may calculate a distance between two edge points of a pair of edge points, which may be denoted as len (a pair of edge points). The electronic apparatus 100 may determine whether len (a pair of edge points)/len (B1B 3) is greater than a preset threshold W10. The threshold W10 may be, for example, 0.1, 0.2, or the like. The value of the threshold value W10 is not limited in the embodiment of the present application. If the above-described pairs of edge points each satisfy len (a pair of edge points)/len (B1B 3) greater than the threshold W10, the electronic apparatus 100 may determine that the left and right lower legs of the user are separated from each other and cannot be brought close to each other. I.e. the user's leg type is an XO-type leg. Otherwise, the electronic device 100 may determine that the user's leg type is a normal leg type. The greater the average value obtained by len (one pair of edge points)/len (B1B 3) of the plurality of pairs of edge points, the more serious the degree of the user XO-type leg.
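A possible sketch of the XO-leg check over pairs of same-height inner-calf edge points; the threshold stands in for W10 and the input format is an assumption.

```python
XO_LEG_THRESHOLD = 0.1  # threshold W10 (illustrative)

def is_xo_leg(edge_pairs, len_b1b3):
    """Return True if every pair of same-height inner-calf edge points is separated.

    edge_pairs: list of ((x_left, y), (x_right, y)) pairs taken between lines L6 and L7,
    one point from the inner left calf and one from the inner right calf at the same y.
    """
    if not edge_pairs:
        return False
    ratios = []
    for (xl, _), (xr, _) in edge_pairs:
        ratios.append(abs(xl - xr) / len_b1b3)
    # XO-type leg only if every pair exceeds the threshold; the mean ratio indicates severity
    return all(r > XO_LEG_THRESHOLD for r in ratios)
```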
Selection of the edge points on the human body contour is not limited to trisecting the line segment between the left knee point and the left ankle point and the line segment between the right knee point and the right ankle point. The electronic device 100 may also divide the line segment between the left knee point and the left ankle point and the line segment between the right knee point and the right ankle point into more equal parts (such as four or five equal parts) to select edge points on the human body contour, so as to determine whether the user's left and right lower legs are separated and cannot come close together. Alternatively, the electronic device 100 may bisect the line segment between the left knee point and the left ankle point and the line segment between the right knee point and the right ankle point. The electronic device 100 may select, from the edge points on the inner sides of the legs on the human body contour, edge points whose y values in the pixel coordinate system differ from the y value of the bisection point by a preset difference, and use the selected edge points to determine whether the user's left and right lower legs are separated and cannot come close together. The embodiment of the present application does not limit the method for selecting the edge points used to judge whether the user's left and right lower legs are separated and cannot come close together.
In some embodiments, in addition to the intersection B1 and the intersection B2 mentioned above, the electronic device 100 may also select multiple pairs of edge points from the edge points on the inner sides of the user's knees to determine whether the user's knees are close together. For example, the electronic device 100 may translate the straight line L4 by Δy pixels in the positive direction of the y-axis of the pixel coordinate system. The electronic device 100 may determine the two edge points at which the translated straight line intersects the inner sides of the legs on the human body contour, and determine whether the ratio of the distance between these two edge points to len(B1B3) is greater than the threshold W8. If every pair of edge points on the inner sides of the knees satisfies that the ratio of the distance between the two edge points of the pair to len(B1B3) is greater than the threshold W8, the electronic device 100 may determine that the user's leg type is an O-type leg.
Similarly, in addition to the intersection B3 and the intersection B4, the electronic device 100 may also select multiple pairs of edge points from the edge points at the user's inner ankles to determine whether the user's inner ankles are close together.
The method for judging the leg type of the user by selecting the pairs of edge points can reduce the influence of factors such as errors in the human image segmentation process, clothes worn by the user and the like on the leg type evaluation result, and improve the accuracy of the leg type evaluation result.
In the above method of evaluating the user's leg shape, the distance between the inner-knee edge points, the distance between the inner-ankle edge points, and the distance between the inner edge points of the left and right lower legs are normalized. That is, the leg shape of the user is determined by the ratio of each distance to len(B1B3). The normalization makes the above method of assessing leg type applicable to different users. In general, if the legs of two users are O-type legs of the same degree, the longer the legs, the greater the distance between the user's knees. If the legs of two users are X-type legs of the same degree, the longer the legs, the greater the distance between the user's inner ankles. If the legs of two users are XO-type legs of the same degree, the longer the legs, the greater the distance between the left and right lower legs.
Normalization is not limited to calculating the ratio of any one of the distance between the inner-knee edge points, the distance between the inner-ankle edge points, and the distance between the inner edge points of the left and right lower legs to len(B1B3). For example, the electronic device 100 may also perform normalization using the distance between the intersection B2 and the intersection B4 on the inner side of the right leg on the human body contour. The specific method of normalization is not limited in the embodiment of the present application.
In some embodiments, electronic device 100 may first determine whether the user is an X-type leg by determining whether len(B3B4)/len(B1B3) is greater than the threshold W9 described above. If the user is not an X-type leg (i.e., len(B3B4)/len(B1B3) is less than or equal to the threshold W9), electronic device 100 may further determine whether the user is an O-type leg by determining whether len(B1B2)/len(B1B3) is greater than the threshold W8. That is, in the process of evaluating the user's leg type, the embodiment of the present application does not limit the order of judging the X-type leg, the O-type leg, and the XO-type leg.
In some embodiments, the electronic device 100 may display the leg type evaluation result schematic diagram shown in fig. 8D in the leg type evaluation result. In the leg type evaluation result diagram, the electronic device 100 may display human body key points such as the left knee point, the right knee point, the left ankle point, and the right ankle point of the user on the human body image of the user. The electronic device 100 may also display lines representing the shape of the user's legs in the user's body image. The lines may be obtained by fitting the human body key points of the user's legs, or by fitting the edge points of the legs on the human body contour. Through the human body key points of the legs and the lines representing the shape of the user's legs, the user can intuitively understand his or her leg type. The embodiment of the present application does not limit the display form of the leg type evaluation result schematic diagram. More or fewer markers may also be included on the leg type evaluation result schematic diagram.
As can be seen from the above embodiments, the electronic device 100 can determine the distance between the inner-knee edge points, the distance between the inner-ankle edge points, and the distance between the inner edge points of the left and right lower legs according to the human body key points and the human body contour, and use these distances to compare the leg shape of the user performing the posture assessment with the leg shape of a human body in a normal posture, thereby obtaining the leg type evaluation result of the user. Since the positions of the edge points are definite, the distance between edge points of the human body contour at the same height on the left and right legs can better reflect the leg shape of the user. This can reduce the influence that position floating of the human body key points would have on the leg type evaluation result if the evaluation were performed using only the human body key points, and improve the accuracy of the leg type evaluation result.
(3) Scoliosis
In one possible implementation, electronic device 100 may utilize keypoints on the torso to determine whether the user has a scoliosis.
For example, as shown in fig. 8E, the electronic device 100 may determine a line segment L8 between the right shoulder point and the right hip point, and a line segment L9 between the left shoulder point and the left hip point. The electronic device 100 may calculate the length of the line segment L8 as len(L8), and the length of the line segment L9 as len(L9). The electronic device 100 may determine whether min[len(L8), len(L9)]/max[len(L8), len(L9)] is smaller than a preset threshold W11. The threshold W11 may be a value smaller than or equal to 1 and close to 1, for example, 0.95 or 0.9. The value of the threshold W11 is not limited in this embodiment. Here, min[len(L8), len(L9)] denotes the smaller of len(L8) and len(L9), and max[len(L8), len(L9)] denotes the larger of len(L8) and len(L9).
It will be appreciated that if the user does not have scoliosis, len(L8) and len(L9) should be equal or differ only slightly. Therefore, if min[len(L8), len(L9)]/max[len(L8), len(L9)] is less than the threshold W11, the electronic device 100 may determine that the user has scoliosis. If min[len(L8), len(L9)]/max[len(L8), len(L9)] is greater than or equal to the threshold W11, the electronic device 100 may determine that the user does not have scoliosis.
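The symmetry test amounts to a single ratio comparison, sketched below with an illustrative threshold in place of W11; the same function applies unchanged to the edge-point lengths len(F1F3) and len(F2F4) used in the implementation described next.

```python
SCOLIOSIS_THRESHOLD = 0.9  # threshold W11 (illustrative, close to 1)

def torso_symmetry_ok(length_a, length_b, threshold=SCOLIOSIS_THRESHOLD):
    """Return True if the two torso segment lengths are close enough (no scoliosis indicated).

    length_a: e.g. the length of the right-shoulder-to-right-hip segment len(L8)
    length_b: e.g. the length of the left-shoulder-to-left-hip segment len(L9)
    """
    ratio = min(length_a, length_b) / max(length_a, length_b)
    return ratio >= threshold
```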
In another possible implementation, the electronic device 100 may determine whether the user has scoliosis using edge points determined on the contour of the body by key points on the torso.
For example, as shown in fig. 8F, the electronic device 100 may determine a straight line L10 passing through the right shoulder point and perpendicular to the x-axis of the pixel coordinate system. The intersection point of the straight line L10 and the right shoulder part on the human body contour is F1. The electronic device 100 may determine a straight line L11 passing through the left shoulder point and perpendicular to the x-axis of the pixel coordinate system. The intersection point of the straight line L11 and the left shoulder part on the human body contour is F2. The electronic device 100 may determine a straight line L12 passing through the right hip point and perpendicular to the y-axis of the pixel coordinate system. The intersection point of the straight line L12 and the right waist part on the human body outline is F3. The intersection point F3 is an edge point on the straight line L12, where the x value in the pixel coordinate system is smaller than the x value of the right hip point and is closest to the right hip point. The electronic device 100 may determine a straight line L13 passing through the left hip point and perpendicular to the y-axis of the pixel coordinate system. The intersection point of the straight line L13 and the left waist part on the human body outline is F4. The intersection point F4 is an edge point on the straight line L13, where the x value in the pixel coordinate system is greater than the x value of the left hip point and is closest to the left hip point.
The electronic device 100 may calculate the length of the line segment F1F3 between the intersection F1 and the intersection F3 to obtain len(F1F3). The electronic device 100 may calculate the length of the line segment F2F4 between the intersection F2 and the intersection F4 to obtain len(F2F4). The electronic device 100 may determine whether min[len(F1F3), len(F2F4)]/max[len(F1F3), len(F2F4)] is smaller than the preset threshold W11 described above. If min[len(F1F3), len(F2F4)]/max[len(F1F3), len(F2F4)] is less than the threshold W11, the electronic device 100 may determine that the user has scoliosis. If it is greater than or equal to the threshold W11, the electronic device 100 may determine that the user does not have scoliosis.
In some embodiments, the electronic device 100 may determine a scoliosis assessment result schematic from the human body keypoints and the human body contours.
Fig. 8G to 8I schematically illustrate a process of determining a scoliosis evaluation result by the electronic device 100.
As shown in fig. 8G, the electronic device 100 may trisect a line segment between the neck point and the abdomen point, and determine a trisection point E1 closest to the abdomen point from the trisection points of the line segment. The electronic device 100 may determine a straight line L14 passing through the trisection point E1 and perpendicular to the y-axis of the pixel coordinate system. The intersection point of the straight line L14 and the left waist on the human body outline is E2. The intersection E2 is an edge point on the straight line L14, where the x value in the pixel coordinate system is greater than the x value of the trisection E1 and is closest to the trisection E1. The electronic device 100 may determine a straight line L15 passing through the middle points of the left and right hips and perpendicular to the y-axis of the pixel coordinate system. The intersection point of the straight line L15 and the left waist on the human body outline is E3. The intersection E3 is an edge point on the straight line L15, where the x value in the pixel coordinate system is greater than the x value of the middle point of the left and right hip, and is closest to the middle point of the left and right hip.
The electronic device 100 may translate the segment E2E3 between the intersection E2 and the intersection E3 to the left to a position where the intersection E3 coincides with the middle point of the left and right hip. The position of the intersection E2 after the above-described translation is a point E4 on the straight line L14. Thus, the electronic device 100 can obtain a line segment L16 between the neck point and the point E4, and a line segment L17 between the point E4 and the middle points of the left and right hips shown in fig. 8H.
In one possible implementation, the electronic device 100 may perform curve fitting using pixels on line segment L16 and pixels on line segment L17 to obtain a curve segment L18 between the neck point and the middle point of the left and right hips as shown in fig. 8I. The above-described method of curve fitting may include least squares curve fitting, cubic curve fitting, and the like. The method of curve fitting is not limited in this embodiment. The curved segment L18 may represent the shape of the user's spine. Wherein the curvature of curved segment L18 may be indicative of the severity of a user scoliosis. For example, the electronic device 100 may calculate the maximum curvature of the curve segment L18. The greater the maximum curvature of curved segment L18, the more severe the degree of user scoliosis. The method for determining the severity of the scoliosis of the user by the electronic device 100 according to the embodiment of the present application is not limited.
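A least-squares curve fit of the kind mentioned above can be sketched with NumPy as follows. The sampled points, the cubic fit and the use of maximum curvature as a severity index follow the description; the function name and parameters are assumptions.

```python
import numpy as np

def fit_spine_curve(points, degree=3):
    """Fit x = f(y) through pixels sampled on segments L16 and L17 by least squares.

    points: list of (x, y) pixel coordinates between the neck point and the mid-hip point.
    Returns the maximum curvature of the fitted curve, usable as a scoliosis severity index.
    """
    pts = np.asarray(points, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    coeffs = np.polyfit(y, x, degree)      # fit x as a function of y, since the spine runs vertically
    poly = np.poly1d(coeffs)
    d1, d2 = poly.deriv(1), poly.deriv(2)
    ys = np.linspace(y.min(), y.max(), 200)
    # curvature of x = f(y): |f''| / (1 + f'^2)^(3/2)
    curvature = np.abs(d2(ys)) / (1.0 + d1(ys) ** 2) ** 1.5
    return float(curvature.max())
```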
The electronic device 100 may display the scoliosis evaluation result schematic diagram shown in fig. 8I in the evaluation result of scoliosis. In the scoliosis evaluation result schematic diagram, the electronic device 100 may display the curve segment L18 obtained by the fitting on the human body image of the user. So that the user can visually see if he has scoliosis or not and the severity of the scoliosis through the curved line segment L18.
As can be seen from the above embodiments, the electronic device 100 can fit the shape of the user's spine by using the key points of the human body on the trunk of the user and the edge points determined on the contour of the human body according to the key points of the human body. In this way, the electronic device 100 may display a curve for representing the shape of the spine of the user in the scoliosis evaluation result schematic diagram, so that the user can conveniently know whether the user has scoliosis and the severity of the scoliosis.
In addition to capturing a front view of the user for posture assessment, the electronic device 100 may also capture a side view of the user and use the side view as the posture assessment image for posture assessment. The posture assessment posture that the user is required to maintain when a front view is captured may be different from the posture assessment posture that the user is required to maintain when a side view is captured.
The following describes in detail a method for implementing the electronic device 100 to collect a side view of a user and use the side view to perform a posture assessment.
In one possible implementation, the electronic device 100 may guide the user to complete the action corresponding to the posture assessment posture that the user is required to maintain when a side view is captured. Here, a posture assessment posture in which the body side faces the camera and the user stands naturally is taken as an example. The method for the electronic device 100 to obtain the posture assessment image may refer to the method flowchart shown in fig. 6A. That is, the electronic device 100 may continuously acquire images of the user, and determine whether the posture of the user is the same as the posture assessment posture according to the human body key points and the human body contour of the user in the image. The electronic device 100 may prompt the user to adjust the posture until the posture of the user is the same as the posture assessment posture, thereby obtaining the posture assessment image.
Another implementation method for determining whether the gesture of the user is the same as the posture evaluation gesture provided by the embodiment of the application is described herein.
As shown in fig. 9, the electronic device 100 may identify the human body key points of the user in the image. The electronic device 100 may determine whether the posture of the user is a standing posture according to the human body key points. In addition, the electronic device 100 may also use the left hip point and the right hip point to determine whether the user's body side is facing the camera. It will be appreciated that if the user's body side is facing the camera, the distance between the user's left hip point and right hip point in the image is small. The electronic device 100 may determine the distance between the left hip point and the right hip point based on their positions in the pixel coordinate system.
If the distance between the left hip point and the right hip point is less than the preset threshold W12, the electronic device 100 may determine that the body side of the user is facing the camera. Then, the electronic device 100 may take an image in which it is judged that the distance between the left hip point and the right hip point is smaller than the threshold W12 as the posture evaluation image. Alternatively, after determining that the distance between the left hip point and the right hip point is smaller than the threshold W12, the electronic device 100 may prompt the user to keep the posture unchanged, collect an image, and use the image as the posture evaluation image. The threshold W12 may be, for example, 5 pixels, 6 pixels, or the like. The value of the threshold W12 is not limited in the embodiment of the present application.
If the distance between the left hip point and the right hip point is greater than or equal to the threshold W12, the electronic device 100 may determine that the posture of the user is not the preset posture assessment posture, and prompt the user to adjust the posture according to the difference between the posture of the user and the posture assessment posture. In one possible implementation, the electronic device 100 may determine the x values of the left hip point and the right hip point in the pixel coordinate system. If the x value of the right hip point is less than the x value of the left hip point, the electronic device 100 may prompt the user to turn right. Here, the distance between the left hip point and the right hip point being greater than or equal to the threshold W12 while the x value of the right hip point is smaller than the x value of the left hip point may indicate that the user has turned to the right from facing the camera frontally, but has not turned far enough: the user's body side does not yet fully face the camera, and the user needs to continue turning right. Conversely, if the x value of the right hip point is greater than the x value of the left hip point, the electronic device 100 may prompt the user to turn left.
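A compact sketch of the side-facing check and the turn prompt; the threshold stands in for W12, and the turn-direction convention follows the description above (which assumes a particular orientation of the x-axis).

```python
HIP_DISTANCE_THRESHOLD_PX = 5  # threshold W12 (illustrative)

def side_facing_prompt(left_hip, right_hip):
    """Return None if the body side faces the camera, else the turn direction to prompt.

    left_hip, right_hip: (x, y) pixel coordinates of the left and right hip points.
    """
    lx, ly = left_hip
    rx, ry = right_hip
    distance = ((lx - rx) ** 2 + (ly - ry) ** 2) ** 0.5
    if distance < HIP_DISTANCE_THRESHOLD_PX:
        return None                      # body side already faces the camera
    # convention follows the description: right hip left of left hip -> keep turning right
    return "turn right" if rx < lx else "turn left"
```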
The electronic device 100 may also determine whether the body of the user is facing the camera by other methods, not limited to determining whether the body of the user is facing the camera by the left hip point and the right hip point described above.
For example, in directing the user to complete the corresponding action of the posture assessment gesture, the electronic device 100 may display the user interface 1010 shown in fig. 10A. The user interface 1010 may include a display area 1011 and a display area 1012. Wherein:
The display area 1011 may be used to display an example diagram 1011A of the posture assessment posture, as well as an action prompt 1011B. As can be seen from the example diagram 1011A, the posture assessment posture is a posture in which the body side faces the camera and the body stands naturally. The action prompt 1011B may contain the following content: please stand sideways facing the screen. In this way, the user can adjust his or her posture according to the example diagram 1011A and the action prompt 1011B to reach the preset posture assessment posture.
The display area 1012 may be used to display images captured by the electronic device 100 via the camera 193. The image may include the user performing the posture assessment. The user can view his or her posture from the images in the display area 1012. In this way, the user can intuitively understand the difference between his or her posture and the posture assessment posture, and can quickly adjust the posture.
As can be seen from the display area 1012 shown in fig. 10A, the posture of the user is different from the posture assessment posture: the user is standing with the front of the body facing the camera 193 rather than with the body side facing it. The electronic device 100 may determine the difference between the posture of the user and the posture assessment posture according to the embodiment shown in fig. 9, and display the action prompt 1011B in the display area 1011. The electronic device 100 may also voice broadcast the content in the action prompt 1011B to guide the user to adjust the posture to the posture assessment posture.
Illustratively, the user may adjust the posture based on the action prompt 1011B shown in fig. 10A. The electronic device 100 may display the user interface 1010 shown in fig. 10B. As can be seen from the display area 1012 shown in fig. 10B, the user has turned to the right by a certain angle from the front-facing posture shown in fig. 10A. The electronic device 100 may analyze the image of the user in the posture shown in the display area 1012 of fig. 10B according to the embodiment shown in fig. 9, and determine that the user has not turned far enough to the right, so the posture of the user is not the posture assessment posture. The electronic device 100 may display the action prompt 1011C shown in fig. 10B. The content of the action prompt 1011C may be: please turn right. The electronic device 100 may also voice broadcast the content of the action prompt 1011C.
Illustratively, the user may adjust the posture based on the action prompt 1011C. The electronic device 100 may display the user interface 1010 shown in fig. 10C. As can be seen from the display area 1012 shown in fig. 10C, the user has turned further to the right by a certain angle from the posture shown in the display area 1012 of fig. 10B. The electronic device 100 may analyze the image of the user in the posture shown in the display area 1012 of fig. 10C according to the embodiment shown in fig. 9, and determine that the posture of the user is the posture assessment posture. The electronic device 100 may display the action prompt 1011D shown in fig. 10C to prompt the user to maintain the posture assessment posture. The content of the action prompt 1011D may be: please hold the pose; the photo is about to be taken. The electronic device 100 may also voice broadcast the content of the action prompt 1011D. After prompting the user to maintain the posture assessment posture, the electronic device 100 may capture an image and use the captured image as the posture assessment image.
The user interface 1010 shown in fig. 10A to 10C is merely an exemplary illustration, and should not be construed as limiting the present application.
As can be seen from the embodiments shown in fig. 10A to 10C, the prompting content with which the electronic device 100 prompts the user to adjust the posture may change along with the change of the user's posture. The electronic device 100 may prompt the user about where the user's posture differs from the posture assessment posture. This lets the user know which part of his or her posture differs from the posture assessment posture, thereby better guiding the user to complete the action corresponding to the posture assessment posture.
An implementation of evaluating whether a user is humpback or not by using a side view of the user as a posture evaluation image provided in an embodiment of the present application is described herein.
In one possible implementation, the electronic device 100 may determine edge points of the user's back on the body contour using body keypoints to determine whether the user is humpback.
For example, as shown in fig. 11A, the electronic device 100 may determine the direction of the user's body according to the key points of the body. It will be appreciated that the user may rotate to the right from a standing position facing the camera to perform a corresponding action to a position where the body side faces the camera. The user can also rotate leftwards from the standing posture facing the camera to finish the corresponding actions of the posture of the body side facing the camera. That is, it is possible for the user's body to be oriented in the negative direction of the x-axis (i.e., to the left) in the pixel coordinate system, with the x-value of the edge points of the chest area being smaller than the x-value of the edge points of the back area on the contour of the human body. It is also possible that the user's body is oriented in the positive direction of the x-axis (i.e. to the right) in the pixel coordinate system, the x-value of the edge points of the chest area on the contour of the human body being greater than the x-value of the edge points of the back area. In one possible implementation, the electronic device 100 may determine the orientation of the user's body using the left and right ankle points of the user. Wherein, the electronic device 100 may compare the y value of the left ankle point and the y value of the right ankle point in the pixel coordinate system. If the y value of the left ankle point is smaller than the y value of the right ankle point, the electronic device 100 may determine that the body of the user is oriented in the negative direction of the x-axis in the pixel coordinate system. If the y value of the left ankle point is greater than the y value of the right ankle point, the electronic device 100 may determine that the body of the user is oriented in the positive direction of the x-axis in the pixel coordinate system. The method for determining the direction of the body of the user by the electronic device 100 according to the embodiment of the present application is not limited.
In the subsequent embodiments of the present application, the negative direction of the body of the user toward the x-axis in the pixel coordinate system will be described as an example. Those skilled in the art will appreciate that: the method of judging whether or not the user is humpback and the severity of humpback in the case where the user's body is oriented in the negative direction of the x-axis in the pixel coordinate system is the same as the method of judging whether or not the user is humpback and the severity of humpback in the case where the user's body is oriented in the positive direction of the x-axis in the pixel coordinate system. The embodiments of the present application do not develop a description of a method of determining whether a user is humpbacked and the severity of humpback in the case where the user's body is oriented in the positive direction of the x-axis in the pixel coordinate system.
After determining the orientation of the user's body, the electronic device 100 may select a back curve on the body contour. Specifically, the electronic device 100 may determine a straight line L19 passing through the left shoulder point and perpendicular to the y-axis of the pixel coordinate system. The intersection of the straight line L19 and the upper back region of the human body contour is G1. That is, the intersection G1 is an edge point on the straight line L19 whose x value in the pixel coordinate system is larger than the x value of the left shoulder point and which is closest to the left shoulder point. The electronic device 100 may trisect the line segment between the neck point and the abdomen point and determine the trisection point G2 closest to the abdomen point among the trisection points of the line segment. The electronic device 100 may determine a straight line L20 passing through the trisection point G2 and perpendicular to the y-axis of the pixel coordinate system. The intersection of the straight line L20 and the upper back region of the human body contour is G3. That is, the intersection point G3 is an edge point on the straight line L20 whose x value in the pixel coordinate system is larger than the x value of the trisection point G2 and which is closest to the trisection point G2.
The electronic device 100 may select the curve segment of the back region on the human body contour between the intersection point G1 and the intersection point G3, obtaining a back curve segment G1G3. The y values, in the pixel coordinate system, of the edge points on the back curve segment G1G3 are greater than or equal to the y value of G3 and less than or equal to the y value of G1. The x values of the edge points on the back curve segment G1G3 in the pixel coordinate system are all larger than the x value of the left shoulder point.
As shown in fig. 11B, the electronic device 100 may calculate the area of a circle having a straight line segment between the intersection point G1 and the intersection point G3 as a diameter. For example, the area of the circle is S1. The electronic device 100 may also calculate the area of the closed region constituted by the straight line segment between the intersection point G1 and the intersection point G3 and the back curve segment G1G3. For example, the area of the closed region is S2. The electronic device 100 can determine whether the user is humpback according to the ratio of S2 to S1. If S2/S1 is less than the preset threshold W13, the electronic device 100 may determine that the user has no humpback. The threshold W13 may be, for example, 0.05, 0.1, or the like. The value of the threshold W13 is not limited in the embodiment of the present application. If S2/S1 is greater than or equal to threshold W13, electronic device 100 may determine that the user is humpback. Wherein, the larger the S2/S1, the more serious the humpback degree of the user.
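The area ratio S2/S1 can be computed from the ordered edge points of the back curve segment, for example with the shoelace formula; the threshold stands in for W13 and the input format is an assumption.

```python
import math

HUMPBACK_AREA_THRESHOLD = 0.05  # threshold W13 (illustrative)

def humpback_area_ratio(curve_points):
    """Compute S2/S1 for the back curve segment G1G3.

    curve_points: (x, y) edge points ordered along the back curve from G1 to G3.
    S2 is the area enclosed by the curve and the straight chord G1G3 (shoelace formula);
    S1 is the area of the circle whose diameter is the chord G1G3.
    """
    g1, g3 = curve_points[0], curve_points[-1]
    chord = math.dist(g1, g3)
    s1 = math.pi * (chord / 2.0) ** 2
    # shoelace formula over the polygon formed by the curve closed with the chord
    s2 = 0.0
    pts = list(curve_points)
    for (x0, y0), (x1, y1) in zip(pts, pts[1:] + pts[:1]):
        s2 += x0 * y1 - x1 * y0
    s2 = abs(s2) / 2.0
    return s2 / s1

def is_humpback(curve_points):
    """Return True if the normalized back-bulge area exceeds the threshold."""
    return humpback_area_ratio(curve_points) >= HUMPBACK_AREA_THRESHOLD
```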
Optionally, the electronic device 100 may also smooth the back curve segment G1G3 by using edge points on the back curve segment G1G 3. The embodiment of the present application does not limit the specific method of the smoothing processing. The electronic device 100 may calculate the curvature of the curve segment after the above smoothing process, and use the maximum curvature as the humpback index. The electronic device 100 may select one pixel at preset intervals (e.g., 4 pixels, 5 pixels, etc.) on the smoothed curve segment to calculate the curvature of the curve segment at the position where the one pixel is located. This can reduce the amount of computation of the electronic device 100 and improve the efficiency of determining whether the user is humpbacked.
If the humpback index is less than a preset humpback index threshold, the electronic device 100 may determine that the user is not humpbacked. If the humpback index is greater than or equal to the preset humpback index threshold, the electronic device 100 may determine that the user is humpbacked. The greater the humpback index, the more severe the user's humpback.
The method for the electronic device 100 to select the back curve is not limited in the embodiment of the present application. For example, the electronic device 100 may also select the back curve according to the bisection point of the line segment between the neck point and the abdomen point, or according to a division point obtained by dividing the line segment between the neck point and the abdomen point into four or more equal parts.
In some embodiments, the electronic device 100 may display a body image of the user standing with the body side facing the camera in the evaluation of humpback, and mark the back curve segment G1G3 shown in fig. 11A on the body image. The degree of curvature of the dorsal curve segment G1G3 may be indicative of the severity of the user's humpback. In this way, the user can intuitively understand whether or not he or she is humpback and the severity of humpback through the back curve segment G1G3.
From the above embodiments, humpback is mainly reflected in the back bulge of the user. The electronic device 100 may first distinguish between the chest area and the back area in the human body contour, and then select an edge point in the back area to determine whether the user is humpback. The method can avoid confusion of the edge points of the chest area and the back area in the judging process, and improves the accuracy of humpback detection results. And the position of the edge point of the back area on the human body contour is determined, and whether the user humps or not and the severity of humpback can be more accurately judged by utilizing the edge point of the back area.
In some embodiments, the electronic device 100 may provide an option for posture assessment. The posture assessment options may include: high and low shoulders, legs, scoliosis and humpback. The electronic device 100 may determine whether only the posture evaluation image of the front side of the human body needs to be photographed, or only the posture evaluation image of the side of the human body needs to be photographed, or both the posture evaluation image of the front side of the human body and the posture evaluation image of the side of the human body, according to the option of the posture evaluation selected by the user.
For example, when the selected posture assessment options include one or more of high and low shoulders, leg shape, and scoliosis, the electronic device 100 may guide the user to complete the action corresponding to posture assessment posture 1 (e.g., facing the camera, with both arms straightened and kept at a distance from the two sides of the body, and both legs naturally upright and closed) according to the embodiments shown in fig. 7A to 7D, so as to capture a posture assessment image of the front of the human body. The electronic device 100 may perform posture assessment on the user according to the posture assessment image of the front of the human body, to obtain one or more of a high-low shoulder evaluation result, a leg-shape evaluation result, and a scoliosis evaluation result. When the selected posture assessment options include humpback, the electronic device 100 may guide the user to complete the action corresponding to posture assessment posture 2 (e.g., the body side facing the camera, standing naturally) according to the embodiments shown in fig. 10A to 10C, so as to capture a posture assessment image of the side of the human body. The electronic device 100 may perform posture assessment on the user according to the posture assessment image of the side of the human body, to obtain a humpback evaluation result.
In some embodiments, the electronic device 100 may guide the user to complete the actions corresponding to posture assessment posture 1 and posture assessment posture 2, so as to obtain both a posture assessment image of the front of the human body and a posture assessment image of the side of the human body. The electronic device 100 may then perform posture assessment on the user to obtain a high-low shoulder evaluation result, a leg-shape evaluation result, a scoliosis evaluation result, and a humpback evaluation result.
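As a rough sketch of the mapping from selected options to the images that need to be captured (the option names and return format are assumptions, not taken from this embodiment):

FRONT_VIEW_ITEMS = {"high_low_shoulders", "leg_shape", "scoliosis"}   # assessed from the front view
SIDE_VIEW_ITEMS = {"humpback"}                                        # assessed from the side view

def required_views(selected_options):
    # Decide which posture assessment images must be captured for the selected options.
    views = []
    if FRONT_VIEW_ITEMS & set(selected_options):
        views.append("front")   # corresponds to posture assessment posture 1
    if SIDE_VIEW_ITEMS & set(selected_options):
        views.append("side")    # corresponds to posture assessment posture 2
    return views

# e.g. required_views({"scoliosis", "humpback"}) -> ["front", "side"]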
In some embodiments, the electronic device 100 may guide the user to complete the action corresponding to posture assessment posture 3. Posture assessment posture 3 may be a posture in which both arms are kept at a distance from the two sides of the body and both legs are naturally upright and closed. Alternatively, posture assessment posture 3 may be a posture in which both arms are straightened, both arms are kept at a distance from the two sides of the body, and both legs are naturally upright and closed. The electronic device 100 may capture images of the user at a plurality of angles (e.g., a front view, a side view, etc.) while the user holds posture assessment posture 3. In one possible implementation, the electronic device 100 may first guide the user to complete the action corresponding to posture assessment posture 3 and collect a front view of the user holding posture assessment posture 3. The electronic device 100 may then prompt the user to keep posture assessment posture 3 and rotate 90° to the left (or right), and collect a side view of the user holding posture assessment posture 3. The electronic device 100 may also prompt the user to rotate by an angle other than 90° (e.g., 45°) while keeping posture assessment posture 3, and acquire images of the user holding posture assessment posture 3 at other angles.
The electronic device 100 may perform 3D modeling using the images of the plurality of angles to obtain a 3D human body model holding posture assessment posture 3. The electronic device 100 may perform posture assessment using the front view and the side view of the 3D human body model to obtain a high-low shoulder evaluation result, a leg-shape evaluation result, a scoliosis evaluation result, and a humpback evaluation result. For the method of performing posture assessment using the front view and the side view of the 3D human body model, reference may be made to the methods in the foregoing embodiments that use the posture assessment image of the front of the human body and the posture assessment image of the side of the human body. Details are not repeated here.
It can be appreciated that the human body contour determined using the 3D human body model is generally smoother than the human body contour determined by performing portrait segmentation directly on an image captured of the user, so it can better reflect the user's posture and improve the accuracy of the posture assessment.
Fig. 12A to 12E are schematic views illustrating user interfaces of some of the posture evaluation results provided in the embodiments of the present application.
In some embodiments, upon receiving a user operation for performing posture assessment, the electronic device 100 may acquire an image of the user and perform the posture assessment. The electronic device 100 may store the posture assessment results. In this way, the user can view the results of this and previous posture assessments.
As shown in fig. 12A, the electronic device 100 may display a user interface 1210. The user interface 1210 may be used by the user to view the results of one or more posture assessments. The user interface 1210 may include a title 1211, a posture analysis report option 1212, a posture analysis report option 1213, a posture analysis report option 1214, and a human body schematic 1215. Wherein:
title 1211 may be used to indicate the content contained in the user interface 1210. For example, the title 1211 may be the text "history report". That is, the user can look up his or her previous posture assessments through the user interface 1210 and obtain the corresponding posture assessment results.
The posture analysis report options 1212 to 1214 may indicate the posture assessment results obtained when the user performed posture assessment at different times. For example, the posture analysis report option 1212 may include the time (e.g., 10:22 on December 12, 2021) at which the posture assessment result corresponding to the option was obtained. This time is the time at which the user performed the posture assessment. The posture analysis report option 1212 may be used to trigger the electronic device 100 to display the posture assessment result corresponding to the option. For the posture analysis report option 1213 and the posture analysis report option 1214, reference may be made to the introduction of the posture analysis report option 1212.
The human body schematic 1215 may be used to present a human body image of the user. The human body image may be a human body image in an image acquired by the electronic device 100. Alternatively, the human body image may be the human body contour of the user. Alternatively, the human body image may be a 3D human body model obtained by performing 3D modeling on the user. The embodiment of the present application does not limit the display form of the human body image. When one of the posture analysis report options is in the selected state, the posture assessment result corresponding to that option may be marked on the human body image. For example, the posture analysis report option 1212 is in the selected state. In the posture assessment result corresponding to the posture analysis report option 1212, the user has high and low shoulders, humpback, and scoliosis, but a normal leg shape. Accordingly, in the human body schematic 1215, the shoulder part of the human body image is marked with high and low shoulders, the back part is marked with humpback, the torso part is marked with scoliosis, and the leg part is marked as a normal leg shape. In this way, the user can quickly learn, on the user interface 1210, the posture assessment result obtained from this posture assessment.
In response to an operation, such as a touch operation, on the posture analysis report option 1212, the electronic device 100 may display a user interface 1220 shown in fig. 12B. The user interface 1220 may include a title 1221, a time 1222, a high-low shoulder option 1223, a scoliosis option 1224, a leg option 1225, a humpback option 1226, a posture assessment result schematic 1227, a shoulder schematic 1223A, a high-low shoulder result analysis display area 1223B, and a course recommendation display area 1223C. Wherein:
a title 1221 may be used to indicate the content contained in the user interface 1220. For example, the title 1221 may be the text "posture assessment report".
Time 1222 may be used to indicate the time at which the posture assessment report presented by the user interface 1220 was obtained, for example, 10:22 on December 12, 2021.
The posture assessment result schematic 1227 may be used to indicate the user's posture problems. The posture assessment result schematic 1227 may include a human body image of the user facing the camera. The human body image may include marks (e.g., dots, lines, etc.) indicating the posture. In one possible implementation, when an evaluation option (e.g., the high-low shoulder option 1223, the scoliosis option 1224, or the leg option 1225) corresponding to a posture problem that can be evaluated using the front view of the user is in the selected state, the electronic device 100 may display the posture assessment result schematic 1227 shown in fig. 12B. For example, the human body image in the posture assessment result schematic 1227 may include a mark indicating whether the user has high and low shoulders, a mark indicating whether the user has scoliosis, and a mark indicating the user's leg shape. For these marks, reference may be made to the embodiments shown in fig. 8B, fig. 8D, and fig. 8I described above; details are not repeated here.
The high-low shoulder option 1223, the scoliosis option 1224, the leg option 1225, and the humpback option 1226 may be used by the user to view, respectively, the high-low shoulder, scoliosis, leg-shape, and humpback evaluation results obtained from this posture assessment.
In response to a user operation on the high-low shoulder option 1223, the electronic device 100 may display the shoulder schematic 1223A, the high-low shoulder result analysis display area 1223B, and the course recommendation display area 1223C on the user interface 1220. The shoulder schematic 1223A may be a partial enlarged view of the shoulders in the posture assessment result schematic 1227. The shoulder schematic 1223A may help the user view the condition of his or her shoulders more clearly. The high-low shoulder result analysis display area 1223B may display a high-low shoulder result analysis. The high-low shoulder result analysis may include which shoulder is higher and by how much (e.g., the right shoulder is 0.9 cm higher than the left shoulder), the severity of the high and low shoulders (e.g., mild), and so on. The embodiment of the present application does not limit the content of the high-low shoulder result analysis. When it is determined that the user has high and low shoulders, the electronic device 100 may also recommend courses for improving high and low shoulders to the user. The course recommendation display area 1223C may contain the courses recommended by the electronic device 100.
As shown in fig. 12C, in response to an operation on the scoliosis option 1224, the electronic device 100 may display a torso schematic 1224A, a scoliosis result analysis display area 1224B, and a course recommendation display area 1224C on the user interface 1220. The torso schematic 1224A may be a partial enlarged view of the chest and abdomen in the posture assessment result schematic 1227. The torso schematic 1224A may help the user view more clearly whether he or she has scoliosis. The scoliosis result analysis display area 1224B may display a scoliosis result analysis. The scoliosis result analysis may include the severity of the scoliosis (e.g., moderate), and the like. When it is determined that the user has scoliosis, the electronic device 100 may also recommend courses for improving scoliosis to the user. The course recommendation display area 1224C may include the courses recommended by the electronic device 100.
As shown in fig. 12D, in response to an operation on the leg option 1225, the electronic device 100 may display a leg schematic 1225A and a leg result analysis display area 1225B on the user interface 1220. The leg schematic 1225A may be a partial enlarged view of the legs in the posture assessment result schematic 1227. The leg schematic 1225A may help the user view the shape of his or her legs more clearly. The leg result analysis display area 1225B may display a leg-shape result analysis. The leg-shape result analysis may include the user's leg type (e.g., normal leg shape, X-type leg, O-type leg, or XO-type leg) and, when the leg type is an X-type, O-type, or XO-type leg, its severity, and so on. In some embodiments, when it is determined that the user's leg shape is not normal, the electronic device 100 may also recommend courses for improving the X-type leg, the O-type leg, or the XO-type leg to the user.
As shown in fig. 12E, in response to an operation on the humpback option 1226, the electronic device 100 may display a posture assessment result schematic 1228, a back schematic 1226A, a humpback result analysis display area 1226B, and a course recommendation display area 1226C on the user interface 1220. The posture assessment result schematic 1228 may be used to indicate the user's posture problems. The posture assessment result schematic 1228 may include a human body image of the user with the body side facing the camera. The human body image may include marks (e.g., dots, lines, etc.) indicating the posture. In one possible implementation, when an evaluation option (e.g., the humpback option 1226) corresponding to a posture problem that can be evaluated using a side view of the user is in the selected state, the electronic device 100 may display the posture assessment result schematic 1228 shown in fig. 12E. For example, the human body image in the posture assessment result schematic 1228 may include a curve indicating the degree of curvature of the user's back. For this mark, reference may be made to the embodiment shown in fig. 11A described above; details are not repeated here.
The back schematic 1226A may be a partial enlarged view of the back in the posture assessment result schematic 1228. The back schematic 1226A may help the user view more clearly, from the side of the body, the shape of his or her back. The humpback result analysis display area 1226B may display a humpback result analysis. The humpback result analysis may include the humpback index, the severity of the humpback (e.g., mild), and the like. When it is determined that the user is humpbacked, the electronic device 100 may also recommend courses for improving humpback to the user. The course recommendation display area 1226C may contain the courses recommended by the electronic device 100.
The user interfaces illustrated in fig. 12A to 12E are merely exemplary and should not be construed as limiting the present application.
As can be seen from the embodiments shown in fig. 12A to 12E, the electronic device 100 can display the results of one or more posture assessments performed by the user, and recommend courses for improving the corresponding posture problems. In this way, the user can learn about his or her posture from the results of the posture assessment performed on the electronic device 100, and adjust his or her posture in time according to the courses recommended by the electronic device 100, thereby improving his or her health.
It should be noted that any feature in any embodiment of the present application, or any part of any feature, may be combined provided that no contradiction or conflict occurs, and the combined technical solution also falls within the scope of the embodiments of the present application.
The above embodiments are merely intended to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, persons of ordinary skill in the art should understand that they may still modify the technical solutions described in the foregoing embodiments or make equivalent replacements to some technical features thereof, and that such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present application.

Claims (22)

1. An image-based gesture judging method, which is characterized in that the method is applied to electronic equipment, the electronic equipment comprises a camera, and the method comprises the following steps:
collecting a first image through the camera;
identifying a first object from the first image;
determining a first set of human keypoints for the first object;
obtaining a second image according to the first image and the first object, wherein the pixel value of the pixel of the area where the first object is located in the second image is different from the pixel values of other pixels in the second image;
and judging whether the gesture of the first object is a first gesture according to the first group of human body key points and the second image.
2. The method according to claim 1, wherein the determining whether the gesture of the first object is the first gesture according to the first set of human keypoints and the second image specifically comprises:
determining a first human body outline according to the second image, wherein the first human body outline comprises pixels at the junction of pixels with first pixel values and pixels with second pixel values on the second image, the pixels with the first pixel values on the second image are pixels in the area where the first object is located in the second image, and the pixels with the second pixel values in the second image are other pixels in the second image;
and judging whether the gesture of the first object is the first gesture according to the first group of human body key points and the first human body outline.
3. The method according to claim 2, wherein the determining whether the gesture of the first object is the first gesture according to the first set of human keypoints and the first human body contour specifically comprises:
determining a first group of edge points on the first human contour according to the first group of human key points;
and judging whether the gesture of the first object is the first gesture according to the first group of edge points.
4. A method according to any one of claims 1-3, wherein prior to the capturing of the first image by the camera, the method further comprises:
and displaying a first interface, wherein the first interface comprises first prompt information, and the first prompt information is used for prompting the first object to keep the first gesture.
5. The method of claim 4, wherein after determining whether the pose of the first object is the first pose based on the first set of human keypoints and the second image, the method further comprises:
and displaying a second interface under the condition that the gesture of the first object is not the first gesture, wherein the second interface comprises second prompt information which is determined according to a first difference between the gesture of the first object and the first gesture, and the second prompt information is used for prompting the first object to eliminate the first difference.
6. The method of claim 4 or 5, wherein prior to displaying the first interface, the method further comprises:
displaying a third interface, the third interface including a first option;
receiving a first operation on the first option, wherein the first interface is displayed by the electronic device based on the first operation.
7. The method according to any one of claims 1-6, further comprising:
and under the condition that the gesture of the first object is judged to be the first gesture, judging whether the first object has a first body state problem according to the first group of human body key points and the second image.
8. The method according to any one of claims 1-7, further comprising:
and displaying a fourth interface, wherein the fourth interface comprises third prompt information, and the third prompt information is used for prompting the first object to keep a second gesture.
9. The method of claim 8, wherein the third interface further comprises a second option, and wherein prior to the displaying the fourth interface, the method further comprises:
and receiving a second operation on the second option, wherein the fourth interface is displayed by the electronic equipment based on the second operation.
10. The method of claim 8 or 9, wherein after the fourth interface is displayed, the method further comprises:
collecting a third image through the camera;
identifying the first object from the third image;
determining a second set of human keypoints for the first object;
and judging whether the gesture of the first object is a second gesture according to the second group of human body key points, wherein the second gesture is different from the first gesture.
11. The method according to claim 10, wherein the method further comprises:
obtaining a fourth image according to the third image and the first object under the condition that the gesture of the first object is judged to be the second gesture, wherein the pixel value of a pixel in the area where the first object is located in the fourth image is different from the pixel values of other pixels in the fourth image;
judging whether the first object has a second body state problem according to the second group of human body key points and the fourth image, wherein the second body state problem is different from the first body state problem.
12. The method of any of claims 2-11, wherein the first gesture comprises: both arms being placed at the two sides of the body with the distance kept from the waist falling within a first distance range, and the first group of human body key points comprises: a first left wrist point and a first right wrist point; the determining, according to the first group of human body key points and the first human body contour, whether the gesture of the first object is the first gesture specifically includes:
and judging whether the first straight line where the first left wrist point and the first right wrist point are positioned has 6 intersection points with the first human body outline.
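For illustration only (not part of the claims), the intersection-counting check of claim 12 can be sketched as follows; the contour is assumed to be a closed polygon of (x, y) pixel coordinates.

import numpy as np

def count_line_contour_intersections(wrist_l, wrist_r, contour, eps=1e-9):
    # Count how many times the straight line through the two wrist points crosses
    # the human body contour (an (N, 2) array of points forming a closed polygon).
    # Six crossings (left arm, torso, right arm) suggest both arms are held away
    # from the body as required by the first gesture.
    p = np.asarray(wrist_l, dtype=float)
    d = np.asarray(wrist_r, dtype=float) - p     # direction of the wrist line
    n = np.array([-d[1], d[0]])                  # normal of the line

    pts = np.asarray(contour, dtype=float)
    s = (pts - p) @ n                            # signed side of each contour point
    sign_changes = (s * np.roll(s, -1)) < -eps   # neighbouring points on opposite sides
    return int(np.sum(sign_changes))

# usage sketch:
# pose_ok = count_line_contour_intersections(left_wrist, right_wrist, body_contour) == 6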
13. The method of any of claims 2-12, wherein the first gesture comprises: the legs are naturally upright and closed, and the first group of human body key points comprise: a first left hip point, a first left knee point, a first left ankle point, a first right hip point, a first right knee point, a first right ankle point; the determining, according to the first group of human body key points and the first human body contour, whether the gesture of the first object is the first gesture specifically includes:
judging, according to a first left leg distance between the first left hip point and the first left knee point, a second left leg distance between the first left knee point and the first left ankle point, and a third left leg distance between the first left hip point and the first left ankle point, whether a first distance obtained by adding the first left leg distance to the second left leg distance and subtracting the third left leg distance is smaller than a first threshold value; judging, according to a first right leg distance between the first right hip point and the first right knee point, a second right leg distance between the first right knee point and the first right ankle point, and a third right leg distance between the first right hip point and the first right ankle point, whether a second distance obtained by adding the first right leg distance to the second right leg distance and subtracting the third right leg distance is smaller than the first threshold value;
judging whether a first line segment between the first left knee point and the first right knee point has 2 intersection points with the first human body contour, and, when the first line segment has 2 intersection points with the first human body contour, judging whether a third distance between the 2 intersection points of the first line segment and the first human body contour is smaller than a second threshold value; judging whether a second line segment between the first left ankle point and the first right ankle point has 2 intersection points with the first human body contour, and, when the second line segment has 2 intersection points with the first human body contour, judging whether a fourth distance between the 2 intersection points of the second line segment and the first human body contour is smaller than a third threshold value.
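A minimal sketch of the checks in claim 13, assuming pixel coordinates for the key points and assumed threshold values; the knee-line and ankle-line intersection distances are taken as precomputed inputs.

import math

def legs_naturally_closed(kp, knee_gap, ankle_gap, t1=5.0, t2=20.0, t3=20.0):
    # kp maps key-point names to (x, y); knee_gap / ankle_gap are the distances
    # between the two intersection points of the knee line / ankle line with the
    # human body contour; t1, t2, t3 are the first, second and third thresholds.
    def straightness(hip, knee, ankle):
        # (hip-knee) + (knee-ankle) - (hip-ankle): close to zero for a straight leg
        return math.dist(hip, knee) + math.dist(knee, ankle) - math.dist(hip, ankle)

    left_straight = straightness(kp["l_hip"], kp["l_knee"], kp["l_ankle"]) < t1
    right_straight = straightness(kp["r_hip"], kp["r_knee"], kp["r_ankle"]) < t1
    return left_straight and right_straight and knee_gap < t2 and ankle_gap < t3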
14. The method of any one of claims 7-13, wherein the first state problem comprises one or more of: high and low shoulders, X-shaped legs, O-shaped legs, XO-shaped legs and scoliosis.
15. The method of claim 14, wherein the first set of human keypoints comprises a first left shoulder point and a first right shoulder point, the first set of edge points comprising a first left shoulder edge point and a first right shoulder edge point, the first left shoulder edge point determined from a left shoulder region of the first left shoulder point on the first human contour, the first right shoulder edge point determined from a right shoulder region of the first right shoulder point on the first human contour;
judging whether the first object has a first body state problem according to the first group of human body key points and the second image, specifically comprising:
determining a first included angle between a straight line where the first left shoulder edge point and the first right shoulder edge point are located and a horizontal line;
and judging whether the first included angle is larger than a first angle threshold value.
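For illustration only, the angle check of claim 15 can be sketched as follows (the threshold value in the usage note is an assumption):

import math

def high_low_shoulder_angle(left_shoulder_edge, right_shoulder_edge):
    # Angle, in degrees, between the line through the two shoulder edge points
    # and the horizontal; it is then compared against the first angle threshold.
    dx = right_shoulder_edge[0] - left_shoulder_edge[0]
    dy = right_shoulder_edge[1] - left_shoulder_edge[1]
    return abs(math.degrees(math.atan2(dy, dx)))

# usage sketch: has_high_low_shoulders = high_low_shoulder_angle(l_pt, r_pt) > 3.0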
16. The method of claim 14 or 15, wherein the first set of human keypoints comprises a first left knee point, a first right knee point, a first left ankle point, and a first right ankle point, and the first set of edge points comprises a first left knee inner edge point, a first right knee inner edge point, a first left foot inner edge point, a first right foot inner edge point, and M pairs of calf inner edge points; the first left knee inner edge point and the first right knee inner edge point are the intersections of a line segment between the first left knee point and the first right knee point with the first human contour, and the first left foot inner edge point and the first right foot inner edge point are the intersections of a line segment between the first left ankle point and the first right ankle point with the first human contour; each pair of the M pairs of calf inner edge points comprises a left calf inner edge point and a right calf inner edge point located at the same height, wherein the left calf inner edge point is a pixel on the first human contour between the first left knee inner edge point and the first left foot inner edge point, the right calf inner edge point is a pixel on the first human contour between the first right knee inner edge point and the first right foot inner edge point, and M is a positive integer;
judging whether the first object has a first body state problem according to the first group of human body key points and the second image, specifically comprising:
determining a fifth distance between the first left knee inner edge point and the first left foot inner edge point, and a sixth distance between the first left knee inner edge point and the first right knee inner edge point, and determining whether a first ratio of the sixth distance to the fifth distance is greater than a fourth threshold;
determining a seventh distance between the first left foot inner edge point and the first right foot inner edge point if the first ratio is less than or equal to the fourth threshold, and determining whether a second ratio of the seventh distance to the sixth distance is greater than a fifth threshold;
and judging, under the condition that the second ratio is smaller than or equal to the fifth threshold value, whether the ratio of the distance between any pair of the M pairs of calf inner edge points to the sixth distance is larger than a sixth threshold value.
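For illustration only, the decision flow of claim 16 can be sketched as follows; the threshold values and the mapping of each branch to a leg type (O-type, X-type, XO-type) are assumptions based on the order of the checks, not statements from the claims.

import math

def classify_leg_type(l_knee_in, r_knee_in, l_foot_in, r_foot_in,
                      calf_pairs, t4=0.2, t5=1.5, t6=1.5):
    # Inputs are inner edge points on the human body contour; calf_pairs is a list
    # of (left, right) calf inner edge points taken at the same heights.
    knee_gap = max(math.dist(l_knee_in, r_knee_in), 1e-6)   # sixth distance (guarded against zero)
    shank_len = max(math.dist(l_knee_in, l_foot_in), 1e-6)  # fifth distance

    if knee_gap / shank_len > t4:                            # knees clearly apart
        return "O-type leg"

    ankle_gap = math.dist(l_foot_in, r_foot_in)              # seventh distance
    if ankle_gap / knee_gap > t5:                            # ankles far apart relative to knees
        return "X-type leg"

    for left, right in calf_pairs:                           # calves apart while knees and ankles are close
        if math.dist(left, right) / knee_gap > t6:
            return "XO-type leg"
    return "normal leg shape"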
17. The method of any of claims 14-16, wherein the first set of human keypoints comprises a first left shoulder point, a first right shoulder point, a first left hip point, and a first right hip point, and the first set of edge points comprises a first left shoulder edge point, a first right shoulder edge point, a first left hip edge point, and a first right hip edge point, the first left shoulder edge point being determined from a left shoulder region of the first left shoulder point on the first human contour, the first right shoulder edge point being determined from a right shoulder region of the first right shoulder point on the first human contour, the first left hip edge point being determined from a left waist region of the first left hip point on the first human contour, and the first right hip edge point being determined from a right waist region of the first right hip point on the first human contour;
judging whether the first object has a first body state problem according to the first group of human body key points and the second image, specifically comprising:
determining an eighth distance between the first left shoulder edge point and the first left hip edge point, determining a ninth distance between the first right shoulder edge point and the first right hip edge point;
judging whether a third ratio of the larger one of the eighth distance and the ninth distance to the smaller one of the eighth distance and the ninth distance is greater than a seventh threshold value.
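A minimal sketch of the torso-symmetry check in claim 17; reading the "third ratio" as the larger of the two side lengths divided by the smaller one is an assumption, as is the threshold value.

import math

def scoliosis_suspected(l_shoulder_edge, l_hip_edge, r_shoulder_edge, r_hip_edge, t7=1.05):
    # Compare the left and right torso side lengths (shoulder edge point to
    # waist/hip edge point); marked asymmetry is taken as a sign of scoliosis.
    left_len = math.dist(l_shoulder_edge, l_hip_edge)     # eighth distance
    right_len = math.dist(r_shoulder_edge, r_hip_edge)    # ninth distance
    ratio = max(left_len, right_len) / max(min(left_len, right_len), 1e-6)
    return ratio > t7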
18. The method of any one of claims 7-13, wherein the first state problem comprises: humpback.
19. The method of claim 18, wherein the first set of edge points includes edge points that make up a first curve segment of the upper back region of the first human contour;
judging whether the first object has a first body state problem according to the first group of human body key points and the second image, specifically comprising:
determining a first area of a closed region formed by a first straight line segment between the two end points of the first curve segment and the first curve segment, determining a second area of a circle taking the first straight line segment as a diameter, and judging whether a fourth ratio of the first area to the second area is larger than an eighth threshold value; or
carrying out smoothing processing on the first curve segment to obtain a first smoothed curve segment, and judging whether the maximum curvature of the first smoothed curve segment is larger than a ninth threshold value.
20. An electronic device comprising a camera for capturing images, a memory for storing a computer program, and a processor for invoking the computer program to cause the electronic device to perform the method of any of claims 1-19.
21. A computer readable storage medium comprising instructions which, when run on an electronic device, cause the electronic device to perform the method of any one of claims 1-19.
22. A computer program product comprising computer instructions which, when run on an electronic device, cause the electronic device to perform the method of any one of claims 1-19.
CN202210073959.2A 2022-01-21 2022-01-21 Gesture judging method based on image and electronic equipment Pending CN116503892A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202210073959.2A CN116503892A (en) 2022-01-21 2022-01-21 Gesture judging method based on image and electronic equipment
PCT/CN2023/070938 WO2023138406A1 (en) 2022-01-21 2023-01-06 Image-based posture determination method, and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210073959.2A CN116503892A (en) 2022-01-21 2022-01-21 Gesture judging method based on image and electronic equipment

Publications (1)

Publication Number Publication Date
CN116503892A true CN116503892A (en) 2023-07-28

Family

ID=87320761

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210073959.2A Pending CN116503892A (en) 2022-01-21 2022-01-21 Gesture judging method based on image and electronic equipment

Country Status (2)

Country Link
CN (1) CN116503892A (en)
WO (1) WO2023138406A1 (en)

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101511450B1 (en) * 2013-12-09 2015-04-10 포항공과대학교 산학협력단 Device and method for tracking the movement of an object using ridge data
US11036975B2 (en) * 2018-12-14 2021-06-15 Microsoft Technology Licensing, Llc Human pose estimation
CN110495889B (en) * 2019-07-04 2022-05-27 平安科技(深圳)有限公司 Posture evaluation method, electronic device, computer device, and storage medium
CN111079513B (en) * 2019-10-28 2022-01-18 珠海格力电器股份有限公司 Posture reminding method and device, mobile terminal and storage medium
CN111753721A (en) * 2020-06-24 2020-10-09 上海依图网络科技有限公司 Human body posture recognition method and device
CN112070031A (en) * 2020-09-09 2020-12-11 中金育能教育科技集团有限公司 Posture detection method, device and equipment

Also Published As

Publication number Publication date
WO2023138406A1 (en) 2023-07-27


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination