WO2023138406A1 - Image-based posture judgment method and electronic device - Google Patents

Image-based posture judgment method and electronic device

Info

Publication number
WO2023138406A1
Authority
WO
WIPO (PCT)
Prior art keywords
point
posture
human body
electronic device
user
Application number
PCT/CN2023/070938
Other languages
English (en)
French (fr)
Inventor
马春晖
黄磊
陈霄汉
赵杰
魏鹏
Original Assignee
华为技术有限公司
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Publication of WO2023138406A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 Movements or behaviour, e.g. gesture recognition

Definitions

  • the present application relates to the technical field of terminals, and in particular to an image-based posture judgment method and an electronic device.
  • the present application provides an image-based posture judgment method and an electronic device.
  • the method can judge the user's posture by identifying key points and the contour of the user's human body in an image, improving the accuracy of posture judgment.
  • the present application provides an image-based posture judgment method.
  • the method can be applied to electronic devices.
  • the electronic device contains a camera.
  • the electronic device can collect the first image through the camera.
  • the electronic device can recognize the first object according to the first image, and determine a first group of key points of the human body of the first object.
  • the electronic device may obtain the second image according to the first image and the first object, wherein the pixel values of the pixels in the region where the first object is located in the second image are different from the pixel values of other pixels in the second image.
  • the electronic device can determine whether the posture of the first object is the first posture.
  • the electronic device can perform binarization processing on the first image to obtain the second image, and distinguish the area where the first object is located in the first image from the area that does not contain the first object.
  • the aforementioned second image may reflect the outline of the first object.
  • when the first object maintains a different posture, the outline of the first object is also different.
  • the electronic device can improve the accuracy of posture judgment by combining the above-mentioned first group of human body key points and the above-mentioned second image to judge the user's posture.
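The binarization step described above can be illustrated with a short sketch. This is a minimal illustration, not the application's implementation: the person-segmentation mask is assumed to come from any portrait segmentation model, and the 0.5 cutoff and function name are placeholder assumptions.

```python
# Minimal sketch of producing the "second image": a binary image in which
# pixels in the region containing the first object take one pixel value and
# all other pixels take another. Mask source and 0.5 cutoff are assumed.
import numpy as np

def binarize_person(mask: np.ndarray) -> np.ndarray:
    """mask: HxW array in [0, 1], higher values = more likely person."""
    # First pixel value (255) marks the first object's region,
    # second pixel value (0) marks everything else.
    return np.where(mask > 0.5, 255, 0).astype(np.uint8)
```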
  • the specific method for judging whether the posture of the first object is the first posture based on the first set of human body key points and the second image may be as follows: the electronic device may determine the first human body contour according to the second image, the first human body contour includes pixels at the junction of pixels with the first pixel value and pixels with the second pixel value on the second image, the pixels with the first pixel value on the second image are pixels in the area where the first object is located in the second image, and the pixels with the second pixel value in the second image are other pixels in the second image. According to the first group of human body key points and the first human body outline, the electronic device can determine whether the posture of the first object is the first posture.
  • the electronic device can determine the first human body outline from the above second image, and determine whether the user's posture is the first posture by combining the first set of human body key points and the first human body contour. The position of each edge point on the first human body contour is uniquely determined. The above method can improve the accuracy of posture judgment.
  • the method for judging whether the posture of the first object is the first posture according to the first group of human body key points and the first human body contour may be as follows: the electronic device may determine a first group of edge points on the first human body contour according to the first group of human body key points, and judge whether the posture of the first object is the first posture according to the first group of edge points.
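A sketch of recovering the first human body contour from the binary image follows. It is a hypothetical implementation using OpenCV's findContours, which returns exactly the boundary pixels at the junction of the two pixel values; taking the largest contour as the first object's outline is an assumption.

```python
# Sketch: extract the "first human body contour" (the pixels at the junction
# of the two pixel values) from the binary second image.
import cv2
import numpy as np

def body_contour(binary: np.ndarray) -> np.ndarray:
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    largest = max(contours, key=cv2.contourArea)   # assume largest = person
    return largest.reshape(-1, 2)                  # (N, 2) (x, y) edge points
```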
  • the electronic device may further display a first interface, the first interface includes first prompt information, and the first prompt information is used to remind the first object to maintain the first posture.
  • the electronic device may further display a second interface, the second interface includes second prompt information, the second prompt information is determined based on the first difference between the posture of the first object and the first posture, and the second prompt information is used to prompt the first object to eliminate the first difference.
  • the electronic device may display relevant prompt information on the interface to prompt the first object to maintain the first posture.
  • the electronic device may guide the user to complete an action corresponding to the first posture according to the difference between the posture of the first object and the first posture, so as to maintain the first posture.
  • the above method can help the user maintain the first posture.
  • in addition to displaying relevant prompt information on the interface, the electronic device may also prompt the user through voice prompts and other means to adjust the posture and complete the action corresponding to the first posture.
  • before displaying the first interface, the electronic device may further display a third interface, where the third interface includes the first option.
  • the electronic device receives a first operation on a first option.
  • the foregoing first interface may be displayed by the electronic device based on the first operation.
  • the electronic device may prompt the user to maintain the first posture according to the user's selection of the above-mentioned first option.
  • the above-mentioned first option may be an option for assessing one or more of the following posture problems: high and low shoulders, X-shaped legs, O-shaped legs, XO-shaped legs, and scoliosis.
  • the electronic device may display the above-mentioned first interface to prompt the user to maintain the first posture.
  • the first posture can be, for example, facing the camera (that is, the body is facing the camera), the arms are placed on both sides of the body and a certain distance is kept from the body, and the legs are naturally upright and close together.
  • the above-mentioned first option may be an option for evaluating hunchback.
  • the electronic device may display the above-mentioned first interface to prompt the user to maintain the first posture.
  • the first posture may be, for example, a posture in which the side of the body faces the camera and stands naturally.
  • the electronic device may instruct the user to complete corresponding actions according to the user's selection.
  • the user need only complete the action corresponding to this posture, without completing the actions required to evaluate other posture problems.
  • the electronic device may determine whether the first object has the first posture problem according to the first set of key points of the human body and the second image.
  • the electronic device may determine the first human body contour of the first object according to the second image. According to the above-mentioned first set of human body key points and the first human body outline, the electronic device can determine whether the first object has the first body posture problem.
  • the electronic device may determine a first group of edge points on the first human body contour according to the first group of key points of the human body, and determine whether the first subject has the first posture problem according to the first group of edge points.
  • the electronic device uses the first group of edge points to determine whether the user's posture is a posture with a posture problem, and can thus more accurately determine whether the user has a posture problem. This can help users better understand their posture and correct posture problems in time.
  • the electronic device may further display a fourth interface, the fourth interface includes third prompt information, and the third prompt information is used to prompt the first object to maintain the second posture.
  • the above-mentioned third interface further includes a second option.
  • before displaying the fourth interface, the electronic device also receives a second operation on the second option.
  • the foregoing fourth interface may be displayed by the electronic device based on the second operation.
  • the electronic device may directly display the above fourth interface.
  • the electronic device may also capture a third image through the camera.
  • the electronic device can identify the first object according to the third image, and determine a second group of key points of the human body of the first object. According to the second group of key points of the human body, the electronic device can determine whether the posture of the first object is the second posture. The second posture is different from the first posture.
  • the electronic device may display a fifth interface.
  • the above-mentioned fifth interface may include fourth prompt information.
  • the above fourth prompt information may be determined according to the second difference between the posture of the first object and the second posture.
  • the above-mentioned fourth prompt information is used to prompt the first object to eliminate the second difference.
  • the electronic device may obtain a fourth image according to the third image and the first object, wherein the pixel values of the pixels in the area where the first object is located in the fourth image are different from the pixel values of other pixels in the fourth image.
  • the electronic device can determine whether the first subject has a second posture problem, and the second posture problem is different from the first posture problem.
  • the electronic device can judge the posture of the first object by combining the above-mentioned second group of human body key points and the fourth image, so as to determine whether the first object has a posture problem. This can improve the accuracy of the posture assessment results, thereby helping users better understand their own posture and correct posture problems in time.
  • the electronic device may determine the second human body contour according to the foregoing fourth image.
  • the above-mentioned second human body contour includes pixels at the junction of pixels with the first pixel value and pixels with the second pixel value on the fourth image.
  • the pixels with the first pixel value in the fourth image are the pixels in the area where the first object is located in the fourth image.
  • the pixels with the second pixel value in the fourth image are other pixels in the fourth image (that is, the pixels in the area not containing the first object in the fourth image).
  • the electronic device can determine whether the first subject has the second body posture problem according to the second group of key points of the human body and the above-mentioned second human body outline.
  • the electronic device may determine a second set of edge points on the second human body outline according to the second set of key points of the human body, and determine whether the first subject has the second posture problem according to the second set of edge points.
  • the first posture includes: placing both arms on both sides of the body and maintaining a distance from the waist within a first distance range.
  • the first group of key points of the human body includes: a first left wrist point and a first right wrist point.
  • the specific method for judging whether the posture of the first object is the first posture according to the first group of human body key points and the first human body contour can be as follows: the electronic device can judge whether there are six intersection points between the first straight line passing through the first left wrist point and the first right wrist point and the first human body contour. Six intersection points between the first straight line and the first human body contour indicate that the user's arms are placed on both sides of the body with the distance from the waist within the first distance range.
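The six-intersection test can be sketched as follows. This is a minimal illustration, not the application's implementation: intersections are approximated by sign changes of the side-of-line value along the ordered contour pixels, and pixels exactly on the line are skipped.

```python
import numpy as np

def count_line_contour_intersections(p1, p2, contour):
    """p1, p2: (x, y) key points; contour: (N, 2) ordered contour pixels."""
    p1 = np.asarray(p1, float)
    p2 = np.asarray(p2, float)
    d = p2 - p1
    # Side-of-line value (z-component of the cross product) per contour pixel.
    s = d[0] * (contour[:, 1] - p1[1]) - d[1] * (contour[:, 0] - p1[0])
    signs = np.sign(s)
    signs = signs[signs != 0]             # skip pixels exactly on the line
    # Each crossing of the closed contour flips the sign once.
    return int(np.sum(signs != np.roll(signs, 1)))

# Usage (names hypothetical): arms held clear of the waist when
# count_line_contour_intersections(left_wrist, right_wrist, contour) == 6
```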
  • the first posture includes: both legs are naturally upright and close together.
  • the first group of key points of the human body includes: the first left hip point, the first left knee point, the first left ankle point, the first right hip point, the first right knee point, and the first right ankle point.
  • the specific method for judging whether the posture of the first object is the first posture according to the first set of human body key points and the first human body contour may be as follows: based on the first left leg distance between the first left hip point and the first left knee point, the second left leg distance between the first left knee point and the first left ankle point, and the third left leg distance between the first left hip point and the first left ankle point, the electronic device may judge whether the first distance, obtained by adding the first left leg distance to the second left leg distance and subtracting the third left leg distance, is less than the first threshold.
  • based on the first right leg distance between the first right hip point and the first right knee point, the second right leg distance between the first right knee point and the first right ankle point, and the third right leg distance between the first right hip point and the first right ankle point, the electronic device may judge whether the second distance, obtained by adding the first right leg distance to the second right leg distance and subtracting the third right leg distance, is less than the first threshold. When the above-mentioned first distance and second distance are both smaller than the first threshold, it indicates that the user's legs are naturally upright.
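As a sketch, this straightness test reduces to a near-collinearity check: for a straight leg, the hip-to-knee plus knee-to-ankle distance barely exceeds the hip-to-ankle distance. The threshold value below is a placeholder assumption.

```python
import numpy as np

def leg_is_straight(hip, knee, ankle, threshold=5.0):
    """True when (hip-knee) + (knee-ankle) - (hip-ankle) < threshold."""
    hip, knee, ankle = (np.asarray(p, float) for p in (hip, knee, ankle))
    d1 = np.linalg.norm(knee - hip)     # first leg distance
    d2 = np.linalg.norm(ankle - knee)   # second leg distance
    d3 = np.linalg.norm(ankle - hip)    # third leg distance
    return (d1 + d2) - d3 < threshold   # small slack: points nearly collinear
```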
  • the electronic device may determine whether there are two intersection points between the first line segment between the first left knee point and the first right knee point and the first human body contour. When there are two intersection points between the first line segment and the first human body contour, the electronic device may determine whether a third distance between the two intersection points between the first line segment and the first human body contour is smaller than a second threshold. The electronic device can determine whether there are two intersection points between the second line segment between the first left ankle point and the first right ankle point and the first human body contour. When there are two intersection points between the second line segment and the first human body contour, the electronic device may determine whether a fourth distance between the two intersection points between the second line segment and the first human body contour is smaller than a third threshold.
  • when there are fewer than two intersection points between the first line segment and the first human body contour, or the third distance is less than the second threshold, or there are fewer than two intersection points between the second line segment and the first human body contour, or the fourth distance is smaller than the third threshold, it may indicate that the user's legs are close together.
  • the electronic device may determine that the user's legs have been brought together. For example, when a user with any of the normal, XO-shaped, or X-shaped leg types brings the legs together, the number of intersections between the first line segment and the first human body contour is less than 2, or the above third distance is less than the second threshold. In this way, the electronic device does not need to judge whether there is an intersection point between the second line segment and the first human body contour, which improves the efficiency of the judgment.
  • the electronic device may determine whether the user's legs are close together by using the first left ankle point and the first right ankle point. For example, a user with O-shaped legs does not satisfy the condition that the number of intersections between the first line segment and the first human body contour is less than 2, or that the third distance is less than the second threshold, but does meet the condition that the number of intersections between the second line segment and the first human body contour is less than 2, or that the fourth distance is less than the third threshold.
  • the electronic device may first determine whether there are two intersection points between the second line segment and the first human body contour. In a case where there are two intersection points between the second line segment and the first human body contour, the electronic device may determine whether the fourth distance between the two intersection points is smaller than the third threshold. When it is determined that the fourth distance is greater than or equal to the third threshold, the electronic device may determine whether there are two intersection points between the first line segment and the first human body contour. In a case where there are two intersection points between the first line segment and the first human body contour, the electronic device may determine whether the third distance between the two intersection points is smaller than the second threshold.
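The legs-together test can be sketched by finding where the knee-to-knee (or ankle-to-ankle) segment crosses the contour and measuring the gap between the crossings. The pixel tolerance and thresholds are assumed placeholders; a production version would cluster the hit pixels into the two intersection points rather than taking the overall span.

```python
import numpy as np

def segment_contour_hits(a, b, contour, tol=1.0):
    """Contour pixels lying within `tol` pixels of segment a-b."""
    a = np.asarray(a, float)
    b = np.asarray(b, float)
    d = b - a
    length = np.linalg.norm(d)
    # Perpendicular distance from each contour pixel to the infinite line.
    dist = np.abs(d[0] * (contour[:, 1] - a[1]) -
                  d[1] * (contour[:, 0] - a[0])) / length
    # Projection parameter t keeps only pixels between the endpoints.
    t = ((contour - a) @ d) / (length * length)
    return contour[(dist <= tol) & (t > 0.0) & (t < 1.0)]

def legs_together(knee_l, knee_r, contour, gap_threshold=15.0):
    hits = segment_contour_hits(knee_l, knee_r, contour)
    if len(hits) < 2:          # fewer than 2 intersections: no inner gap
        return True
    # Approximate gap: span between the crossing regions.
    gap = np.linalg.norm(hits.max(axis=0) - hits.min(axis=0))
    return gap < gap_threshold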
  • the electronic device may also determine whether the user's body is facing the camera according to the first group of key points of the human body.
  • the first group of key points of the human body may include a first left hip point and a first right hip point.
  • the electronic device can determine whether the distance between the first left hip point and the first right hip point is greater than a preset distance threshold.
  • the distance between the first left hip point and the first right hip point is greater than a preset distance threshold, which may indicate that the user's body is facing the camera.
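A sketch of this orientation check: when the body faces the camera, the two hip key points are far apart on the image, while side-on they nearly coincide. The distance threshold is an assumed placeholder, not a value from the application.

```python
import numpy as np

def facing_camera(hip_l, hip_r, threshold=40.0):
    """True when the hip-to-hip pixel distance exceeds the preset threshold."""
    hip_l = np.asarray(hip_l, float)
    hip_r = np.asarray(hip_r, float)
    return np.linalg.norm(hip_l - hip_r) > threshold
```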
  • the first posture problem may include one or more of the following: high and low shoulders, X-shaped legs, O-shaped legs, XO-shaped legs, and scoliosis.
  • the first group of key points of the human body includes a first left shoulder point and a first right shoulder point
  • the first group of edge points includes a first left shoulder edge point and a first right shoulder edge point
  • the first left shoulder edge point is determined according to the left shoulder area of the first left shoulder point on the first human body contour
  • the first right shoulder edge point is determined according to the right shoulder area of the first right shoulder point on the first human body contour
  • the electronic device may determine the positions of the first group of key points of the human body and edge points (that is, pixels) on the first human body outline in the same pixel coordinate system.
  • the electronic device may determine, on the straight line A that passes through the above-mentioned first left shoulder point and is perpendicular to the horizontal line, an edge point that has a value on the vertical axis (ie, the y-axis) of the pixel coordinate system on the first human body contour that is greater than the value on the y-axis of the first left shoulder point and that is closest to the first left shoulder point.
  • the electronic device may determine this edge point as the first left shoulder edge point.
  • the direction of the above-mentioned straight line A may be the direction of the y-axis.
  • similarly, on the straight line that passes through the above-mentioned first right shoulder point and is perpendicular to the horizontal line, the electronic device determines the edge point on the first human body contour whose value on the y-axis of the pixel coordinate system is greater than the value on the y-axis of the first right shoulder point and that is closest to the first right shoulder point.
  • the electronic device may determine this edge point as the first right shoulder edge point.
  • the specific method for judging whether the first object has the first posture problem based on the above first group of human body key points and the second image can be as follows: determine the first included angle between the line on which the first left shoulder edge point and the first right shoulder edge point are located and the horizontal line, and judge whether the first included angle is greater than the first angle threshold.
  • the fact that the above-mentioned first included angle is greater than the first angle threshold may indicate that the user has high and low shoulders.
  • the electronic device can determine the edge points on the shoulders of the human body contour according to the human body key points, and use these edge points to compare the direction of the shoulders of the user under posture assessment with the direction of the shoulders of a human body with normal posture, obtaining the user's high and low shoulder evaluation result. Since the positions of the edge points are uniquely determined, the above method can reduce the influence of position fluctuations of the detected human body key points on the high and low shoulder evaluation result, and improve its accuracy.
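A sketch of the high-and-low-shoulder check based on the edge points just described; the x tolerance used to approximate "on the vertical line" and the angle threshold are assumed placeholders, and y is taken as the image vertical axis per the description.

```python
import numpy as np

def shoulder_edge_point(shoulder, contour, x_tol=1.5):
    """Nearest contour pixel on the vertical line through `shoulder` whose
    y value exceeds the key point's y, as described above."""
    shoulder = np.asarray(shoulder, float)
    on_line = contour[np.abs(contour[:, 0] - shoulder[0]) <= x_tol]
    above = on_line[on_line[:, 1] > shoulder[1]]
    return above[np.argmin(above[:, 1] - shoulder[1])]

def has_high_low_shoulders(l_sh, r_sh, contour, angle_threshold_deg=2.0):
    le = shoulder_edge_point(l_sh, contour)
    re = shoulder_edge_point(r_sh, contour)
    # First included angle between the edge-point line and the horizontal.
    angle = np.degrees(np.arctan2(abs(le[1] - re[1]), abs(le[0] - re[0])))
    return angle > angle_threshold_deg
```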
  • the first group of human body key points includes the first left knee point, the first right knee point, the first left ankle point and the first right ankle point
  • the first group of edge points includes the first left knee inner edge point, the first right knee inner edge point, the first left foot inner edge point, the first right foot inner edge point, and M pairs of calf inner edge points
  • the first left foot inner edge point and the first right foot inner edge point are the intersection points of the line segment between the first left ankle point and the first right ankle point with the first human body contour
  • a pair of calf inner edge points among the M pairs of calf inner edge points includes a left calf inner edge point and a right calf inner edge point at the same height
  • the left calf inner edge point is the pixel between the first left knee inner edge point and the first left foot inner edge point on the first human body contour
  • the right calf inner edge point is the pixel between the first right knee inner edge point and the first right foot inner edge point on the first human body contour
  • M is a positive integer
  • the specific method for judging whether the first object has the first posture problem based on the first group of human body key points and the second image above can be as follows: determining the fifth distance between the first left knee inner edge point and the first left foot inner edge point, and the sixth distance between the first left knee inner edge point and the first right knee inner edge point, and judging whether the first ratio of the sixth distance to the fifth distance is greater than the fourth threshold.
  • the above-mentioned first ratio greater than the fourth threshold may indicate that the user has O-shaped legs.
  • if the first ratio is not greater than the fourth threshold, determine the seventh distance between the first left foot inner edge point and the first right foot inner edge point, and judge whether the second ratio of the seventh distance to the fifth distance is greater than the fifth threshold.
  • the aforementioned second ratio greater than the fifth threshold may indicate that the user has X-shaped legs.
  • if the second ratio is not greater than the fifth threshold, the electronic device may further determine the largest of the distances between the M pairs of calf inner edge points and judge whether its ratio to the fifth distance is greater than a sixth threshold; a ratio greater than the sixth threshold may indicate that the user has XO-shaped legs. Otherwise, the electronic device may determine that the user's leg shape is a normal leg shape.
  • if the line segment between the first left knee point and the first right knee point does not intersect the first human body contour, the value of the sixth distance may be 0. If the line segment between the first left ankle point and the first right ankle point does not intersect the first human body contour, then the value of the seventh distance may be 0.
  • the electronic device can perform normalization processing on the distance between the edge points on the inner sides of the knees, the distance between the edge points on the inner ankles of the feet, and the distance between the edge points on the inner sides of the left calf and right calf. That is, the user's leg shape is judged by the ratios of the above-mentioned distances to the fifth distance.
  • the above normalization processing can make the above method for evaluating leg shape applicable to different users. Generally, if the legs of two users are O-shaped legs of the same degree, when the legs are naturally upright and close together, the longer the legs of the user, the greater the distance between the knees.
  • if the legs of two users are X-shaped legs of the same degree, when the legs are naturally upright and close together, the longer the user's legs, the greater the distance between the inner ankles of the feet. If the legs of two users are XO-shaped legs of the same degree, when the legs are naturally upright and close together, the longer the user's legs, the greater the distance between the left calf and the right calf. The above method can therefore evaluate the user's leg shape more accurately.
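The normalized ratio tests can be gathered into one sketch. The decision order (O, then X, then XO) and all threshold values below are assumptions for illustration; the normalizer is the fifth distance (first left knee inner edge point to first left foot inner edge point), as described above.

```python
import numpy as np

def _dist(a, b):
    return float(np.linalg.norm(np.asarray(a, float) - np.asarray(b, float)))

def classify_leg_shape(knee_in_l, knee_in_r, foot_in_l, foot_in_r,
                       calf_pairs, o_thr=0.15, x_thr=0.15, xo_thr=0.10):
    fifth = _dist(knee_in_l, foot_in_l)      # normalizer: lower-leg length
    sixth = _dist(knee_in_l, knee_in_r)      # knee gap (0 if no crossing)
    seventh = _dist(foot_in_l, foot_in_r)    # ankle gap (0 if no crossing)
    if sixth / fifth > o_thr:
        return "O-shaped"
    if seventh / fifth > x_thr:
        return "X-shaped"
    # M pairs of calf inner edge points, each pair at the same height.
    calf_gap = max((_dist(l, r) for l, r in calf_pairs), default=0.0)
    if calf_gap / fifth > xo_thr:
        return "XO-shaped"
    return "normal"
```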
  • the first set of key points of the human body includes a first left shoulder point, a first right shoulder point, a first left hip point, and a first right hip point.
  • the first set of edge points includes a first left shoulder edge point, a first right shoulder edge point, a first left hip edge point, and a first right hip edge point.
  • the first left shoulder edge point is determined according to the left shoulder area of the first left shoulder point on the first human body contour.
  • the first left hip edge point is determined according to the first left hip point and the left waist area on the first human body contour.
  • the first right hip edge point is determined according to the first right hip point and the right waist area on the first human body contour.
  • the specific method for judging whether the first object has the first posture problem based on the first group of human body key points and the second image above can be as follows: determine the eighth distance between the first left shoulder edge point and the first left hip edge point, and determine the ninth distance between the first right shoulder edge point and the first right hip edge point. It is judged whether a third ratio of the smaller one to the larger one of the eighth distance and the ninth distance is greater than a seventh threshold.
  • the aforementioned third ratio greater than the seventh threshold may indicate that the user has scoliosis.
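A sketch of computing the third ratio used by the scoliosis check; the comparison against the seventh threshold then proceeds as described above. Function and parameter names are hypothetical.

```python
import numpy as np

def scoliosis_third_ratio(l_sh_edge, l_hip_edge, r_sh_edge, r_hip_edge):
    d = lambda a, b: float(np.linalg.norm(np.asarray(a, float) -
                                          np.asarray(b, float)))
    eighth = d(l_sh_edge, l_hip_edge)   # left shoulder edge to left hip edge
    ninth = d(r_sh_edge, r_hip_edge)    # right shoulder edge to right hip edge
    # Third ratio: the smaller of the two distances over the larger.
    return min(eighth, ninth) / max(eighth, ninth)
```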
  • the first posture problem includes: hunchback.
  • the first set of edge points includes the edge points constituting the first curved segment in the upper back region of the first human body contour.
  • the specific method for judging whether the first object has the first posture problem according to the first group of human body key points and the second image can be as follows: determine the first area of the closed area formed by the first straight line segment and the first curved line segment between the two endpoints of the first curved line segment, determine the second area of the circle whose diameter is the first straight line segment, and determine whether the fourth ratio of the first area to the second area is greater than the eighth threshold.
  • the aforementioned fourth ratio greater than the eighth threshold may indicate that the user is hunchbacked.
  • the maximum curvature of the first curved segment being greater than the ninth threshold may also indicate that the user is hunched over.
  • the electronic device may determine the orientation of the body of the first object. For example, the electronic device may determine the orientation of the body of the first object according to the values of the first left ankle point and the first right ankle point on the y-axis of the pixel coordinate system. Wherein, if the y value of the left ankle point is smaller than the y value of the right ankle point, the electronic device may determine that the user's body is facing the negative direction of the x axis in the pixel coordinate system.
  • otherwise, the electronic device may determine that the user's body is facing the positive direction of the x axis in the pixel coordinate system. Further, the electronic device may determine the above-mentioned first curved segment in the back area of the first human body contour according to the first group of human body key points.
  • hunchback is mainly manifested as a bulge of the user's back.
  • the electronic device can first distinguish the chest area and the back area in the outline of the human body, and then select edge points in the back area to determine whether the user is hunchbacked. This can avoid confusing the edge points of the chest area and the back area during the judgment process, and improve the accuracy of the hunchback detection result.
  • the position of the edge point of the back area on the human body contour is determined, and the edge point of the back area can be used to more accurately determine whether the user has a hunchback and the severity of the hunchback.
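A sketch of the hunchback measure: the shoelace formula gives the area enclosed between the back curve segment and the straight segment joining its endpoints (closing an open polygon uses exactly that chord), which is then divided by the area of the circle whose diameter is that chord. The threshold value in the usage comment is a placeholder, not taken from the application.

```python
import numpy as np

def hunchback_fourth_ratio(curve: np.ndarray) -> float:
    """curve: (N, 2) ordered pixels of the first curved segment (upper back)."""
    x = curve[:, 0].astype(float)
    y = curve[:, 1].astype(float)
    # Shoelace area of the polygon formed by the curve closed with its chord.
    first_area = 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))
    chord = np.linalg.norm(curve[-1].astype(float) - curve[0].astype(float))
    second_area = np.pi * (chord / 2.0) ** 2   # circle with the chord as diameter
    return first_area / second_area

# Usage (hypothetical): is_hunchback = hunchback_fourth_ratio(back_curve) > 0.12
```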
  • the present application provides an electronic device, which may include a camera, a memory, and a processor, wherein the camera may be used to capture images, the memory may be used to store a computer program, and the processor may be used to invoke the computer program, so that the electronic device executes any possible implementation method in the first aspect.
  • the present application provides a computer-readable storage medium, including instructions.
  • when the instructions are run on an electronic device, the electronic device is caused to execute any possible implementation method in the first aspect.
  • the present application provides a computer program product.
  • the computer program product may include computer instructions.
  • when the computer instructions are run on an electronic device, the electronic device is caused to execute any possible implementation method in the first aspect.
  • the present application provides a chip, the chip is applied to an electronic device, and the chip includes one or more processors, and the processor is used to invoke computer instructions to make the electronic device execute any possible implementation method in the first aspect.
  • the electronic device provided by the second aspect, the computer-readable storage medium provided by the third aspect, the computer program product provided by the fourth aspect, and the chip provided by the fifth aspect are all used to execute the method provided by the embodiments of the present application. For the beneficial effects they can achieve, reference may be made to the beneficial effects of the corresponding method, which will not be repeated here.
  • Fig. 1 is a schematic diagram of different leg types provided by the embodiment of the present application.
  • Fig. 2 is a schematic diagram of key points of a human body provided by the embodiment of the present application.
  • Figure 3A and Figure 3B are scene diagrams for judging high and low shoulders provided by the embodiment of the present application.
  • FIG. 4 is a schematic structural diagram of an electronic device 100 provided in an embodiment of the present application.
  • FIG. 5 is a software structural block diagram of an electronic device 100 provided by an embodiment of the present application.
  • FIG. 6A is a flow chart of a method for collecting posture assessment images by the electronic device 100 provided in the embodiment of the present application;
  • FIG. 6B is a schematic diagram of an electronic device 100 performing image processing according to an embodiment of the present application.
  • FIG. 6C is a schematic diagram of a binarized image provided by the embodiment of the present application.
  • FIG. 6D is a schematic diagram of judging whether a user's posture is a posture evaluation posture provided by an embodiment of the present application.
  • FIGS. 7A to 7D are schematic diagrams of some scenes in which the electronic device 100 prompts the user to adjust posture, provided by the embodiment of the present application;
  • Figures 8A to 8I are schematic diagrams of some scenarios for assessing the user's posture provided by the embodiment of the present application.
  • FIG. 9 is another schematic diagram of judging whether a user's posture is a posture evaluation posture provided by an embodiment of the present application.
  • FIGS. 10A to 10C are schematic diagrams of other scenes in which the electronic device 100 prompts the user to adjust posture, provided by the embodiment of the present application;
  • FIG. 11A and FIG. 11B are schematic diagrams of other scenarios for body posture assessment provided by the embodiments of the present application.
  • FIGS. 12A to 12E are schematic diagrams of user interfaces of some body posture assessment results provided by the embodiments of the present application.
  • references to "one embodiment" or "some embodiments" or the like in this specification mean that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application.
  • appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," etc. in various places in this specification do not necessarily all refer to the same embodiment, but mean "one or more, but not all," unless specifically emphasized otherwise.
  • the terms "including", "comprising", "having" and variations thereof mean "including but not limited to", unless specifically stated otherwise.
  • the term "connected" includes both direct and indirect connections, unless otherwise stated. "First" and "second" are used for descriptive purposes only, and should not be understood as indicating or implying relative importance or implicitly specifying the quantity of indicated technical features.
  • the embodiment of the present application provides an image-based posture judgment method to help users understand their own posture problems. In this way, the user can correct his posture problems in time, so as to maintain good health.
  • High and low shoulders refer to the phenomenon that the height of the left and right shoulders of the human body is different.
  • the height of the shoulders of normal users is usually the same.
  • if the user's left shoulder is higher than the right shoulder or the right shoulder is higher than the left shoulder, the user has the posture problem of high and low shoulders. Improper backpack posture or prolonged use of a shoulder bag can lead to high and low shoulders.
  • FIG. 1 is a schematic diagram of normal leg, X-shaped leg, O-shaped leg and XO-shaped leg.
  • the line shape of the legs can reflect the shape of the user's legs. If the user has normal legs, when the legs are naturally upright and close together, the user's knees can be close to each other, the inner ankles of the feet can be close to each other, the inner sides of the left calf and right calf can be close to each other, and the lines of the legs are relatively upright and close to each other.
  • if the user has X-shaped legs, when the legs are naturally upright and close together, the user's knees can be close together, but the inner ankles of the feet are separated and cannot be brought together.
  • the line of the left leg and the right leg presents an "X" shape.
  • X-shaped legs are also known as "knee valgus".
  • generally, if the distance between the inner ankles of the two feet is more than 1.5 cm when the knees are close together, the legs can be considered X-shaped legs.
  • the above-mentioned closeness of the knees may include that the inner sides of the knees touch each other, and the distance between the inner sides of the knees is particularly small (for example, less than a preset distance threshold).
  • if the user has O-shaped legs, when the legs are naturally upright and close together, the inner ankles of the user's feet can be close to each other, but the knees are separated and cannot be brought together, and the lines of the left leg and the right leg present an "O" shape.
  • O-shaped legs are also known as "knee varus".
  • if the user has XO-shaped legs, when the legs are naturally upright and close together, the user's knees can be close to each other and the inner ankles of both feet can be close to each other, but the left calf and right calf are separated and cannot be brought together, with the calves turned outward.
  • the user may have leg deformities such as the above-mentioned X-shaped legs, O-shaped legs, or XO-shaped legs.
  • Scoliosis refers to a sideways (eg, left or right) curvature of a person's spine, and the curved shapes include S-shape and C-shape.
  • the spine of a user with normal posture is usually upright when viewed from the user's front or back.
  • Hunchback refers to the shape change caused by thoracic kyphosis, and is a form of spinal deformation. A hunchback appears as a raised back with excessive backward curvature.
  • posture problems can also include neck protraction, pelvic forward tilt, long or short legs, knee hyperextension, etc.
  • the electronic device may determine the posture of the user by identifying key points of the human body.
  • the key points of the human body can also be called human bone nodes, human bone points, and the like.
  • FIG. 2 exemplarily shows a position distribution diagram of key points of a human body provided by an embodiment of the present application.
  • the key points of the human body can include: head point, neck point, left shoulder point, right shoulder point, right elbow point, left elbow point, right wrist point, left wrist point, abdomen point, right hip point, left hip point, middle point of the left and right hips, right knee point, left knee point, right ankle point, and left ankle point. The embodiments of the present application are not limited to the above-mentioned human body key points, and other human body key points may also be included.
  • the above key points of the human body may be two-dimensional (2 dimensions, 2D) key points, or three-dimensional (3 dimensions, 3D) key points.
  • the aforementioned 2D key points may represent key points distributed on a 2D plane.
  • the above-mentioned 2D plane may be an image plane where an image for identifying key points of a human body is located.
  • the aforementioned 3D key points may represent key points distributed in 3D space.
  • 3D keypoints also include depth information. The above depth information can reflect the distance of the key points of the human body relative to the camera that collects the image.
  • an electronic device may utilize a human body key point detection model to identify human body key points.
  • the above human body key point detection model may be a model based on a neural network. When an image containing a portrait is input into the human body key point detection model, the model can output the position information of that portrait's human body key points on the image.
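As an illustration of such a model, the sketch below runs an off-the-shelf pose estimator on an image. The application does not name a model; MediaPipe Pose is used here only as a stand-in for "a neural-network-based human body key point detection model".

```python
import cv2
import mediapipe as mp

def detect_keypoints(image_path: str):
    """Return the portrait's key points as (x, y) pixel coordinates, or None."""
    image = cv2.imread(image_path)
    rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
    with mp.solutions.pose.Pose(static_image_mode=True) as pose:
        result = pose.process(rgb)
    if result.pose_landmarks is None:
        return None
    h, w = image.shape[:2]
    # Landmarks are normalized to [0, 1]; scale to the image plane.
    return [(lm.x * w, lm.y * h) for lm in result.pose_landmarks.landmark]
```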
  • the key points of the human body can reflect the posture of the human body.
  • if the user has a posture problem, the posture of the user is usually different from that of a human body with a normal posture.
  • the above-mentioned human body with a normal posture can refer to a human body without posture problems such as high and low shoulders, X-shaped legs, O-shaped legs, XO-shaped legs, scoliosis, and hunchback.
  • the electronic device can identify key points of the user's human body by collecting an image of the user in a state of standing naturally with legs close together. According to the above key points of the human body, the electronic device can determine whether the user's posture is a posture with a specific posture problem.
  • the above-mentioned images used to identify the user's human body key points may include a front view and a side view of the user.
  • the foregoing front view may be an image collected by the camera when the user faces the camera.
  • the above side view may be an image captured by the camera when the angle between the direction of the user's body plane and the direction of the optical axis of the camera is less than a preset angle (such as 5°, etc.).
  • the electronic device can determine whether the user has high or low shoulders according to the points of the user's left shoulder and right shoulder. Wherein, if the height difference between the left shoulder point and the right shoulder point is greater than a preset high and low shoulder threshold (such as 0.5 cm, etc.), the electronic device can determine that the user has high and low shoulders.
  • the embodiment of the present application does not limit the aforementioned high and low shoulder thresholds.
  • the electronic device can judge whether the user has X-shaped legs according to the distance between the left ankle point and the right ankle point. Wherein, if the distance between the left ankle point and the right ankle point is greater than a preset X-shaped leg threshold (such as 1.5 cm, etc.), the electronic device can determine that the user has X-shaped legs.
  • the embodiment of the present application does not limit the above X-shaped leg threshold.
  • the electronic device can judge whether the user has O-shaped legs according to the distance between the left knee point and the right knee point. Wherein, if the distance between the left knee point and the right knee point is greater than a preset O-shaped leg threshold (such as 2 cm, etc.), the electronic device can determine that the user has O-shaped legs.
  • the embodiment of the present application does not limit the O-shaped leg threshold.
  • the electronic device can judge whether the user has XO-shaped legs according to the distance between the line segment obtained by connecting the left knee point and the left ankle point, and the line segment obtained by connecting the right knee point and the right ankle point. If the minimum distance between the line segment obtained by connecting the left knee point and the left ankle point, and the line segment obtained by connecting the right knee point and the right ankle point is greater than the preset XO-type leg threshold (such as 2 cm, etc.), the electronic device can determine that the user has XO-type legs.
  • the embodiment of the present application does not limit the above-mentioned XO leg threshold.
  • the electronic device can determine whether the user has scoliosis according to the connection line between the neck point and the abdomen point, and the connection line between the abdomen point and the middle points of the left and right hips. If the included angle between the line connecting the neck point and the abdomen point, and the line connecting the abdomen point and the middle points of the left and right hips is smaller than a preset scoliosis threshold (such as 175°), the electronic device can determine that the user has scoliosis.
  • the electronic device can judge whether the user is hunched according to the key points of the user's human body on the side view. Specifically, the electronic device may identify the user's neck point and back point according to the user's side view. The electronic device can determine whether the included angle between the line between the neck point and the back point and the straight line in the vertical direction is greater than a preset hunchback threshold (such as 10°, etc.). If the angle between the line between the neck point and the back point and the straight line in the vertical direction is greater than the hunchback threshold, the electronic device may determine that the user is hunchbacked.
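These key-point-only checks all reduce to simple distance or angle comparisons; the hunchback variant, for instance, can be sketched as the angle between the neck-to-back line and the vertical. The 10° default mirrors the example threshold above; the back key point source and function name are assumptions.

```python
import numpy as np

def hunchback_by_keypoints(neck, back, threshold_deg=10.0):
    """True when the neck-to-back line tilts more than threshold from vertical."""
    neck = np.asarray(neck, float)
    back = np.asarray(back, float)
    v = back - neck
    cos = abs(v[1]) / np.linalg.norm(v)     # y-axis taken as image vertical
    return np.degrees(np.arccos(np.clip(cos, 0.0, 1.0))) > threshold_deg
```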
  • in practice, each part of the human body is a region rather than a precise point.
  • the electronic device detects the image containing the human body 1 by using the human body key point detection model, and obtains the left shoulder point and the right shoulder point of the human body 1.
  • the left shoulder point may represent the left shoulder of the human body 1.
  • the right shoulder point may represent the right shoulder of the human body 1.
  • Any point in area 1 of human body 1 on the image can be used to represent the left shoulder point, and any point in area 2 can be used to represent the right shoulder point. That is, the position of the left shoulder point recognized by the electronic device may fluctuate in area 1, and the position of the right shoulder point may fluctuate in area 2.
  • in one case, the positions of the left shoulder point and the right shoulder point of the human body 1 recognized by the electronic device are shown in FIG. 3A.
  • the left shoulder point is at the same height as the right shoulder point, so the electronic device can determine that the human body 1 does not have high and low shoulders.
  • the electronic device recognizes the positions of the left shoulder point and the right shoulder point of the human body 1 as shown in FIG. 3B .
  • the height of the left shoulder point is different from that of the right shoulder point.
  • the right shoulder point is higher than the left shoulder point, and the angle between the line connecting the right shoulder point and the left shoulder point and the horizontal line is θ1. The electronic device can then determine that the human body 1 has high and low shoulders, with the right shoulder higher than the left shoulder.
  • the electronic device may obtain different results when evaluating the body posture of the same user by using the key points of the human body. It is possible that the user has a posture problem, but since the positions of the key points of the human body fluctuate, the electronic device may determine that the user has no posture problem according to the key points of the human body. It is possible that the user does not have a posture problem, but because the positions of the key points of the human body fluctuate, the electronic device may determine that the user has a posture problem based on the key points of the human body.
  • with the errors caused by the above posture judgment method, it is difficult to help users accurately understand their own posture and correct posture problems in time.
  • the present application provides an image-based posture judgment method.
  • the electronic device can collect images of the user, and determine the key points and contours of the user's human body according to the images.
  • the electronic device can combine the key points of the human body and the outline of the human body to determine whether the posture of the user is a preset posture. Compared with judging the posture only through the key points of the human body, the above method can improve the accuracy of judging the posture.
  • the aforementioned preset posture may be a posture evaluation posture used to evaluate a specified body posture problem.
  • the electronic device can determine the posture of the user according to the key points of the human body and the edge points on the contour of the human body, and guide the user to complete the action corresponding to the above posture evaluation posture according to the difference between the user's posture and the posture evaluation posture.
  • the electronic device can perform image recognition on an image containing the user maintaining the above-mentioned posture assessment posture, and obtain the human body key points and human body contour of the user while maintaining the posture assessment posture.
  • the electronic device can use the key points of the user's body and the outline of the human body to determine whether the user's posture is a posture with a posture problem, thereby realizing the posture assessment of the user.
  • while the positions of the detected human body key points may fluctuate, the position of each edge point on the contour of the human body is uniquely determined.
  • the user's body contour can reflect the user's posture. If the user has a posture problem, the posture of the user is usually different from that of a human body with a normal posture. The above-mentioned method of posture judgment combining the human body key points and the human body contour can therefore improve the accuracy of the body posture evaluation result, thereby helping users better understand their own posture and correct posture problems in time.
  • the image-based posture judgment method includes a posture assessment method.
  • the implementation process of the posture assessment method will be described in detail in subsequent embodiments of the present application.
  • FIG. 4 shows a schematic structural diagram of the electronic device 100.
  • the electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (universal serial bus, USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, a button 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a subscriber identification module (subscriber identification module, SIM) card interface 195, etc.
  • the sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
  • the structure illustrated in the embodiment of the present application does not constitute a specific limitation on the electronic device 100.
  • the electronic device 100 may include more or fewer components than shown in the figure, or combine certain components, or separate certain components, or arrange different components.
  • the illustrated components can be realized in hardware, software or a combination of software and hardware.
  • the processor 110 may include one or more processing units, for example: the processor 110 may include an application processor (application processor, AP), a modem processor, a graphics processing unit (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), a controller, a memory, a video codec, a digital signal processor (digital signal processor, DSP), a baseband processor, and/or a neural network processor (neural-network processing unit, NPU), etc. Wherein, different processing units may be independent devices, or may be integrated in one or more processors.
  • the controller may be the nerve center and command center of the electronic device 100.
  • the controller can generate an operation control signal according to the instruction opcode and timing signal, and complete the control of fetching and executing the instruction.
  • the USB interface 130 is an interface conforming to the USB standard specification, specifically, it can be a Mini USB interface, a Micro USB interface, a USB Type C interface, and the like.
  • the USB interface 130 can be used to connect a charger to charge the electronic device 100, and can also be used to transmit data between the electronic device 100 and peripheral devices.
  • the charging management module 140 is configured to receive a charging input from a charger. While the charging management module 140 is charging the battery 142, it can also supply power to the electronic device through the power management module 141.
  • the power management module 141 is used for connecting the battery 142, the charging management module 140 and the processor 110.
  • the power management module 141 receives the input from the battery 142 and/or the charging management module 140 to provide power for the processor 110, the internal memory 121, the external memory, the display screen 194, the camera 193, and the wireless communication module 160.
  • the wireless communication function of the electronic device 100 can be realized by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like.
  • Antenna 1 and Antenna 2 are used to transmit and receive electromagnetic wave signals.
  • Each antenna in electronic device 100 may be used to cover single or multiple communication frequency bands.
  • the mobile communication module 150 can provide wireless communication solutions including 2G/3G/4G/5G applied on the electronic device 100.
  • the wireless communication module 160 can provide wireless communication solutions applied on the electronic device 100, including wireless local area networks (wireless local area networks, WLAN) (such as wireless fidelity (wireless fidelity, Wi-Fi) networks), Bluetooth (bluetooth, BT), global navigation satellite system (global navigation satellite system, GNSS), frequency modulation (frequency modulation, FM), near field communication (near field communication, NFC), infrared technology (infrared, IR), and other wireless communication solutions.
  • the wireless communication module 160 may be one or more devices integrating at least one communication processing module.
  • the wireless communication module 160 receives electromagnetic waves via the antenna 2, frequency-modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 110.
  • the wireless communication module 160 can also receive the signal to be sent from the processor 110, frequency-modulate it, amplify it, and convert it into electromagnetic waves through the antenna 2 for radiation.
  • the electronic device 100 realizes the display function through the GPU, the display screen 194, and the application processor.
  • the GPU is a microprocessor for image processing, and is connected to the display screen 194 and the application processor. GPUs are used to perform mathematical and geometric calculations for graphics rendering.
  • Processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
  • the display screen 194 is used to display images, videos and the like.
  • the display screen 194 includes a display panel.
  • the display panel can adopt a liquid crystal display (liquid crystal display, LCD), an organic light-emitting diode (organic light-emitting diode, OLED), an active-matrix organic light-emitting diode (active-matrix organic light emitting diode, AMOLED), a flexible light-emitting diode (flex light-emitting diode, FLED), a Mini-LED, a Micro-LED, a Micro-OLED, quantum dot light-emitting diodes (quantum dot light emitting diodes, QLED), etc.
  • the electronic device 100 may include 1 or N display screens 194, where N is a positive integer greater than 1.
  • the electronic device 100 can realize the shooting function through the ISP, the camera 193, the video codec, the GPU, the display screen 194 and the application processor.
  • the ISP is used for processing the data fed back by the camera 193 .
• Light is transmitted through the lens to the photosensitive element of the camera, which converts the optical signal into an electrical signal and transmits it to the ISP for processing, where it is converted into an image visible to the naked eye.
  • ISP can also perform algorithm optimization on image noise, brightness, and skin color.
  • ISP can also optimize the exposure, color temperature and other parameters of the shooting scene.
  • the ISP may be located in the camera 193 .
  • Camera 193 is used to capture still images or video.
  • the object generates an optical image through the lens and projects it to the photosensitive element.
  • the photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor.
  • the photosensitive element converts the light signal into an electrical signal, and then transmits the electrical signal to the ISP to convert it into a digital image signal.
  • the ISP outputs the digital image signal to the DSP for processing.
  • DSP converts digital image signals into standard RGB, YUV and other image signals.
  • the electronic device 100 may include 1 or N cameras 193 , where N is a positive integer greater than 1.
  • Digital signal processors are used to process digital signals. In addition to digital image signals, they can also process other digital signals. For example, when the electronic device 100 selects a frequency point, the digital signal processor is used to perform Fourier transform on the energy of the frequency point.
  • Video codecs are used to compress or decompress digital video.
  • the electronic device 100 may support one or more video codecs.
  • the NPU is a neural-network (NN) computing processor.
  • Applications such as intelligent cognition of the electronic device 100 can be realized through the NPU, such as: image recognition, human key point detection, portrait segmentation, speech recognition, text understanding, and the like.
  • the external memory interface 120 can be used to connect an external memory card, such as a Micro SD card, so as to expand the storage capacity of the electronic device 100.
  • the internal memory 121 may be used to store computer-executable program codes including instructions.
  • the processor 110 executes various functional applications and data processing of the electronic device 100 by executing instructions stored in the internal memory 121 .
  • the internal memory 121 may include an area for storing programs and an area for storing data.
  • the stored program area can store an operating system, at least one application program required by a function (such as a sound playing function, an image playing function, etc.) and the like.
  • the storage data area can store data created during the use of the electronic device 100 (such as audio data, phonebook, etc.) and the like.
  • the internal memory 121 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, flash memory device, universal flash storage (universal flash storage, UFS) and the like.
• The electronic device 100 can implement audio functions, such as music playback and recording, through the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the earphone interface 170D, and the application processor.
  • the audio module 170 is used to convert digital audio information into analog audio signal output, and is also used to convert analog audio input into digital audio signal.
  • the audio module 170 may also be used to encode and decode audio signals.
  • the audio module 170 may be set in the processor 110 , or some functional modules of the audio module 170 may be set in the processor 110 .
• The speaker 170A, also referred to as a "horn", is used to convert audio electrical signals into sound signals.
• The receiver 170B, also called an "earpiece", is used to convert audio electrical signals into sound signals.
• The microphone 170C, also called a "mike", is used to convert sound signals into electrical signals.
  • the earphone interface 170D is used for connecting wired earphones.
  • the pressure sensor 180A is used to sense the pressure signal and convert the pressure signal into an electrical signal.
  • the gyro sensor 180B can be used to determine the motion posture of the electronic device 100 .
  • the air pressure sensor 180C is used to measure air pressure.
  • the magnetic sensor 180D includes a Hall sensor.
  • the acceleration sensor 180E can detect the acceleration of the electronic device 100 in various directions (generally three axes).
  • the distance sensor 180F is used to measure the distance.
  • the electronic device 100 may measure the distance by infrared or laser. In some embodiments, when shooting a scene, the electronic device 100 may use the distance sensor 180F for distance measurement to achieve fast focusing.
  • Proximity light sensor 180G may include, for example, light emitting diodes (LEDs) and light detectors, such as photodiodes.
  • the light emitting diodes may be infrared light emitting diodes.
  • the electronic device 100 emits infrared light through the light emitting diode.
  • Electronic device 100 uses photodiodes to detect infrared reflected light from nearby objects. When sufficient reflected light is detected, it may be determined that there is an object near the electronic device 100 . When insufficient reflected light is detected, the electronic device 100 may determine that there is no object near the electronic device 100 .
  • the ambient light sensor 180L is used for sensing ambient light brightness.
  • the fingerprint sensor 180H is used to collect fingerprints.
  • the temperature sensor 180J is used to detect temperature.
• The touch sensor 180K is also known as a "touch panel".
• The touch sensor 180K can be disposed on the display screen 194, and the touch sensor 180K and the display screen 194 form a touch screen, also called a "touchscreen".
  • the touch sensor 180K is used to detect a touch operation on or near it.
  • the touch sensor can pass the detected touch operation to the application processor to determine the type of touch event.
  • Visual output related to the touch operation can be provided through the display screen 194 .
  • the touch sensor 180K may also be disposed on the surface of the electronic device 100 , which is different from the position of the display screen 194 .
  • the bone conduction sensor 180M can acquire vibration signals.
  • the keys 190 include a power key, a volume key and the like.
• The keys 190 may be mechanical keys, or may be touch keys.
  • the electronic device 100 may receive key input and generate key signal input related to user settings and function control of the electronic device 100 .
  • the motor 191 can generate a vibrating reminder.
  • the indicator 192 can be an indicator light, and can be used to indicate charging status, power change, and can also be used to indicate messages, missed calls, notifications, and the like.
  • the SIM card interface 195 is used for connecting a SIM card.
  • the SIM card can be connected and separated from the electronic device 100 by inserting it into the SIM card interface 195 or pulling it out from the SIM card interface 195 .
  • the electronic device 100 may support 1 or N SIM card interfaces, where N is a positive integer greater than 1.
• The above-mentioned electronic device 100 may be a mobile phone, a television, a tablet computer, a notebook computer, a desktop computer, a laptop computer, a handheld computer, a personal digital assistant (PDA), an artificial intelligence (AI) device, or other electronic devices.
  • the embodiment of the present application does not specifically limit the specific type of the electronic device.
  • the software system of the electronic device 100 may adopt a layered architecture, an event-driven architecture, a micro-kernel architecture, a micro-service architecture, or a cloud architecture.
  • the embodiment of the present application takes the Android system with a layered architecture as an example to illustrate the software structure of the electronic device 100 .
  • FIG. 5 is a block diagram of the software structure of the electronic device 100 according to the embodiment of the present application.
  • the layered architecture divides the software into several layers, and each layer has a clear role and division of labor. Layers communicate through software interfaces.
• The Android system is divided into four layers: from top to bottom, the application layer, the application framework layer, the Android runtime and system libraries, and the kernel layer.
  • the application layer can consist of a series of application packages.
  • the application package may include applications such as camera, gallery, calendar, call, map, navigation, WLAN, Bluetooth, music, video, and short message.
  • the application framework layer provides an application programming interface (application programming interface, API) and a programming framework for applications in the application layer.
  • the application framework layer includes some predefined functions.
  • the application framework layer can include window managers, content providers, view systems, phone managers, resource managers, notification managers, and so on.
  • a window manager is used to manage window programs.
  • the window manager can get the size of the display screen, determine whether there is a status bar, lock the screen, capture the screen, etc.
  • Content providers are used to store and retrieve data and make it accessible to applications.
  • Said data may include video, images, audio, calls made and received, browsing history and bookmarks, phonebook, etc.
  • the view system includes visual controls, such as controls for displaying text, controls for displaying pictures, and so on.
  • the view system can be used to build applications.
  • a display interface can consist of one or more views.
  • a display interface including a text message notification icon may include a view for displaying text and a view for displaying pictures.
  • the phone manager is used to provide communication functions of the electronic device 100 . For example, the management of call status (including connected, hung up, etc.).
  • the resource manager provides various resources for the application, such as localized strings, icons, pictures, layout files, video files, and so on.
• The notification manager enables an application to display notification information in the status bar. It can be used to convey notification-type messages, which can automatically disappear after a short stay without user interaction.
  • the notification manager is used to notify the download completion, message reminder, etc.
• The notification manager can also present notifications on the top status bar of the system in the form of a chart or scroll-bar text (such as a notification of an application running in the background), or present notifications on the screen in the form of a dialog window.
• For example, the notification manager may prompt text information in the status bar, issue a prompt sound, vibrate the electronic device, flash the indicator light, and the like.
• The Android runtime includes the core libraries and the virtual machine. The Android runtime is responsible for the scheduling and management of the Android system.
• The core library consists of two parts: one part is the function libraries that the Java language needs to call, and the other part is the core libraries of Android.
  • the application layer and the application framework layer run in virtual machines.
  • the virtual machine executes the java files of the application program layer and the application program framework layer as binary files.
  • the virtual machine is used to perform functions such as object life cycle management, stack management, thread management, security and exception management, and garbage collection.
  • a system library can include multiple function modules. For example: surface manager (surface manager), media library (Media Libraries), 3D graphics processing library (eg: OpenGL ES), 2D graphics engine (eg: SGL), etc.
  • the surface manager is used to manage the display subsystem and provides the fusion of 2D and 3D layers for multiple applications.
  • the media library supports playback and recording of various commonly used audio and video formats, as well as still image files, etc.
  • the media library can support a variety of audio and video encoding formats, such as: MPEG4, H.264, MP3, AAC, AMR, JPG, PNG, etc.
  • the 3D graphics processing library is used to implement 3D graphics drawing, image rendering, compositing, and layer processing, etc.
  • 2D graphics engine is a drawing engine for 2D drawing.
  • the kernel layer is the layer between hardware and software.
  • the kernel layer includes at least a display driver, a camera driver, an audio driver, and a sensor driver.
  • the electronic device assesses the user's posture by collecting images, usually requiring the user to maintain a natural standing posture with legs close together.
• In this way, the user's posture problems can be better reflected. For example, if the user deliberately tilts the shoulders or intentionally bends the legs, it is difficult for the electronic device to determine the user's real posture from the collected images.
  • the electronic device may first guide the user to complete the actions corresponding to the preset posture assessment posture, and collect posture assessment images when the user maintains the posture assessment posture.
  • the electronic device can use the above posture assessment image to perform posture assessment on the user.
  • FIG. 6A schematically shows a flow chart of a method for collecting body posture assessment images by an electronic device provided in an embodiment of the present application.
• The method may include steps S610-S640, as follows:
• S610: Collect an image, and identify the key points and contour of the user's human body in the image.
  • the electronic device 100 may receive a user operation for posture assessment.
  • the electronic device 100 may collect an image, and recognize key points and contours of the user's human body in the image.
  • the electronic device 100 may identify the user performing posture assessment in the image.
  • the electronic device 100 can use the portrait detection model to identify the portrait contained in the image, and use the portrait selection frame to determine the area of the portrait in the image.
  • the portrait selection frame may be an area in the image that contains a portrait and has the smallest area.
  • the shape of the portrait selection frame may include shapes such as rectangle and circle.
  • the embodiment of the present application does not limit the shape of the portrait selection frame.
  • the aforementioned portrait detection model may be a neural network-based model. Input an image into the portrait detection model, and the portrait detection model can output an image containing the above-mentioned portrait selection box.
  • the embodiment of the present application does not limit the training method of the above-mentioned portrait detection model.
  • the portrait detection model can determine the areas of the multiple portraits in the image, and use different portrait selection frames to identify different portraits.
  • the electronic device 100 may select a portrait identified by a portrait selection frame located in the center of the image for posture assessment.
  • the electronic device 100 may ask the user which portrait in the image to perform posture assessment on.
  • the electronic device 100 may perform posture assessment on the portrait selected by the user in the image.
• The electronic device 100 may also use other methods to determine the area in the image where the user performing posture assessment is located. Determining the area where this user is located can eliminate disturbances in the image (such as background clutter or passers-by), reduce the impact of the disturbances on the posture assessment, and improve the accuracy of the posture assessment result.
  • the electronic device 100 may perform key point recognition of the human body to determine the key points of the user's human body. Wherein, the electronic device 100 may only perform human body key point recognition on the area where the user performing posture assessment in the image is located (that is, the area determined by the portrait selection frame in the image). This can reduce the complexity of calculations in the process of identifying key points of the human body and improve the efficiency of calculations.
  • the electronic device 100 may identify the key points of the human body by using the key point detection model of the human body. For the key point detection model of the human body, reference may be made to the introduction of the foregoing embodiments. The embodiment of the present application does not limit the method for identifying the key points of the human body by the electronic device 100 .
  • the electronic device 100 can obtain the position of each key point of the user's human body in the image.
  • the position of the key point of the human body in the image may specifically be the position of the key point of the human body in the pixel coordinate system of the image.
• The pixel coordinate system of the image is described here by taking a rectangular image as an example.
  • the pixel coordinate system of the image can be a two-dimensional Cartesian coordinate system.
  • the origin of the pixel coordinate system of the image can be located at any one of the four vertices of the image, and the x-axis and y-axis are respectively parallel to the two sides of the image plane.
  • the unit of the axis in the pixel coordinate system of the image is the pixel.
  • the point (1,1) in the pixel coordinate system may represent the pixel in the first row and first column in the image.
  • the electronic device 100 may use the human body key point detection model to perform human body key point detection on a collected image to obtain a set of coordinates.
  • This group of coordinates may be the coordinates of all key points of the user's human body (the key points of the human body shown in FIG. 2 ) arranged in a specified order.
  • This group of coordinates may be the coordinates of the key points of the human body in the pixel coordinate system.
• For example, the first coordinate in this group of coordinates is the coordinate of the head point in the pixel coordinate system, and the second coordinate is the coordinate of the neck point in the pixel coordinate system.
• The embodiment of the present application does not limit the order in which the key points of the human body to which the coordinates belong are arranged in this group of coordinates.
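• Purely for illustration, the following minimal Python sketch shows one way such an ordered coordinate set could be represented. The key point names, their order, and the helper function are assumptions of this example, not part of the disclosed model output.

```python
# Hypothetical sketch of an ordered key point coordinate set in the pixel
# coordinate system. The order below is an assumption for illustration only.
KEYPOINT_ORDER = [
    "head", "neck", "right_shoulder", "right_elbow", "right_wrist",
    "left_shoulder", "left_elbow", "left_wrist", "right_hip", "right_knee",
    "right_ankle", "left_hip", "left_knee", "left_ankle",
]

def keypoints_to_dict(coords):
    """Map a list of (x, y) pixel coordinates, arranged in the specified
    order, to named human body key points."""
    if len(coords) != len(KEYPOINT_ORDER):
        raise ValueError("expected one coordinate per key point")
    return dict(zip(KEYPOINT_ORDER, coords))

# The first coordinate is the head point, the second the neck point.
kp = keypoints_to_dict([(120, 40), (120, 80)] + [(0, 0)] * 12)
print(kp["head"], kp["neck"])  # (120, 40) (120, 80)
```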
  • the electronic device 100 may also perform portrait segmentation to identify the outline of the user's body.
  • the electronic device 100 may use a portrait segmentation model to perform portrait segmentation.
  • the aforementioned portrait segmentation model may be a neural network-based model.
• In the image output by the portrait segmentation model, for example, the pixel values of the area where the portrait is located may all be 255, and the pixel values of the non-portrait area may all be 0.
• Alternatively, the pixel values of the area where the portrait is located may all be 0, and the pixel values of the non-portrait area may all be 255.
  • the embodiment of the present application does not limit the training method of the portrait segmentation model.
  • the electronic device 100 may perform portrait segmentation only on the region in the image where the user performing posture assessment is located (that is, the region determined by the portrait selection frame in the image). Specifically, the electronic device 100 may use the portrait segmentation model to perform portrait segmentation on the region determined by the above portrait selection frame in the image, and obtain binarized pixel values of the region. The electronic device 100 may determine the pixel value of the area outside the portrait selection frame in the image as the pixel value (such as 0) of the area where the human body is not located. This can reduce the complexity of calculations in the process of portrait segmentation and improve the efficiency of calculations.
  • the electronic device 100 can change the collected original image into a binarized image.
• In the binarized image, all pixel values in the area where the portrait is located are the same, and all pixel values in the non-portrait area are the same, but the two values differ. The positions where the pixel value changes in the binarized image are therefore the edge points of the human body, and all the edge points of the human body in the image can form the contour of the human body.
• FIG. 6C shows the pixel values of a partial region of a binarized image.
  • the coordinate system shown in FIG. 6C may represent the pixel coordinate system of the image.
  • an area with a pixel value of 0 may represent an area where a non-portrait is located.
  • An area with a pixel value of 255 may represent the area where the portrait is located. From coordinate (1,1) to coordinate (2,1), the pixel value changes from 0 to 255.
  • the electronic device 100 may determine that the position between the coordinates (1,1) and the coordinates (2,1) is the boundary between the area where the non-portrait is located and the area where the portrait is located.
  • the electronic device 100 may determine that the position between coordinates (2,2) and coordinates (3,2), and the position between coordinates (3,3) and coordinates (4,3) are the boundaries between the non-portrait area and the portrait area.
  • the electronic device 100 may determine the pixels belonging to the area where the portrait is located at the above boundary as the edge points of the human body.
  • the above (2,1), (3,2), and (4,3) are all edge points. All the edge points of the human body in the image are connected sequentially to form the human body contour.
  • the electronic device 100 may also determine, as the edge points of the human body, the pixels at the boundary that belong to the area where the non-portrait is located.
  • the aforementioned (1,1), (2,2), and (3,3) are all edge points.
  • the electronic device 100 may obtain a data set after the aforementioned portrait segmentation.
  • This data set contains pixel values corresponding to pixels at various positions in an image.
• This data set can be used to generate the above binarized image.
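• As a minimal sketch of the edge point extraction described above, assuming the data set is a NumPy array with 255 for the portrait area and 0 elsewhere (the function name is hypothetical):

```python
import numpy as np

def human_body_edge_points(binary, portrait_value=255):
    """Scan each row of the binarized data set and collect the pixels where
    the value changes; the pixel on the portrait side of each change is kept
    as a human body edge point (the non-portrait side could be kept instead,
    as the text notes)."""
    points = []
    h, w = binary.shape
    for y in range(h):
        for x in range(1, w):
            if binary[y, x] != binary[y, x - 1]:
                ex = x if binary[y, x] == portrait_value else x - 1
                points.append((ex, y))
    return points

# Toy region similar to FIG. 6C (0 = non-portrait, 255 = portrait); note this
# example uses 0-based indices while FIG. 6C counts pixels from (1,1).
region = np.array([
    [0, 255, 255, 255],
    [0,   0, 255, 255],
    [0,   0,   0, 255],
], dtype=np.uint8)
print(human_body_edge_points(region))  # [(1, 0), (2, 1), (3, 2)]
```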
  • the aforementioned human portrait detection model, human body key point detection model, and human portrait segmentation model may be one model. That is to say, an image is input into a model, which can first detect the area where the portrait is located in the image, and then perform human body key point detection and portrait segmentation, and output the position of the human body key point in the image and the binarized image (or the data set corresponding to the binarized image).
  • the above step of determining the user performing posture assessment in the image is optional. That is to say, the electronic device 100 may not use the portrait detection model to determine the user performing posture assessment in the image. Wherein, the electronic device 100 obtains an image, and can directly perform human body key point recognition and portrait segmentation on this image.
  • the posture assessment gesture mentioned above may refer to a posture that facilitates the electronic device 100 to perform posture assessment on the user.
  • the detection of some posture problems requires a front view of the user.
  • the aforementioned preset posture assessment posture may be: facing the camera, keeping a certain distance between the arms and the sides of the body, and keeping the legs naturally upright and close together.
  • the aforementioned preset posture assessment posture can also be: facing the camera, with both arms straight, keeping a certain distance between the arms and the sides of the body, and the legs are naturally upright and close together.
  • the detection of some posture problems requires a side view of the user.
• The aforementioned preset body posture assessment posture may also be: the side of the body faces the camera and the user stands naturally. The posture evaluation posture is not limited to those listed above; other postures may also be used.
  • the natural uprightness of both legs may mean that the knee joints of both knees are naturally straightened, and the angle between the calf and thigh of one of the two legs is smaller than a preset angle threshold.
  • the above-mentioned natural standing can mean: the legs are naturally upright, the torso is straight, and a relaxed state is maintained.
  • the posture assessment image used for posture assessment may include a front view and a side view of the user.
  • the implementation method of collecting the user's front view and using the front view to perform posture assessment is described first, and then the implementation method of using the user's side view to perform posture assessment is described.
• Not limited to determining the human body contour from the above-mentioned binarized image, the electronic device 100 may also directly use the binarized image and the key points of the human body to determine whether the user's posture is the preset posture evaluation posture.
• Here, determining the user's posture by using the above-mentioned human body contour and the key points of the human body is taken as an example for illustration.
  • the electronic device 100 may judge whether the arms are straight, whether the arms are kept at a certain distance from the sides of the body, whether the legs are upright, and whether the legs are close together.
• The method for judging whether the user's posture is the posture evaluation posture is specifically introduced below in combination with the key points of the human body and the contour of the human body shown in FIG. 6D, as follows:
  • the electronic device 100 may determine whether the user's arms are straight according to the key points of the human body on the user's arms. The following is an example of judging whether the right arm is straightened.
  • the electronic device 100 can determine whether the user's right arm is straight by determining whether the right shoulder point A1 , the right elbow point A2 , and the right wrist point A3 are on a straight line. Wherein, the electronic device 100 may calculate the line segment A1A2 between the right shoulder point and the right elbow point, the line segment A2A3 between the right elbow point and the right wrist point, and the line segment A1A3 between the right shoulder point and the right wrist point.
• The electronic device 100 can determine whether the value of Length(A1A2) + Length(A2A3) − Length(A1A3) is smaller than a preset threshold W1.
• The value of the above-mentioned threshold W1 may be, for example, 5 pixels, 6 pixels, and so on; the embodiment of the present application does not limit the value of the threshold W1. If Length(A1A2) + Length(A2A3) − Length(A1A3) is smaller than the threshold W1, the electronic device 100 may determine that the right arm is straight. If Length(A1A2) + Length(A2A3) − Length(A1A3) is greater than or equal to the threshold W1, the electronic device 100 may prompt the user to straighten the arm.
• Alternatively, the electronic device 100 may determine whether the difference between the angle formed at the right elbow point A2 by the right shoulder point A1 and the right wrist point A3, that is, ∠A1A2A3 (hereinafter referred to as ∠A2), and 180° is smaller than a preset threshold W2.
  • the above-mentioned threshold W2 may be, for example, 5°, 6° and so on.
  • the embodiment of the present application does not limit the value of the threshold W2.
• The size of ∠A2 may be denoted ∠1. If the difference between ∠A2 and 180° is smaller than the threshold W2, the electronic device 100 may determine that the right arm is straight.
• Otherwise, the electronic device 100 may prompt the user to straighten the arm. Not limited to the methods listed above, the electronic device 100 may also use other methods to determine whether the user's right arm is straight; both the length-based and the angle-based criteria are sketched below.
  • the electronic device 100 may also determine whether the user's right arm is straight according to the curve of the right arm on the outline of the human body. For example, the electronic device 100 may determine the curve of the right arm on the contour of the human body according to the right shoulder point, the right elbow point and the right wrist point. The electronic device 100 may calculate the curvature of the above-mentioned curve of the right arm. If the maximum curvature of the curve of the right arm is smaller than the preset curvature threshold, the electronic device 100 may determine that the user's right arm is straightened. Otherwise, the electronic device 100 may determine that the user's right arm is not straightened.
  • the method for judging whether the left arm is straight can be the same as the method for judging whether the right arm is straight.
  • the present application does not further describe the method for judging whether the left arm is straight.
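• The two arm-straightness criteria above can be sketched in Python as follows. The pixel coordinates and the threshold values W1 and W2 are hypothetical example values, not values fixed by the embodiment:

```python
import math

def length(p, q):
    """Euclidean distance between two pixel-coordinate points."""
    return math.dist(p, q)

def arm_straight_by_length(a1, a2, a3, w1=5.0):
    """The arm is treated as straight when
    Length(A1A2) + Length(A2A3) - Length(A1A3) < W1 (in pixels)."""
    return length(a1, a2) + length(a2, a3) - length(a1, a3) < w1

def arm_straight_by_angle(a1, a2, a3, w2=5.0):
    """The arm is treated as straight when the elbow angle ∠A1A2A3
    deviates from 180° by less than W2 degrees."""
    v1 = (a1[0] - a2[0], a1[1] - a2[1])
    v2 = (a3[0] - a2[0], a3[1] - a2[1])
    cos_t = (v1[0] * v2[0] + v1[1] * v2[1]) / (math.hypot(*v1) * math.hypot(*v2))
    angle = math.degrees(math.acos(max(-1.0, min(1.0, cos_t))))
    return abs(angle - 180.0) < w2

# Right shoulder A1, right elbow A2, right wrist A3 (hypothetical coordinates).
a1, a2, a3 = (100, 200), (101, 260), (102, 320)
print(arm_straight_by_length(a1, a2, a3), arm_straight_by_angle(a1, a2, a3))
```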
  • the electronic device 100 may determine whether the arms are at a certain distance from the sides of the body according to the angle between the straight line where the arms are located and the midline of the torso. The following is an example of judging whether the right arm maintains a certain distance from the right side of the body.
  • the straight line where the right arm is located may be the straight line A3A4 where the neck point A4 and the right wrist point A3 are located.
  • the median line of the torso may be the straight line A4A5 where the neck point A4 and the middle point A5 of the left and right hips are located.
• The angle between the straight line A3A4 and the straight line A4A5 is ∠2.
• The electronic device 100 may determine whether ∠2 is within a preset threshold range W3.
  • the aforementioned threshold range W3 may be, for example, [30°, 50°], [35°, 50°] and so on.
  • the embodiment of the present application does not limit the value of the threshold range W3.
• If ∠2 is within the preset threshold range W3, the electronic device 100 may determine that the right arm keeps a certain distance from the right side of the body. If ∠2 is not within the threshold range W3 and ∠2 is smaller than the minimum value of W3, the electronic device 100 may prompt the user to raise the right arm. If ∠2 is not within the threshold range W3 and ∠2 is greater than the maximum value of W3, the electronic device 100 may prompt the user to lower the arm.
• Alternatively, the electronic device 100 may only determine whether ∠2 is greater than a preset included angle (such as 30°, 35°, etc.). If ∠2 is greater than the preset included angle, the electronic device 100 may determine that the right arm keeps a certain distance from the sides of the body. Otherwise, the electronic device 100 may prompt the user to raise the right arm.
  • the straight line where the right arm is located may be the straight line A1A3 where the right shoulder point A1 and the right wrist point A3 are located.
  • the electronic device 100 can use the straight line A1A3 and the straight line A4A5 to determine whether the right arm keeps a certain distance from the right side of the body.
  • the method for judging whether the left arm keeps a certain distance from the left side of the body may be the same as the method for judging whether the right arm keeps a certain distance from the right side of the body.
• The method of judging whether the left arm maintains a certain distance from the left side of the body will not be further described.
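• A minimal sketch of this angle-based check, assuming the right wrist, neck, and hip midpoint coordinates are available in the pixel coordinate system (the names, coordinates, and threshold range W3 are illustrative assumptions):

```python
import math

def angle_between(p1, p2, q1, q2):
    """Angle in degrees between line p1p2 and line q1q2 (0..90)."""
    v = (p2[0] - p1[0], p2[1] - p1[1])
    u = (q2[0] - q1[0], q2[1] - q1[1])
    cos_t = abs(v[0] * u[0] + v[1] * u[1]) / (math.hypot(*v) * math.hypot(*u))
    return math.degrees(math.acos(min(1.0, cos_t)))

def right_arm_spacing(a3, a4, a5, w3=(30.0, 50.0)):
    """∠2 is the angle between line A3A4 (right wrist to neck) and the torso
    midline A4A5 (neck to the midpoint of the hips)."""
    angle = angle_between(a4, a3, a4, a5)
    if w3[0] <= angle <= w3[1]:
        return "keeps a certain distance from the body side"
    return "please raise the right arm" if angle < w3[0] else "please lower the arm"

# Hypothetical pixel coordinates: right wrist A3, neck A4, hip midpoint A5.
print(right_arm_spacing((60, 300), (200, 100), (200, 320)))  # within W3
```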
  • the electronic device 100 may determine whether the user's arms keep a certain distance from both sides of the body according to the key points of the human body on the user's arms and the edge points on the outline of the human body.
  • the electronic device 100 may calculate the number of intersections between the straight line A3A12 where the right wrist point A3 and the left wrist point A12 are located and the contour of the human body.
  • the aforementioned intersection points are also edge points on the contour of the human body. If the user's arms are kept at a certain distance from the sides of the body, there are six intersection points between the straight line A3A12 and the contour of the human body. These 6 intersection points include: two edge points on the contour of the left arm, two edge points on the contour of the waist, and two edge points on the contour of the right arm.
  • the electronic device 100 may determine whether there are six intersection points. If there are six intersection points, the electronic device 100 may determine that the user's arms are kept at a certain distance from both sides of the body. If the number of the foregoing intersection points is less than 6, the electronic device 100 may prompt the user to raise the arm.
• In the posture assessment posture, keeping the arms at a certain distance from the sides of the body enables the electronic device 100 to separate the limbs and torso of the human body during portrait segmentation and obtain edge points on both sides of the waist, so as to perform a better posture assessment.
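• A sketch of the intersection-counting check above, assuming the binarized image is a NumPy array and approximating the straight line A3A12 by the image row passing through the wrist points:

```python
import numpy as np

def count_contour_crossings(binary, y):
    """Count the value changes along row y of the binarized image, i.e. the
    intersections of the horizontal line through the wrist points with the
    human body contour."""
    row = binary[y]
    return int(np.count_nonzero(row[1:] != row[:-1]))

# Toy row: right arm, torso and left arm separated by gaps -> 6 crossings.
img = np.array([[0, 255, 255, 0, 255, 255, 0, 255, 255, 0]], dtype=np.uint8)
n = count_contour_crossings(img, 0)
print(n, "arms apart" if n == 6 else "please raise your arms")  # 6 arms apart
```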
  • the electronic device 100 may determine whether the user's legs are straight according to key points on the user's legs. The following is an example of judging whether the right leg is straight.
  • the electronic device 100 can calculate the line segment A6A7 between the right hip point A6 and the right knee point A7, the line segment A7A8 between the right knee point A7 and the right ankle point A8, and the line segment A6A8 between the right hip point A6 and the right ankle point A8.
• The electronic device 100 can determine whether the sum of the length of the line segment A6A7 (Length(A6A7)) and the length of the line segment A7A8 (Length(A7A8)) minus the length of the line segment A6A8 (Length(A6A8)), that is, Length(A6A7) + Length(A7A8) − Length(A6A8), is less than a preset threshold W4.
• The value of the threshold W4 may be, for example, 5 pixels, 6 pixels, etc. The embodiment of the present application does not limit the value of the threshold W4.
• If Length(A6A7) + Length(A7A8) − Length(A6A8) is less than the threshold W4, the electronic device 100 can determine that the user's right leg is upright. If Length(A6A7) + Length(A7A8) − Length(A6A8) is greater than or equal to the threshold W4, the electronic device 100 can prompt the user to straighten the legs.
  • the method for judging whether the left leg is upright may be the same as the method for judging whether the right leg is upright.
  • the present application does not further describe the method for judging whether the left leg is upright.
  • the electronic device 100 may determine whether the user's legs are close together according to the key points of the human body of the user's legs and the edge points on the outline of the human body.
  • the electronic device 100 may determine the intersection point of the straight line A7A9 where the right knee point A7 and the left knee point A9 are located and the inner sides of the legs on the contour of the human body. For example, the intersection point of the straight line A7A9 with the inner side of the left leg on the human body contour is B1, and the intersection point with the inner side of the right leg on the human body contour is B2.
  • the electronic device 100 may determine the intersection point of the straight line A8A10 where the right ankle point A8 and the left ankle point A10 are located and the inner sides of the legs on the contour of the human body.
  • intersection point of the straight line A8A10 with the inner side of the left leg on the human body contour is B3, and the intersection point with the inner side of the right leg on the human body contour is B4.
  • the aforementioned intersection points B1, B2, B3, and B4 are all edge points on the outline of the human body.
  • the electronic device 100 can determine the length of the line segment B1B2 between the intersection points B1 and B2 and the length of the line segment B3B4 between the intersection points B3 and B4. If the length of at least one of the line segment B1B2 and the line segment B3B4 is less than the preset threshold W5, the electronic device 100 may determine that the user's legs are close together.
  • the aforementioned threshold W5 may be, for example, 5 pixels, 6 pixels, or the like. The embodiment of the present application does not limit the value of the threshold W5. If the lengths of the line segment B1B2 and the line segment B3B4 are both greater than or equal to the threshold W5, the electronic device 100 may prompt the user to put the legs together.
• Putting the legs together may include the knees (or the inner ankles of both feet) touching each other.
• When the knees touch, the human body contour obtained through portrait segmentation may have no edge point at the inner sides of the knee joints of both legs. That is, there is no intersection point between the above-mentioned straight line A7A9 and the inner sides of both legs on the contour of the human body.
• Similarly, when the inner ankles of both feet touch each other, there is no intersection point between the above-mentioned straight line A8A10 and the inner sides of the legs on the contour of the human body.
• In these cases, the electronic device 100 may also determine that the user's legs are close together.
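• A minimal sketch combining the gap-based criterion and the touching cases. The coordinates and the threshold W5 are illustrative, and None is used here to model a missing intersection where the contour sides touch:

```python
import math

def legs_together(b1, b2, b3, b4, w5=5.0):
    """Legs are treated as close together when at least one of the inner
    gaps B1B2 (knee level) or B3B4 (ankle level) is narrower than W5 pixels.
    None models a missing intersection (the contour sides touch there)."""
    def gap(p, q):
        if p is None or q is None:
            return 0.0                      # touching: no intersection exists
        return math.dist(p, q)
    return gap(b1, b2) < w5 or gap(b3, b4) < w5

# Hypothetical inner-contour intersections at knee and ankle level.
print(legs_together((98, 400), (101, 400), (97, 500), (104, 500)))  # True
print(legs_together(None, None, (90, 500), (112, 500)))             # True
```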
  • the electronic device 100 may also determine whether the captured image is a front view of the user. For example, the electronic device 100 may determine whether the user is facing the camera. In a possible implementation manner, the electronic device 100 may determine whether the user is facing the camera by detecting a face area in the image. If the user's face for body posture assessment in the image includes key facial points such as left eye, right eye, and lips, the electronic device can determine that the user is facing the camera. Otherwise, the electronic device 100 may prompt the user to adjust the standing orientation, and guide the user to stand facing the camera. For example, when the face area includes the left eye but not the right eye, the electronic device 100 may prompt the user to turn left.
  • the electronic device 100 may also determine whether the distance between the left eye and the right eye on the face of the user performing posture assessment in the image is smaller than the eye distance threshold. If the distance between the left eye and the right eye is smaller than the eye distance threshold, the electronic device 100 may determine that the user is not standing facing the camera. The electronic device 100 may prompt the user to adjust the orientation of standing, and guide the user to stand facing the camera. Optionally, the electronic device 100 may also determine whether the user is standing facing the camera according to the user's left shoulder point and right shoulder point. For example, the electronic device 100 may determine whether the distance between the left shoulder point and the right shoulder point is smaller than the shoulder width threshold.
• If the distance between the left shoulder point and the right shoulder point is smaller than the shoulder width threshold, the electronic device 100 may determine that the user is not standing facing the camera, and then prompt the user to adjust the standing orientation and guide the user to stand facing the camera.
  • the embodiment of the present application does not limit the method for judging whether the user is facing the camera.
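• The facing-the-camera heuristics above might be sketched as follows. All point names, prompt strings, and thresholds are assumptions; a real system would calibrate the thresholds:

```python
import math

def facing_camera(face_points, eye_dist_thresh, shoulders=None,
                  shoulder_width_thresh=None):
    """Heuristics from the text: the face must contain the key facial points,
    the eye distance must not fall below a threshold, and (optionally) the
    shoulder width must not fall below a threshold."""
    for name in ("left_eye", "right_eye", "lips"):
        if name not in face_points:
            return False, "please adjust your standing orientation"
    if math.dist(face_points["left_eye"], face_points["right_eye"]) < eye_dist_thresh:
        return False, "please face the camera"
    if shoulders is not None and shoulder_width_thresh is not None:
        if math.dist(*shoulders) < shoulder_width_thresh:
            return False, "please face the camera"
    return True, "ok"

face = {"left_eye": (110, 60), "right_eye": (140, 60), "lips": (125, 90)}
print(facing_camera(face, eye_dist_thresh=20,
                    shoulders=((80, 150), (170, 150)),
                    shoulder_width_thresh=60))  # (True, 'ok')
```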
  • the electronic device 100 may only determine whether the user's arms are kept at a certain distance from the sides of the body, whether the legs are upright, and whether the legs are close together.
  • the electronic device 100 can determine whether the user's posture satisfies facing the camera, whether the arms are kept at a certain distance from the sides of the body, and whether the legs are upright and close together.
  • the posture judging method provided in the embodiment of the present application may not be limited to only judging whether the user's posture is a preset body posture evaluation posture.
  • the electronic device 100 may use the key points of the human body and the outline of the human body to only determine whether the user's arms are kept at a certain distance from the sides of the body.
  • the electronic device 100 may only judge whether the user's legs are upright and close together by using the key points of the human body and the outline of the human body.
  • the posture judgment method combined with the key points of the human body and the contour of the human body can improve the accuracy of posture judgment.
• If it is determined that the posture of the user is the preset posture assessment posture, the electronic device 100 may prompt the user to maintain the posture assessment posture, and use the camera to collect images.
  • the image collected after the user is prompted to maintain the posture assessment posture may be a posture assessment image.
  • the electronic device 100 may use the image determined to be the posture evaluation posture of the user as the body posture evaluation image. In this way, after it is determined that the user's posture is the preset posture evaluation posture, the electronic device 100 does not need to collect images again.
  • FIG. 7A to 7D exemplarily show schematic diagrams of scenarios where the electronic device 100 prompts the user to adjust posture.
  • the electronic device 100 may display the user interface 710 shown in FIG. 7A .
  • the user interface 710 may include a display area 711 and a display area 712 . in:
  • Display area 711 may be used to display an example diagram 711A of posture assessment poses, as well as action prompts 711B.
  • the content of the action prompt 711B may be: please straighten your arms, hang your arms down to keep a certain distance from your body sides, and keep your legs together. In this way, the user can adjust his posture according to the example diagram 711A and the action prompt 711B, so as to achieve the preset body posture evaluation posture.
  • the display area 712 can be used to display the image collected by the electronic device 100 through the camera 193 .
  • the image may include the user undergoing a posture assessment.
  • the electronic device 100 may determine the difference between the user's posture and the posture evaluation posture according to the method in step S620 above, and display the action prompt 711B in the display area 711 . Wherein, the electronic device 100 may also broadcast the content in the action prompt 711B by voice, so as to guide the user to adjust the posture to a posture evaluation posture.
• The electronic device 100 may continuously collect images of the user and determine whether the user's posture is the posture assessment posture. The electronic device 100 may collect an image of the user every preset interval (such as 1 second, 2 seconds, etc.). If the image collected the first time shows that the user's posture is not the posture evaluation posture, the electronic device may collect the user's image again (that is, a second time) and determine whether the user's posture is the posture evaluation posture. That is to say, the electronic device 100 may execute step S610, step S620, and step S640 in a loop until it is determined that the user's posture is the posture evaluation posture.
  • the user adjusts the posture according to the action prompt 711B shown in FIG. 7A .
  • the electronic device 100 may display the user interface 710 shown in FIG. 7B. It can be seen from the display area 712 shown in FIG. 7B that the user has adjusted the posture from the bent arms shown in FIG. 7A to straightened arms, and put the arms close to the sides of the body.
• The electronic device 100 determines, according to the method in step S620 above, that the user's arms are not kept at a certain distance from the sides of the body and that the legs are not close together. That is, the user's posture is still not the posture evaluation posture.
  • the electronic device 100 may display an action prompt 711C shown in FIG. 7B .
• The content of the action prompt 711C may be: please raise your arms to keep a certain distance between the arms and the sides of the body, and keep your legs together.
  • the electronic device 100 may also announce the content of the action prompt 711C by voice.
• The electronic device 100 may display the user interface 710 shown in FIG. 7C. It can be seen from the display area 712 shown in FIG. 7C that the user has adjusted the posture from the arms close to the sides of the body shown in FIG. 7B to straight arms kept at a certain distance from the body. But the user's legs are still not together. That is, the user's posture is still not the posture evaluation posture.
  • the electronic device 100 may display an action prompt 711D shown in FIG. 7C .
  • the content of the action prompt 711D may be: please put your legs together.
  • the electronic device 100 can also play the content of the motion prompt 711D by voice.
  • the user adjusts the posture according to the action prompt 711D.
  • the electronic device 100 may display the user interface 710 shown in FIG. 7D. It can be seen from the display area 712 shown in FIG. 7D that the user has adjusted the posture from the open legs shown in FIG. 7C to the legs close together.
  • the electronic device 100 determines that the user's posture is a posture evaluation posture according to the method in step S620 above.
  • the electronic device 100 may display an action prompt 711E shown in FIG. 7D to prompt the user to maintain a posture assessment posture.
  • the content of the action prompt 711E may be: please keep this posture, and a photo is about to be taken.
  • the electronic device 100 may also announce the content of the action prompt 711E by voice. After the user is prompted to maintain the posture assessment posture, the electronic device 100 may collect an image and use the collected image as a posture assessment image.
  • the content of the electronic device 100 prompting the user to adjust the posture may vary with the change of the posture of the user.
  • the electronic device 100 may prompt the user for the difference between the user's posture and the posture evaluation posture. This allows the user to understand which part of the user is different from the posture assessment posture, so as to better guide the user to complete the action corresponding to the posture assessment posture.
  • the electronic device 100 may also only display the display area 711 on the display screen. That is, the electronic device 100 may not display the image of the user on the display screen.
  • the posture assessment image above may also be collected by other electronic devices, such as the electronic device 200.
  • a communication connection is established between the electronic device 200 and the electronic device 100 .
  • the electronic device 200 may send the posture assessment image to the electronic device 100 .
  • the electronic device 100 can perform posture assessment using the posture assessment image.
  • the electronic device 100 may determine one or more edge points on the outline of the human body by using the key points of the human body.
  • the one or more edge points can be used to judge whether the posture of the user is a preset posture assessment posture, and can also be used to perform posture assessment.
• The electronic device 100 can obtain the coordinates of each human body key point of the user in the image in the pixel coordinate system, as well as the data set obtained through portrait segmentation.
• This data set can contain pixel values corresponding to pixels at various positions of the image in the pixel coordinate system.
  • the pixel values of the regions where the portrait of the user is located in the image are all the same (for example, 255).
  • the pixel values of regions in the image where the portrait of the user does not exist are all the same (for example, 0).
  • the pixel values of the region where the portrait of the user is located are different from the pixel values of the region where the portrait of the user is not present.
  • the electronic device 100 may determine all edge points on the contour of the human body according to the above data set. For a specific method, reference may be made to the introduction of the foregoing embodiment shown in FIG. 6C .
  • the electronic device 100 may determine one or more edge points on the outline of the human body by using the straight line where the key points of the human body are located. Take the key point A of the human body as an example for illustration.
  • the key point A may be any one of the key points shown in FIG. 2 above.
  • the electronic device 100 determines the expression of the straight line L1 where the key point A of the human body is located in the pixel coordinate system according to the position of the key point A of the human body in the pixel coordinate system.
  • the electronic device 100 can determine which pixels on the straight line L1 are edge points on the contour of the human body. In this way, the electronic device 100 can obtain the position of the intersection point of the straight line L1 and the human body contour in the pixel coordinate system.
  • the aforementioned intersection point is the edge point determined by the electronic device 100 on the outline of the human body by using the key point A of the human body.
  • the electronic device 100 may determine the expression of the straight line L1 where the key point A of the human body is located in the pixel coordinate system according to the position of the key point A of the human body in the pixel coordinate system. Further, the electronic device 100 may determine which pixels on the straight line L1 are the pixels where the pixel values in the data set change (for example, change from 0 to 255, and change from 255 to 0). The electronic device 100 may determine the pixel on the straight line L1 where the pixel value in the data set changes as an edge point.
  • the above-mentioned edge points are equivalent to the intersection points of the straight line L1 and the outline of the human body, that is, the edge points determined by the key point A of the human body on the outline of the human body. In this way, the electronic device 100 can obtain the position in the pixel coordinate system of the edge point determined by the key point A of the human body on the outline of the human body.
  • the embodiment of the present application does not limit the implementation method for the electronic device to use the key points of the human body to determine the edge points on the outline of the human body.
  • the position of the intersection of the line where the key points of the human body are located and the outline of the human body can be determined by the above implementation method. This will not be described in detail in subsequent embodiments.
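• As an illustration of determining edge points from the data set along a line through a key point, the sketch below takes the line L1 as horizontal for simplicity; a general line would be rasterized into pixel positions first:

```python
import numpy as np

def edge_points_on_row(binary, y):
    """Return the pixels in row y where the value in the data set changes,
    i.e. the intersections of the horizontal line L1 with the body contour."""
    row = binary[y]
    xs = np.nonzero(row[1:] != row[:-1])[0]  # change between x and x + 1
    return [(int(x) + 1, y) for x in xs]     # keep the pixel after each change

# Toy data set: one row, the portrait occupies columns 3..6.
data = np.zeros((1, 10), dtype=np.uint8)
data[0, 3:7] = 255
print(edge_points_on_row(data, 0))  # [(3, 0), (7, 0)]
```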
  • the implementation method of the electronic device 100 evaluating whether the user has high and low shoulders, X-shaped legs, O-shaped legs, XO-shaped legs, and scoliosis by using the body posture evaluation image will be specifically introduced below.
  • the electronic device 100 may determine whether the user has high or low shoulders by using the edge points determined by the left shoulder point and the right shoulder point on the contour of the human body.
  • the electronic device 100 may determine a straight line L2 passing through the right shoulder point and perpendicular to the x-axis of the pixel coordinate system.
  • the intersection of the straight line L2 and the right shoulder on the contour of the human body is C1.
  • the above intersection point C1 is an edge point on the straight line L2 whose y value in the pixel coordinate system is greater than that of the right shoulder point and which is closest to the right shoulder point.
  • the electronic device 100 may determine a straight line L3 passing through the left shoulder point and perpendicular to the x-axis of the pixel coordinate system.
  • the intersection point of the straight line L3 and the left shoulder on the contour of the human body is C2.
• The above intersection point C2 is an edge point on the straight line L3 whose y value in the pixel coordinate system is greater than the y value of the left shoulder point and which is closest to the left shoulder point.
  • the electronic device 100 may determine whether the angle between the straight line C1C2 where the intersection points C1 and C2 are located and a horizontal straight line (ie, a straight line perpendicular to the y-axis of the pixel coordinate system) is smaller than a preset threshold W6.
  • the aforementioned threshold W6 may be, for example, 5°, 6°, and so on.
  • the embodiment of the present application does not limit the value of the threshold W6.
• Suppose the angle between the straight line C1C2 and the horizontal straight line is ∠2, where ∠2 = arctan[(Yc2 − Yc1)/(Xc2 − Xc1)]. If the absolute value of ∠2 is smaller than the threshold W6, the electronic device 100 may determine that the user has no high and low shoulders. If the absolute value of ∠2 is greater than or equal to the threshold W6, the electronic device 100 can determine that the user has high and low shoulders, and can determine whether the left shoulder or the right shoulder is higher according to the positive or negative value of ∠2.
• A negative ∠2 may indicate that the user's right shoulder is higher than the left shoulder.
• A positive ∠2 may indicate that the user's left shoulder is higher than the right shoulder.
• The electronic device 100 may also determine the severity of the user's high and low shoulders according to the magnitude of the absolute value of ∠2. The larger the absolute value of ∠2, the more serious the user's posture problem.
  • the electronic device 100 may compare the values on the y-axis of the positions of the intersection point C1 and the intersection point C2 in the pixel coordinate system.
  • the position of the intersection point C1 in the pixel coordinate system is (Xc1, Yc1).
  • the position of the intersection point C2 in the pixel coordinate system is (Xc2, Yc2).
• The electronic device 100 can determine whether |Yc1 − Yc2| is smaller than a preset threshold W7, where |Yc1 − Yc2| represents the absolute value of Yc1 − Yc2.
• The aforementioned threshold W7 may be, for example, 5 pixels, 6 pixels, or the like. The embodiment of the present application does not limit the value of the threshold W7.
• If |Yc1 − Yc2| is smaller than the threshold W7, the electronic device 100 may determine that the user has no uneven shoulders. If |Yc1 − Yc2| is greater than or equal to the threshold W7, the electronic device 100 may determine that the user has high and low shoulders.
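• Both high-and-low-shoulder criteria can be sketched as follows. The sign-to-shoulder mapping depends on the image coordinate conventions and is therefore only indicative here; W6, W7, and the edge point coordinates are example values:

```python
import math

def shoulder_angle_check(c1, c2, w6=5.0):
    """Angle criterion: ∠2 = arctan[(Yc2 - Yc1) / (Xc2 - Xc1)]. |∠2| < W6
    means no high and low shoulders; otherwise the sign of ∠2 indicates
    which shoulder is higher and |∠2| reflects the severity."""
    (xc1, yc1), (xc2, yc2) = c1, c2
    angle = math.degrees(math.atan((yc2 - yc1) / (xc2 - xc1)))
    if abs(angle) < w6:
        return "no high and low shoulders"
    return f"high and low shoulders, |∠2| = {abs(angle):.1f}°"

def shoulder_height_check(c1, c2, w7=5.0):
    """Height criterion: |Yc1 - Yc2| < W7 means no uneven shoulders."""
    return abs(c1[1] - c2[1]) < w7

c1, c2 = (60, 100), (160, 112)        # hypothetical shoulder edge points
print(shoulder_angle_check(c1, c2))   # high and low shoulders, |∠2| = 6.8°
print(shoulder_height_check(c1, c2))  # False
```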
• The electronic device 100 may display, as part of the high and low shoulder evaluation results, the schematic diagram shown in FIG. 8B.
  • the electronic device 100 may display the intersection point C1 , intersection point C2 and straight line C1C2 on the user's human body image.
  • the straight line C1C2 may indicate the direction of the user's shoulders.
  • the electronic device 100 may also display a horizontal straight line in the user's body image. This straight line in the horizontal direction can indicate the direction of the shoulders of a human body with a normal posture (that is, without high and low shoulders).
  • the above-mentioned straight line C1C2 and the above-mentioned straight line in the horizontal direction can facilitate the user to visually check whether he has high and low shoulders and the severity of the high and low shoulders.
  • the electronic device 100 can determine the edge point on the shoulder of the human body outline according to the key points of the human body, and use the edge point to compare the direction of the user's shoulders for posture assessment with the direction of the shoulders of the normal posture of the human body, and obtain the user's high and low shoulder evaluation results. Since the position of the edge point is determined, the above method can reduce the influence of the position fluctuation of the detected key points of the human body on the high and low shoulder evaluation results, and improve the accuracy of the high and low shoulder evaluation results.
  • the electronic device 100 may determine the user's leg shape by using the edge points determined by the key points of the legs on the outline of the human body.
  • the electronic device 100 may determine the straight line L4 where the right knee point and the left knee point are located.
  • the intersection point of the straight line L4 with the inner side of the left leg on the human body contour is B1
  • the intersection point with the inner side of the right leg on the human body contour is B2.
  • the intersection point B1 is an edge point on the straight line L4 whose x value in the pixel coordinate system is smaller than the x value of the left knee point, greater than the x value of the right knee point, and closest to the left knee point.
  • the point of intersection B2 is the edge point on the straight line L4 whose x value is smaller than the x value of the left knee point and greater than the x value of the right knee point in the pixel coordinate system, and is closest to the right knee point.
  • the electronic device 100 can determine the straight line L5 where the right ankle point and the left ankle point are located.
  • the intersection point of the straight line L5 with the inner side of the left leg on the human body contour is B3, and the intersection point with the inner side of the right leg on the human body contour is B4.
  • the intersection point B3 is an edge point on the straight line L5 whose x value is smaller than the x value of the left ankle point and greater than the x value of the right ankle point in the pixel coordinate system, and which is closest to the left ankle point.
  • the intersection point B4 is an edge point on the straight line L5 whose x value in the pixel coordinate system is smaller than the x value of the left ankle point and greater than the x value of the right ankle point, and which is closest to the right ankle point.
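  • the primitive used repeatedly in these steps — finding, on a given image row, the contour edge points that lie between two key points — can be sketched as a scan over one row of the binary segmentation mask. The mask convention (person = 1, background = 0) and the function name below are illustrative assumptions.

```python
import numpy as np

def inner_leg_edge_points(mask, y, x_right_knee, x_left_knee):
    """On image row y, find the edge points strictly between the right knee x
    and the left knee x (the right leg appears at smaller x for a user
    facing the camera). Returns (x_B2, x_B1), or None if the legs touch.
    """
    row = mask[y]
    # An edge lies wherever the mask value changes between adjacent columns.
    changes = np.flatnonzero(row[1:] != row[:-1])
    inner = changes[(changes > x_right_knee) & (changes < x_left_knee)]
    if inner.size < 2:
        return None  # no inner gap on this row: the corresponding len(...) is 0
    return int(inner.min()), int(inner.max())  # closest to right knee, left knee
```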
  • the electronic device 100 may calculate the length len(B1B2) of the line segment B1B2 between the intersection point B1 and the intersection point B2, the length len(B3B4) of the line segment B3B4 between the intersection point B3 and the intersection point B4, and the length len(B1B3) of the line segment B1B3 between the intersection point B1 and the intersection point B3.
  • if, in the posture evaluation image, the user's knees touch so that the straight line L4 has no intersection point with the inner sides of the legs on the outline of the human body, then the value of len(B1B2) above is 0. If, in the posture evaluation image, the inner ankles of the user's feet touch so that there is no intersection point between the straight line L5 and the inner sides of the legs on the outline of the human body, then the value of len(B3B4) above is 0.
  • the electronic device 100 may determine whether the value of len(B1B2)/len(B1B3) is greater than a preset threshold W8.
  • the aforementioned threshold W8 may be, for example, 0.1, 0.15, and so on. The embodiment of the present application does not limit the value of the threshold W8. If len(B1B2)/len(B1B3) is greater than the threshold W8, the electronic device 100 may determine that the user's leg type is an O-shaped leg. It can be understood that if len(B1B2)/len(B1B3) is greater than the threshold W8, it may indicate that the user's knees are separated and cannot be close together when the legs are naturally upright and close together. That is, the user's leg shape is an O-shaped leg.
  • the larger len(B1B2)/len(B1B3) is, the more serious the user's O-shaped legs are.
  • the fact that len(B1B2)/len(B1B3) is less than or equal to the threshold W8 may indicate that the user's knees can be close together when the legs are naturally upright and close together. That is, the user's legs are not O-shaped legs.
  • the electronic device 100 may determine whether len(B3B4)/len(B1B3) is greater than the threshold W9.
  • the aforementioned threshold W9 may be, for example, 0.1, 0.15, and so on. The embodiment of the present application does not limit the value of the threshold W9. If len(B3B4)/len(B1B3) is greater than the threshold W9, the electronic device 100 may determine that the user's leg type is an X-shaped leg.
  • if len(B3B4)/len(B1B3) is greater than the threshold W9, it may indicate that the inner ankles of the user's feet are separated and cannot be close together when the legs are naturally upright and close together. That is, the user's leg shape is an X-shaped leg. The larger len(B3B4)/len(B1B3) is, the more serious the user's X-shaped legs are. The fact that len(B3B4)/len(B1B3) is less than or equal to the threshold W9 may indicate that the inner ankles of the user's feet can be close to each other when the legs are naturally upright and close together. That is, the user's leg shape is not an X-shaped leg.
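  • given the lengths defined above, the O-leg and X-leg decisions reduce to two normalized ratio tests. A minimal sketch, using the example threshold values W8 = W9 = 0.1 mentioned in this description (the function and argument names are illustrative):

```python
def classify_leg_shape(len_b1b2, len_b3b4, len_b1b3, w8=0.1, w9=0.1):
    """Classify O-shaped vs X-shaped legs from the inner-leg gap lengths.

    len_b1b2: knee-level gap between the inner-leg edge points (0 if the knees touch)
    len_b3b4: ankle-level gap between the inner-leg edge points (0 if the ankles touch)
    len_b1b3: length of the line segment B1B3, used for normalization
    """
    if len_b1b2 / len_b1b3 > w8:
        return "O-shaped legs"   # the knees cannot be brought together
    if len_b3b4 / len_b1b3 > w9:
        return "X-shaped legs"   # the inner ankles cannot be brought together
    return "neither O-shaped nor X-shaped (XO shape is checked separately)"
```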
  • the electronic device 100 may determine whether the user's left calf and right calf are separated and cannot be brought close together. Specifically, the electronic device 100 may divide the line segment between the left knee point and the left ankle point into three equal parts, and determine the two trisection points of this line segment (trisection point B5 and trisection point B7 as shown in FIG. 8C).
  • similarly, the electronic device 100 may divide the line segment between the right knee point and the right ankle point into three equal parts, and determine the two trisection points of this line segment (trisection point B6 and trisection point B8 shown in FIG. 8C).
  • the electronic device 100 may determine the straight line L6 where the trisection point B5 and the trisection point B7 are located, and the straight line L7 where the trisection point B6 and the trisection point B8 are located.
  • the electronic device 100 may determine, between the straight line L6 and the straight line L7, the edge point group A on the inner side of the left leg on the human body contour, and the edge point group B on the inner side of the right leg on the human body contour. Both the above edge point group A and the edge point group B may contain a plurality of edge points.
  • the electronic device 100 may select one or more pairs of edge points with the same y value in the pixel coordinate system from the edge point group A and the edge point group B. Among them, a pair of edge points includes an edge point from edge point group A and an edge point from edge point group B. Two edge points in a pair of edge points have the same y value in the pixel coordinate system.
  • the electronic device 100 can calculate the distance between two edge points in a pair of edge points, and the distance can be recorded as len(a pair of edge points).
  • the electronic device 100 may determine whether len(a pair of edge points)/len(B1B3) is greater than a preset threshold W10.
  • the aforementioned threshold W10 may be, for example, 0.1, 0.2, and so on.
  • the embodiment of the present application does not limit the value of the threshold W10. If the above-mentioned multiple pairs of edge points all satisfy the requirement that len(a pair of edge points)/len(B1B3) is greater than the threshold W10, the electronic device 100 may determine that the user's left calf and right calf are separated and cannot be brought close together.
  • in this case, the user's leg type is an XO-shaped leg. Otherwise, the electronic device 100 may determine that the user's leg shape is a normal leg shape. Wherein, the greater the average value of len(a pair of edge points)/len(B1B3) over the above-mentioned pairs of edge points, the more serious the user's XO-shaped legs.
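  • the XO-leg test above can be sketched as follows: at several sampled heights along the calves, compare the horizontal gap between the paired left and right inner-calf edge points with len(B1B3). Reading "the multiple pairs satisfy the requirement" as requiring every sampled pair to exceed the threshold is an assumption of this sketch, as are the names below.

```python
def is_xo_leg(pairs, len_b1b3, w10=0.1):
    """pairs: list of ((x_left, y), (x_right, y)) inner-calf edge-point pairs,
    one per sampled row, with both points of a pair at the same y value.
    Returns (is_xo, mean_ratio); a larger mean ratio means a more severe case.
    """
    ratios = [abs(a[0] - b[0]) / len_b1b3 for a, b in pairs]
    is_xo = all(r > w10 for r in ratios)  # every sampled calf gap is too wide
    return is_xo, sum(ratios) / len(ratios)
```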
  • the electronic device 100 can also divide the line segment between the left knee point and the left ankle point, and the line segment between the right knee point and the right ankle point, into more equal parts (such as four equal parts or five equal parts) to select edge points on the contour of the human body, so as to determine whether the user's left calf and right calf are separated and cannot be brought close together.
  • the electronic device 100 may further divide the line segment between the left knee point and the left ankle point, and the line segment between the right knee point and the right ankle point into two equal parts.
  • the electronic device 100 may select, from the edge points on the inner sides of the legs on the human body contour, edge points whose y values in the pixel coordinate system differ from the y value of the bisection point by a preset difference, and use the selected edge points to determine whether the user's left calf and right calf are separated and cannot be brought close together.
  • the embodiment of the present application does not limit the method of selecting the edge points used for judging whether the user's left calf and right calf are separated and cannot be brought close together.
  • the electronic device 100 may also select multiple pairs of edge points from the edge points on the inner sides of the user's knees to determine whether the user's knees are close together. For example, the electronic device 100 may translate the straight line L4 by Δy pixels in the positive direction of the y-axis of the pixel coordinate system. The electronic device 100 may determine the two edge points at which the translated straight line intersects the inner sides of the legs on the human body contour, and determine whether the ratio of the distance between these two edge points to len(B1B3) is greater than the threshold W8.
  • if the ratio is greater than the threshold W8, the electronic device 100 can determine that the user's leg type is an O-shaped leg.
  • the electronic device 100 may also select multiple pairs of edge points from the edge points of the inner ankles of the user's feet to determine whether the inner ankles of the user's feet are close together.
  • the above-mentioned method of selecting multiple pairs of edge points to judge the user's leg shape can reduce the influence of factors such as errors in the portrait segmentation process and the clothes worn by the user on the leg shape evaluation result, and improve the accuracy of the leg shape evaluation result.
  • in the above method, the user's leg shape is judged by the ratio of the above-mentioned distances to len(B1B3). That is, the distances are normalized.
  • the above normalization processing can make the above method for evaluating leg shape applicable to different users. Generally, if the legs of two users are O-shaped legs of the same degree, when the legs are naturally upright and close together, the longer the user's legs are, the greater the distance between the knees.
  • likewise, if the legs of two users are X-shaped legs of the same degree, when the legs are naturally upright and close together, the longer the user's legs are, the greater the distance between the inner ankles of the feet. If the legs of two users are XO-shaped legs of the same degree, when the legs are naturally upright and close together, the longer the user's legs are, the greater the distance between the left calf and the right calf.
  • the electronic device 100 may first determine whether the user has X-shaped legs by determining whether len(B3B4)/len(B1B3) is greater than the threshold W9. If the user does not have X-shaped legs (that is, len(B3B4)/len(B1B3) is less than or equal to the threshold W9), the electronic device 100 may further determine whether the user has O-shaped legs by judging whether len(B1B2)/len(B1B3) is greater than the above-mentioned threshold W8. That is to say, in the process of evaluating the user's leg shape, the embodiment of the present application does not limit the order of judging X-shaped legs, O-shaped legs, and XO-shaped legs.
  • the electronic device 100 may display the schematic diagram of the leg shape evaluation result shown in FIG. 8D in the leg shape evaluation result.
  • the electronic device 100 may display key human body points such as the user's left knee point, right knee point, left ankle point, and right ankle point on the user's human body image.
  • the electronic device 100 may also display lines representing the shape of the user's legs in the user's body image.
  • the above-mentioned lines may be obtained by fitting the key points of the user's legs, or may be obtained by fitting the edge points of the legs on the contour of the human body.
  • the embodiment of the present application does not limit the display form of the schematic diagram of the leg shape evaluation result. More or fewer markers may also be included on the schematic diagram of the leg shape evaluation result.
  • the electronic device 100 can determine the distance between the edge points on the inside of the knees, the distance between the edge points on the inner ankles of the feet, and the distance between the edge points on the inside of the left calf and the right calf according to the key points of the human body and the outline of the human body. And use the above distance to compare the leg shape of the user for body posture evaluation with the leg shape of a human body with normal body posture, and obtain the leg shape evaluation result of the user. Since the positions of the edge points are determined, the distance between the edge points on the contour of the human body where the left leg and the right leg are at the same height can better reflect the user's leg shape. This can reduce the influence of the position fluctuation of the key points of the human body on the leg shape evaluation results when only the human body key points are used for leg shape evaluation, and improve the accuracy of the leg shape evaluation results.
  • the electronic device 100 may use key points on the torso to determine whether the user has scoliosis.
  • the electronic device 100 may determine a line segment L8 between the right shoulder point and the right hip point, and a line segment L9 between the left shoulder point and the left hip point.
  • the electronic device 100 can calculate the length of the line segment L8 as len(L8), and the length of the line segment L9 as len(L9).
  • the electronic device 100 may determine whether min[len(L8), len(L9)]/max[len(L8), len(L9)] is smaller than a preset threshold W11.
  • the aforementioned threshold W11 may be a value less than or equal to 1 and close to 1, for example, 0.95, 0.9 and so on.
  • the embodiment of the present application does not make other limitations on the value of the threshold W11.
  • the aforementioned min[len(L8), len(L9)] may indicate selecting the smallest value among len(L8) and len(L9).
  • the aforementioned max[len(L8), len(L9)] may indicate selecting the largest value among len(L8) and len(L9).
  • if min[len(L8), len(L9)]/max[len(L8), len(L9)] is smaller than the threshold W11, the electronic device 100 may determine that the user has scoliosis. If min[len(L8), len(L9)]/max[len(L8), len(L9)] is greater than or equal to the threshold W11, the electronic device 100 may determine that the user does not have scoliosis.
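  • both scoliosis checks in this description use the same symmetric ratio test min(a, b)/max(a, b) < W11; they differ only in whether a and b are key-point distances (len(L8), len(L9)) or edge-point distances (len(F1F3), len(F2F4), introduced below). A minimal sketch with the example threshold W11 = 0.95; the names are illustrative assumptions.

```python
import math

def seg_len(p, q):
    """Euclidean length between two (x, y) pixel positions."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def has_scoliosis(right_len, left_len, w11=0.95):
    """right_len, left_len: e.g. len(L8) = seg_len(right_shoulder, right_hip)
    and len(L9) = seg_len(left_shoulder, left_hip)."""
    return min(right_len, left_len) / max(right_len, left_len) < w11
```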
  • the electronic device 100 may determine whether the user has scoliosis by using the key points on the torso to determine the edge points on the outline of the human body.
  • the electronic device 100 may determine a straight line L10 passing through the right shoulder point and perpendicular to the x-axis of the pixel coordinate system.
  • the intersection of the straight line L10 and the right shoulder on the contour of the human body is F1.
  • the electronic device 100 may determine a straight line L11 passing through the left shoulder point and perpendicular to the x-axis of the pixel coordinate system.
  • the intersection of the straight line L11 and the left shoulder on the contour of the human body is F2.
  • the electronic device 100 may determine a straight line L12 passing through the right hip point and perpendicular to the y-axis of the pixel coordinate system.
  • the intersection point of the straight line L12 and the right waist on the contour of the human body is F3.
  • the intersection point F3 is an edge point on the straight line L12 whose x value in the pixel coordinate system is smaller than the x value of the right hip point and which is closest to the right hip point.
  • the electronic device 100 may determine a straight line L13 passing through the left hip point and perpendicular to the y-axis of the pixel coordinate system.
  • the intersection point of the straight line L13 and the left waist on the human body contour is F4.
  • the intersection point F4 is an edge point on the straight line L13 whose x value in the pixel coordinate system is greater than that of the left hip point and which is closest to the left hip point.
  • the electronic device 100 may calculate the length of the line segment F1F3 between the intersection point F1 and the intersection point F3 to obtain len(F1F3).
  • the electronic device 100 may calculate the length of the line segment F2F4 between the intersection point F2 and the intersection point F4 to obtain len(F2F4).
  • the electronic device 100 may determine whether min[len(F1F3), len(F2F4)]/max[len(F1F3), len(F2F4)] is smaller than the preset threshold W11. If min[len(F1F3), len(F2F4)]/max[len(F1F3), len(F2F4)] is smaller than the threshold W11, the electronic device 100 may determine that the user has scoliosis.
  • if min[len(F1F3), len(F2F4)]/max[len(F1F3), len(F2F4)] is greater than or equal to the threshold W11, the electronic device 100 may determine that the user does not have scoliosis.
  • the electronic device 100 may determine the schematic diagram of scoliosis assessment results according to the key points of the human body and the outline of the human body.
  • FIGS. 8G to 8I exemplarily show the process of the electronic device 100 determining the scoliosis evaluation result.
  • the electronic device 100 may divide the line segment between the neck point and the abdomen point into three equal parts, and determine, among the trisection points of the line segment, the trisection point E1 closest to the abdomen point.
  • the electronic device 100 may determine a straight line L14 passing through the trisection point E1 and perpendicular to the y-axis of the pixel coordinate system.
  • the intersection point between the straight line L14 and the left waist on the human body contour is E2.
  • the intersection point E2 is an edge point on the straight line L14 whose x value in the pixel coordinate system is greater than the x value of the trisection point E1 and which is closest to the trisection point E1.
  • the electronic device 100 may determine a straight line L15 passing through the midpoint of the left and right hips and perpendicular to the y-axis of the pixel coordinate system.
  • the intersection of the straight line L15 and the left waist on the human body contour is E3.
  • the intersection point E3 is an edge point on the straight line L15 whose x value in the pixel coordinate system is greater than the x value of the midpoint of the left and right hips, and which is closest to the midpoint of the left and right hips.
  • the electronic device 100 may translate the line segment E2E3 between the intersection point E2 and the intersection point E3 to the left to a position where the intersection point E3 coincides with the midpoint of the left and right hips.
  • the position of the intersection point E2 after the above translation is the point E4 on the straight line L14.
  • the electronic device 100 can obtain the line segment L16 between the neck point and the point E4 shown in FIG. 8H, and the line segment L17 between the point E4 and the midpoint of the left and right hips.
  • the electronic device 100 may use the pixels on the line segment L16 and the pixels on the line segment L17 to perform curve fitting to obtain a curve segment L18 between the neck point and the midpoint of the left and right hips shown in FIG. 8I.
  • the above-mentioned curve fitting method may include least square method curve fitting, cubic curve fitting and so on.
  • the embodiment of the present application does not limit the above curve fitting method.
  • the above-mentioned curve segment L18 may represent the shape of the user's spine. Wherein, the curvature of the curve segment L18 may indicate the severity of scoliosis of the user.
  • the electronic device 100 may calculate the maximum curvature of the curve segment L18. The greater the maximum curvature of the curve segment L18 is, the more severe the user's scoliosis is.
  • the embodiment of the present application does not limit the method for determining the severity of scoliosis of the user by the electronic device 100 .
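  • a hedged sketch of the fitting step described above: sample pixels along the line segments L16 and L17, fit a cubic x = f(y) by least squares, and take the maximum curvature κ = |f''| / (1 + f'²)^(3/2) as the severity score. NumPy polynomial fitting is one possible choice; as stated above, the description does not prescribe a specific fitting method, so everything below is an assumption.

```python
import numpy as np

def spine_severity(points):
    """points: (N, 2) array of (x, y) pixels sampled on segments L16 and L17,
    from the neck point down to the midpoint of the hips.
    Returns the maximum curvature of a least-squares cubic fit x = f(y)."""
    pts = np.asarray(points, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    coeffs = np.polyfit(y, x, deg=3)   # least-squares cubic fit
    d1 = np.polyder(coeffs)            # f'(y)
    d2 = np.polyder(coeffs, 2)         # f''(y)
    ys = np.linspace(y.min(), y.max(), 200)
    kappa = np.abs(np.polyval(d2, ys)) / (1 + np.polyval(d1, ys) ** 2) ** 1.5
    return float(kappa.max())
```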
  • the electronic device 100 may display the scoliosis assessment result schematic diagram shown in FIG. 8I in the scoliosis assessment result.
  • the electronic device 100 may display the above fitting curve segment L18 on the user's human body image. In this way, the user can visually check whether he has scoliosis and the severity of the scoliosis through the curve segment L18.
  • the electronic device 100 can use the key points of the human body on the torso of the user and the edge points determined on the outline of the human body according to the key points of the human body to fit the shape of the spine of the user. In this way, the electronic device 100 can display a curve representing the shape of the user's spine in the scoliosis evaluation result diagram, so that the user can easily know whether he has scoliosis and the severity of the scoliosis.
  • the electronic device 100 may also collect a side view of the user, and use the side view as a posture evaluation image to evaluate the user's posture.
  • the posture assessment posture required to be maintained by the user for collecting the front view of the user may be different from the posture assessment posture required for the user to maintain for the collection of the side view of the user.
  • the implementation method of collecting a side view of the user by the electronic device 100 and using the side view to perform posture assessment will be specifically introduced below.
  • the electronic device 100 may guide the user to complete the action corresponding to the body posture evaluation posture that the user is required to maintain when capturing the side view of the user.
  • in the following, the posture evaluation posture is described by taking as an example a posture of standing naturally with the side of the body facing the camera.
  • the electronic device 100 can continuously collect images of the user, and judge whether the user's posture is the same as the posture evaluation posture according to the user's key points and contours of the human body in the image.
  • the electronic device 100 may prompt the user to adjust the posture until the user's posture is the same as the posture evaluation posture, so as to obtain the posture evaluation image.
  • the electronic device 100 can identify key points of the user's human body in the image.
  • the electronic device 100 may determine whether the user's posture is a standing posture according to the key points of the human body.
  • the electronic device 100 may also use the left hip point and the right hip point to determine whether the side of the user's body faces the camera. Understandably, if the side of the user's body faces the camera, the distance between the user's left hip point and right hip point in the image is relatively small.
  • the electronic device 100 may determine the distance between the left hip point and the right hip point according to the positions of the left hip point and the right hip point in the pixel coordinate system.
  • if the distance between the left hip point and the right hip point is smaller than a preset threshold W12, the electronic device 100 may determine that the side of the user's body faces the camera. Then, the electronic device 100 may use the image in which the distance between the left hip point and the right hip point is determined to be smaller than the threshold W12 as the posture evaluation image. Or, after judging that the distance between the left hip point and the right hip point is smaller than the threshold W12, the electronic device 100 may prompt the user to keep the posture unchanged, collect an image, and use the image as the posture evaluation image.
  • the aforementioned threshold W12 may be, for example, 5 pixels, 6 pixels, and so on. The embodiment of the present application does not limit the value of the threshold W12.
  • if the distance between the left hip point and the right hip point is greater than or equal to the threshold W12, the electronic device 100 may determine that the user's posture is not the preset posture evaluation posture, and prompt the user to adjust the posture according to the difference between the user's posture and the posture evaluation posture. In a possible implementation manner, the electronic device 100 may determine the x values of the left hip point and the right hip point in the pixel coordinate system. If the x value of the right hip point is smaller than the x value of the left hip point, the electronic device 100 may prompt the user to turn right.
  • the distance between the left hip point and the right hip point is greater than or equal to the threshold W12, and the x value of the right hip point is smaller than the x value of the left hip point, which means that the user turns to the right at a certain angle from the front towards the camera.
  • the angle of the user's turn to the right is not enough, and the user's body is not completely sideways facing the camera, so the user needs to continue to turn to the right.
  • if the x value of the right hip point is greater than the x value of the left hip point, the electronic device 100 may prompt the user to turn left.
  • the electronic device 100 may also judge whether the user's body is facing the camera from the side using other methods.
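  • a sketch of this side-facing check and the resulting turn prompt, assuming key points as (x, y) pixel tuples and the example threshold W12 = 5 pixels; the prompt strings are illustrative.

```python
import math

def side_view_prompt(left_hip, right_hip, w12_px=5.0):
    """Decide whether the user is sideways to the camera; if not, choose a
    turn direction following the x-value comparison described above."""
    if math.dist(left_hip, right_hip) < w12_px:
        return "hold this posture"  # hip points nearly coincide: body is sideways
    return "please turn right" if right_hip[0] < left_hip[0] else "please turn left"
```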
  • the electronic device 100 may display the user interface 1010 shown in FIG. 10A during the process of guiding the user to complete the action corresponding to the posture assessment gesture.
  • the user interface 1010 may include a display area 1011 and a display area 1012. Wherein:
  • the display area 1011 can be used to display an example diagram 1011A of posture assessment poses, and action prompts 1011B. It can be seen from the example diagram 1011A of posture evaluation posture that the posture evaluation posture is a posture in which the side of the body faces the camera and stands naturally.
  • the content contained in the action prompt 1011B may be: please stand sideways to the screen. In this way, the user can adjust his posture according to the example diagram 1011A and the action prompt 1011B, so as to achieve a preset posture evaluation posture.
  • the display area 1012 can be used to display the image collected by the electronic device 100 through the camera 193 .
  • the image may include the user undergoing a posture assessment.
  • the user can view his own gestures according to the images in the display area 1012 . In this way, the user can intuitively know the difference between his posture and the posture evaluation posture, so that the posture can be adjusted quickly.
  • in the display area 1012 shown in FIG. 10A, the posture of the user is different from the posture evaluation posture: the user is facing the camera 193 from the front, and is not standing sideways to the screen.
  • the electronic device 100 may judge the difference between the user's posture and the posture evaluation posture according to the embodiment shown in FIG. 9 , and display an action prompt 1011B in the display area 1011 .
  • the electronic device 100 can also play the content in the action prompt 1011B by voice, so as to guide the user to adjust the posture to a posture evaluation posture.
  • the user can adjust the posture according to the action prompt 1011B shown in FIG. 10A .
  • the electronic device 100 may display the user interface 1010 shown in FIG. 10B . From the display area 1012 shown in FIG. 10B , it can be seen that the user faces the camera from the front side shown in FIG. 10A and turns to the right at a certain angle.
  • the electronic device 100 may analyze the image of the user's posture in the display area 1012 shown in FIG. 10B according to the embodiment shown in FIG. 9 above, and determine that the angle of the user's right rotation is not enough, and the user's posture is not a posture evaluation posture.
  • the electronic device 100 may display an action prompt 1011C shown in FIG. 10B .
  • the content of the action prompt 1011C may be: please turn right.
  • the electronic device 100 can also play the content of the action prompt 1011C by voice.
  • the user can adjust the posture according to the action prompt 1011C.
  • the electronic device 100 may display the user interface 1010 shown in FIG. 10C . It can be seen from the display area 1012 shown in FIG. 10C that the user has turned to the right at a certain angle based on the gesture in the display area 1012 shown in FIG. 10B .
  • the electronic device 100 may analyze the image of the posture of the user in the display area 1012 shown in FIG. 10C according to the above embodiment shown in FIG. 9 , and determine that the posture of the user is a posture evaluation posture.
  • the electronic device 100 may display the action prompt 1011D shown in FIG. 10C to prompt the user to maintain the posture assessment posture.
  • the content of the action prompt 1011D may be: please keep this posture, and the photo will be taken soon.
  • the electronic device 100 can also play the content of the action prompt 1011D by voice. After the user is prompted to maintain the posture assessment posture, the electronic device 100 may collect an image, and use the collected image as a posture assessment image.
  • the above-mentioned user interface 1010 shown in FIGS. 10A to 10C is only for exemplary illustration, and should not be construed as limiting the present application.
  • the content of the electronic device 100 prompting the user to adjust the posture may vary with the change of the posture of the user.
  • the electronic device 100 may prompt the user for the difference between the user's posture and the posture evaluation posture. This allows the user to understand which part of the user is different from the posture assessment posture, so as to better guide the user to complete the action corresponding to the posture assessment posture.
  • the electronic device 100 may use the key points of the human body to determine the edge points of the user's back on the contour of the human body to determine whether the user is hunched.
  • the electronic device 100 may first judge the orientation of the user's body according to the key points of the human body. It can be understood that the user can rotate to the right from the standing posture facing the camera to complete the action corresponding to the posture of standing with the side of the body facing the camera. The user can also rotate to the left from the standing posture facing the camera to complete the action corresponding to the posture of standing with the side of the body facing the camera. That is to say, the user's body may face the negative direction of the x-axis (ie, the left side) in the pixel coordinate system, and the x value of the edge point of the chest area on the human body contour is smaller than the x value of the edge point of the back area.
  • the user's body may also face the positive direction of the x-axis (ie, the right side) in the pixel coordinate system, and the x value of the edge point of the chest area on the human body contour is greater than the x value of the edge point of the back area.
  • the electronic device 100 may use the user's left ankle point and right ankle point to determine the orientation of the user's body. Wherein, the electronic device 100 may compare the y value of the left ankle point and the y value of the right ankle point in the pixel coordinate system. If the y value of the left ankle point is smaller than the y value of the right ankle point, the electronic device 100 may determine that the user's body is facing the negative direction of the x axis in the pixel coordinate system.
  • if the y value of the left ankle point is greater than the y value of the right ankle point, the electronic device 100 may determine that the user's body is facing the positive direction of the x-axis in the pixel coordinate system.
  • the embodiment of the present application does not limit the method for determining the orientation of the user's body by the electronic device 100 .
  • the following description takes the case where the user's body faces the negative direction of the x-axis in the pixel coordinate system as an example.
  • the method for judging whether the user is hunched and the severity of the hunchback is the same when the user's body faces the negative direction of the x-axis in the pixel coordinate system as when the user's body faces the positive direction of the x-axis.
  • therefore, the embodiment of the present application does not separately describe the method of judging whether the user is hunched and the severity of the hunchback when the user's body faces the positive direction of the x-axis in the pixel coordinate system.
  • the electronic device 100 may select a back curve on the outline of the human body. Specifically, the electronic device 100 may determine a straight line L19 passing through the left shoulder point and perpendicular to the y-axis of the pixel coordinate system. The intersection of the straight line L19 and the upper back area of the human body contour is G1. That is, the intersection point G1 is an edge point on the straight line L19 whose x value in the pixel coordinate system is greater than the x value of the left shoulder point and which is closest to the left shoulder point. The electronic device 100 may divide the line segment between the neck point and the abdomen point into three equal parts, and determine the trisection point G2 closest to the abdomen point.
  • the electronic device 100 may determine a straight line L20 passing through the trisection point G2 and perpendicular to the y-axis of the pixel coordinate system.
  • the intersection of the straight line L20 and the upper back area of the human body contour is G3. That is, the intersection point G3 is an edge point on the straight line L20 whose x value in the pixel coordinate system is greater than the x value of the trisection point G2 and which is closest to the trisection point G2.
  • the electronic device 100 may select a curve line segment between the intersection point G1 and the intersection point G3 of the back area on the contour of the human body to obtain the back curve segment G1G3.
  • the back curve segment G1G3 includes edge points whose y value in the pixel coordinate system is greater than or equal to the y value of G3, and whose y value is less than or equal to the y value of G1.
  • the x values of the edge points on the back curve segment G1G3 in the pixel coordinate system are all greater than the x values of the left shoulder points.
  • the electronic device 100 may calculate the area of a circle whose diameter is the straight line segment between the intersection point G1 and the intersection point G3 .
  • the area of the circle is S1.
  • the electronic device 100 may also calculate the area of the closed area formed by the straight line segment between the intersection point G1 and the intersection point G3 and the back curve segment G1G3.
  • the area of the closed region is S2.
  • the electronic device 100 can judge whether the user is hunched according to the ratio of S2 to S1. If S2/S1 is smaller than the preset threshold W13, the electronic device 100 may determine that the user is not hunched.
  • the aforementioned threshold W13 may be, for example, 0.05, 0.1, and so on.
  • the embodiment of the present application does not limit the value of the threshold W13. If S2/S1 is greater than or equal to the threshold W13, the electronic device 100 may determine that the user is hunched over. Among them, the larger S2/S1 is, the more serious the degree of hunchback of the user is.
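  • the area test above compares S2, the region enclosed by the chord G1G3 and the back curve, with S1, the area of the circle whose diameter is that chord. S2 can be computed with the shoelace formula over the polygon formed by the sampled curve points, closed by the chord. This sketch and its names are assumptions, with W13 = 0.05 as in the example above.

```python
import math

def hunchback_area_ratio(curve_points, w13=0.05):
    """curve_points: ordered list of (x, y) pixels on the back curve from G1
    to G3 (inclusive). Returns (is_hunched, s2_over_s1)."""
    g1, g3 = curve_points[0], curve_points[-1]
    # S1: area of the circle whose diameter is the straight segment G1G3.
    s1 = math.pi * (math.dist(g1, g3) / 2) ** 2
    # S2: shoelace area; closing the polygon from G3 back to G1 is the chord.
    s2 = 0.0
    pts = list(curve_points)
    for (x0, y0), (x1, y1) in zip(pts, pts[1:] + pts[:1]):
        s2 += x0 * y1 - x1 * y0
    s2 = abs(s2) / 2
    return s2 / s1 >= w13, s2 / s1
```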
  • the electronic device 100 may also use edge points on the back curve segment G1G3 to perform smoothing processing on the back curve segment G1G3.
  • the embodiment of the present application does not limit the specific method of the foregoing smoothing processing.
  • the electronic device 100 may calculate the curvature of the smoothed curve segment, and use the maximum curvature as the hunchback index.
  • the electronic device 100 may select a pixel at every preset interval (such as 4 pixels, 5 pixels, etc.) on the smoothed curve segment to calculate the curvature of the curve segment at the position of this pixel. This can reduce the calculation amount of the electronic device 100 and improve the efficiency of judging whether the user is hunched over.
  • if the hunchback index is smaller than a preset hunchback index threshold, the electronic device 100 may determine that the user does not have a hunchback. If the hunchback index is greater than or equal to the preset hunchback index threshold, the electronic device 100 may determine that the user is hunchbacked. The larger the hunchback index, the more severe the user's hunchback.
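  • the curvature-based variant can be sketched as: smooth the sampled back curve, estimate the discrete curvature every few points (the description suggests every 4 or 5 pixels to save computation), and take the maximum as the hunchback index. The moving-average smoothing and the circumradius-based curvature estimate are assumptions of this sketch.

```python
import math

def smooth(points, window=5):
    """Simple moving-average smoothing of an ordered pixel curve."""
    half = window // 2
    out = []
    for j in range(len(points)):
        seg = points[max(0, j - half): j + half + 1]
        out.append((sum(p[0] for p in seg) / len(seg),
                    sum(p[1] for p in seg) / len(seg)))
    return out

def hunchback_index(points, step=5):
    """Maximum discrete curvature along the smoothed back curve G1G3,
    sampled every `step` points."""
    sm = smooth(points)
    best = 0.0
    for j in range(step, len(sm) - step, step):
        p, q, r = sm[j - step], sm[j], sm[j + step]
        a, b, c = math.dist(q, r), math.dist(p, r), math.dist(p, q)
        # Curvature of the circle through p, q, r: 4 * area / (a * b * c).
        cross = abs((q[0] - p[0]) * (r[1] - p[1]) - (r[0] - p[0]) * (q[1] - p[1]))
        if a * b * c > 0:
            best = max(best, 2 * cross / (a * b * c))
    return best
```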
  • the embodiment of the present application does not limit the method by which the electronic device 100 selects the back curve.
  • the electronic device 100 may also select the back curve according to the bisection point of the line segment between the neck point and the abdomen point, or according to the division points obtained by dividing the line segment between the neck point and the abdomen point into four or more equal parts.
  • the electronic device 100 may display a human body image in which the user's side faces the camera in the evaluation result of the hunched back, and mark the back curve segment G1G3 shown in FIG. 11A on the human body image.
  • the degree of curvature of the back curve segment G1G3 can reflect the severity of the user's hunchback. In this way, the user can intuitively know whether he has a hunchback and the severity of the hunchback through the back curve segment G1G3.
  • the hunchback is mainly manifested in the user's back bulge.
  • the electronic device 100 may first distinguish the chest area and the back area in the human body contour, and then select edge points in the back area to determine whether the user is hunchbacked. This can avoid confusing the edge points of the chest area and the back area during the judgment process, and improve the accuracy of the hunchback detection result.
  • the position of the edge points of the back area on the human body contour is determined, and the use of the edge points of the back area can more accurately determine whether the user has a hunchback and the severity of the hunchback.
  • the electronic device 100 may provide an option for posture assessment.
  • options for this posture assessment may include: high and low shoulders, leg shape, scoliosis, and hunchback.
  • the electronic device 100 may determine whether it is only necessary to take the posture assessment image of the front of the human body, or only the posture assessment image of the side of the human body, or whether it is necessary to capture the posture assessment image of the front of the human body and the posture assessment image of the side of the human body according to the posture assessment option selected by the user.
  • the electronic device 100 can guide the user to complete the actions corresponding to posture assessment posture 1 (such as facing the camera, arms straight, keeping a certain distance between the arms and the sides of the body, and legs naturally upright and close together) according to the embodiments shown in the aforementioned figures, and collect a posture assessment image of the front of the human body.
  • the electronic device 100 may perform posture assessment on the user according to the posture assessment image of the front of the human body, and obtain one or more of the assessment results of high and low shoulders, leg shape, and scoliosis.
  • the electronic device 100 can guide the user to complete the actions corresponding to posture assessment posture 2 (such as the side of the body facing the camera and standing naturally) according to the embodiments shown in the aforementioned figures, and collect a posture assessment image of the side of the human body.
  • the electronic device 100 may perform posture assessment on the user according to the posture assessment image of the side of the human body, and obtain a hunchback assessment result.
  • the electronic device 100 may guide the user to complete the actions corresponding to the posture assessment gesture 1 and posture assessment gesture 2 above, so as to obtain the posture assessment image of the front of the human body and the posture assessment image of the side of the human body. Then, the electronic device 100 can perform body posture assessment on the user, and obtain assessment results of high and low shoulders, leg shape, scoliosis, and hunchback.
  • the electronic device 100 may guide the user to complete the action corresponding to the posture assessment gesture 3 .
  • the above posture assessment posture 3 may be a posture in which the arms are kept at a certain distance from the sides of the body, and the legs are naturally upright and close together.
  • the posture assessment posture 3 above can also be a posture in which the arms are straightened, the arms are kept at a certain distance from the sides of the body, and the legs are naturally upright and close together.
  • the electronic device 100 can take images from multiple angles (such as a front view, a side view, etc.) while the user maintains the body posture assessment pose 3 .
  • the electronic device 100 may first guide the user to complete the actions corresponding to the posture assessment posture 1 above, and collect a frontal view of the user maintaining the posture assessment posture 3 . Then, the electronic device 100 may prompt the user to maintain the posture assessment posture 3 and rotate 90° to the left (or right), and obtain a side view of the user maintaining the posture assessment posture 3 . The electronic device 100 may also prompt the user to maintain the posture assessment posture 3 to rotate at an angle other than 90° (such as 45°, etc.), and collect images from other angles where the user maintains the posture assessment posture 3 .
  • the electronic device 100 can perform 3D modeling by using the above-mentioned images from multiple angles to obtain a 3D human body model whose pose is posture evaluation pose 3 .
  • the electronic device 100 can use the above-mentioned front view and side view of the 3D human body model to perform posture assessment, and obtain the assessment results of high and low shoulders, leg shape, scoliosis, and hunchback.
  • for the implementation method of posture assessment using the front view and side view of the 3D human body model, reference may be made to the implementation method of posture assessment using the posture assessment image of the front of the human body and the posture assessment image of the side of the human body in the foregoing embodiments. Details are not described here again.
  • the human body contour determined by using the 3D human body model is usually smoother than the human body contour determined by directly segmenting the image obtained by the user, which can better reflect the user's posture and improve the accuracy of body posture assessment.
  • FIG. 12A to FIG. 12E exemplarily show user interface diagrams of some posture assessment results provided by the embodiment of the present application.
  • the electronic device 100 may collect images of the user and perform posture assessment.
  • the electronic device 100 may store body posture evaluation results. In this way, the user can view the results of this and past body assessments.
  • the electronic device 100 may display a user interface 1210 .
  • User interface 1210 may be used for a user to view the results of one or more posture assessments.
  • the user interface 1210 may include a title 1211, a body posture analysis report option 1212, a body posture analysis report option 1213, a body posture analysis report option 1214, and a human body diagram 1215. Wherein:
  • Title 1211 may be used to indicate what user interface 1210 contains.
  • the title 1211 may be a text identifier "Historical Report". That is, the user can view, through the user interface 1210, the results of the posture assessments that have been performed.
  • the body posture analysis report options 1212-1214 may indicate the body posture evaluation results obtained by the user at different times.
  • the body posture analysis report option 1212 may include the time when the body posture evaluation result corresponding to this option was obtained (eg, 10:22 on December 12, 2021). This time is also the time when the user performs posture assessment.
  • the body posture analysis report option 1212 can be used to trigger the electronic device 100 to display the body posture evaluation result corresponding to this option.
  • for the body posture analysis report option 1213 and the body posture analysis report option 1214, please refer to the introduction of the body posture analysis report option 1212.
  • the human body diagram 1215 can be used to display the user's human body image.
  • the human body image may be a human body image in an image collected by the electronic device 100 .
  • the human body image may be a user's human body outline.
  • the human body image may be a 3D human body model obtained by performing 3D modeling on the user.
  • the embodiment of the present application does not limit the display form of the human body image.
  • the human body image may be marked with a body posture evaluation result corresponding to the selected body posture analysis report option. For example, the Posture Analysis Report option 1212 is selected.
  • suppose, for example, that the user has high and low shoulders, hunchback, and scoliosis, but a normal leg shape.
  • then, on the human body image, the shoulders are marked with "high and low shoulders", the corresponding part of the back is marked with "hunchback", the trunk is marked with "scoliosis", and the legs are marked with "normal leg shape". In this way, the user can quickly learn, on the user interface 1210, the body posture evaluation results obtained from the body posture evaluations performed by the user.
  • the electronic device 100 may display a user interface 1220 shown in FIG. 12B .
  • User interface 1220 may include a title 1221, a time 1222, a high and low shoulders option 1223, a scoliosis option 1224, a leg type option 1225, a hunchback option 1226, a posture assessment result schematic diagram 1227, a shoulder schematic diagram 1223A, a high and low shoulder result analysis display area 1223B, and a course recommendation display area 1223C. Wherein:
  • Title 1221 may be used to indicate what user interface 1220 contains.
  • the title 1221 may be a text identifier "posture assessment report”.
  • Time 1222 may be used to indicate when the posture assessment report presented by the user interface 1220 was obtained. For example, 10:22 on December 12, 2021.
  • the posture assessment results diagram 1227 may be used to indicate posture problems of the user.
  • the body posture evaluation result diagram 1227 may include an image of a human body facing the camera.
  • the human body image may contain marks (such as dots, lines, etc.) indicating posture.
  • the electronic device 100 can display the posture evaluation result diagram 1227 shown in FIG. 12B.
  • the human body image of the posture evaluation result diagram 1227 may include marks indicating whether the user has high and low shoulders, marks indicating whether the user has scoliosis, and marks indicating the shape of the user's legs. For these marks, reference may be made to the embodiments shown in FIG. 8B, FIG. 8D, and FIG. 8I. Details are not described here again.
  • the high and low shoulders option 1223, the scoliosis option 1224, the leg shape option 1225, and the hunchback option 1226 can be used for the user to view the high and low shoulder evaluation results, scoliosis evaluation results, leg shape evaluation results, and hunchback evaluation results obtained from the user's posture evaluation.
  • the electronic device 100 can display the above-mentioned shoulder schematic diagram 1223A, high and low shoulder result analysis display area 1223B, and course recommendation display area 1223C on the user interface 1220 .
  • the shoulder schematic diagram 1223A may be a partial enlarged diagram of the shoulder in the above-mentioned posture assessment result schematic diagram 1227.
  • the shoulder schematic diagram 1223A can facilitate the user to check the situation of his own shoulder more clearly.
  • the high and low shoulder result analysis display area 1223B can display the high and low shoulder result analysis.
  • the result analysis of the high and low shoulders may include which shoulder is higher and the height difference (eg, the right shoulder is higher than the left shoulder: 0.9 cm), the severity of the high and low shoulders (eg, mild) and so on.
  • the embodiment of the present application does not limit the content of the high and low shoulder result analysis.
  • the electronic device 100 may also recommend courses for the user to improve the high and low shoulders.
  • the course recommendation display area 1223C may include courses recommended by the electronic device 100 described above.
  • the electronic device 100 may display a trunk diagram 1224A, a scoliosis result analysis display area 1224B, and a course recommendation display area 1224C on the user interface 1220 .
  • the torso schematic diagram 1224A may be a partial enlarged diagram of the chest and abdomen in the aforementioned posture assessment result schematic diagram 1227 .
  • the torso diagram 1224A can facilitate the user to see more clearly whether he has a problem of scoliosis.
  • a scoliosis result analysis may be displayed in the scoliosis result analysis display area 1224B.
  • the scoliosis result analysis may include the severity of the scoliosis (eg, moderate) and the like.
  • the electronic device 100 may also recommend courses for improving scoliosis to the user.
  • the course recommendation display area 1224C may include courses recommended by the electronic device 100 described above.
  • the electronic device 100 may display a leg schematic diagram 1225A and a leg shape result analysis display area 1225B on the user interface 1220 .
  • the leg schematic diagram 1225A may be a partial enlarged diagram of the legs in the above-mentioned posture assessment result schematic diagram 1227.
  • the leg schematic diagram 1225A can facilitate users to view the shape of their own legs more clearly.
  • Leg shape result analysis display area 1225B may display leg shape result analysis.
  • the leg type result analysis may include the user's leg type (such as normal leg type, X-type leg, O-type leg, XO-type leg), the severity when the user's leg type is X-type leg or O-type leg or XO-type leg, and the like.
  • the electronic device 100 may also recommend courses for the user to improve X-shaped legs, O-shaped legs, or XO-shaped legs.
  • the electronic device 100 may display on the user interface 1220 a schematic diagram of posture assessment results 1228 , a schematic diagram of the back 1226A, a display area 1226B for analysis of hunchback results, and a display area 1226C for course recommendations.
  • the posture evaluation result diagram 1228 may be used to indicate the user's posture problems.
  • the body posture assessment result diagram 1228 may include a human body image of the user's side facing the camera.
  • the human body image may contain marks (such as dots, lines, etc.) indicating posture.
  • the electronic device 100 may display a schematic diagram of the posture evaluation result 1228 shown in FIG. 12E .
  • the body image of the body posture assessment result diagram 1228 may include a curve indicating the curvature of the user's back.
  • the schematic diagram of the back 1226A may be a partially enlarged diagram of the back in the schematic diagram 1228 of the above-mentioned body posture assessment results.
  • the back diagram 1226A can facilitate the user to view the shape of his back more clearly from the side of the body.
  • the hunchback result analysis display area 1226B may display a hunchback result analysis.
  • the hunchback result analysis may include the hunchback index, the severity of the hunchback (eg, mild), and the like.
  • the electronic device 100 may also recommend a course for the user to improve the hunchback.
  • the course recommendation display area 1226C may include courses recommended by the electronic device 100 described above.
  • the user interfaces shown in FIG. 12A to FIG. 12E are only exemplary introductions, and should not be construed as limiting the present application.
  • the electronic device 100 can display the results of one or more posture assessments performed by the user, and recommend courses for improving posture problems for the user according to the corresponding posture problems. In this way, the user can know his own posture according to the result of the posture assessment performed by the electronic device 100 , and timely adjust his posture according to the courses recommended by the electronic device 100 , thereby improving his health level.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Human Computer Interaction (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Image Analysis (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present application provides an image-based posture judgment method and an electronic device. The electronic device may determine human body key points and a human body contour of a first object in an image. Combining the human body key points and the human body contour, the electronic device may judge whether the user's posture is a preset posture. The above method can improve the accuracy of posture judgment.

Description

Image-based posture judgment method and electronic device
This application claims priority to the Chinese patent application No. 202210073959.2, entitled "Image-based posture judgment method and electronic device" and filed with the Chinese Patent Office on January 21, 2022, the entire contents of which are incorporated herein by reference.
Technical Field
The present application relates to the field of terminal technologies, and in particular, to an image-based posture judgment method and an electronic device.
Background
In recent years, due to the needs of work and study, more and more office workers and students tend to sit for long periods and lack exercise. This easily causes tension in the muscles and other tissues of specific body parts and reduced joint mobility, which in turn leads to posture problems such as high and low shoulders, X-shaped and/or O-shaped legs, hunchback, and scoliosis. Poor posture not only affects personal image, but also affects physical health. Judging the user's posture can help the user learn whether he or she has posture problems and make timely adjustments.
Summary
The present application provides an image-based posture judgment method and an electronic device. The method can judge the user's posture by identifying the user's human body key points and human body contour in an image, thereby improving the accuracy of posture judgment.
According to a first aspect, the present application provides an image-based posture judgment method. The method is applicable to an electronic device. The electronic device contains a camera. The electronic device may collect a first image through the camera. The electronic device may recognize a first object according to the first image, and determine a first group of human body key points of the first object. The electronic device may obtain a second image according to the first image and the first object, where the pixel values of the pixels in the region where the first object is located in the second image are different from the pixel values of the other pixels in the second image. According to the first group of human body key points and the second image, the electronic device may judge whether the posture of the first object is a first posture.
It can be seen that the electronic device may perform binarization processing on the above first image to obtain the second image, distinguishing the region where the first object is located in the first image from the region that does not contain the first object. The above second image may reflect the contour of the first object. When the first object maintains different postures, the contour of the first object is also different. Compared with judging the user's posture only according to human body key points, combining the above first group of human body key points and the above second image to judge the user's posture can improve the accuracy of posture judgment.
With reference to the first aspect, in some embodiments, the specific method of judging, according to the first group of human body key points and the second image, whether the posture of the first object is the first posture may be as follows: the electronic device may determine a first human body contour according to the second image, where the first human body contour includes the pixels on the second image at the boundary between pixels having a first pixel value and pixels having a second pixel value; the pixels having the first pixel value in the second image are the pixels of the region where the first object is located in the second image, and the pixels having the second pixel value in the second image are the other pixels in the second image. According to the first group of human body key points and the first human body contour, the electronic device may judge whether the posture of the first object is the first posture.
It can be seen that the electronic device can determine the first human body contour from the above second image, and judge whether the user's posture is the first posture by combining the first group of human body key points and the first human body contour. The position of each edge point on the first human body contour is uniquely determined. The above method can improve the accuracy of posture judgment.
With reference to the first aspect, in some embodiments, the method of judging, according to the first group of human body key points and the first human body contour, whether the posture of the first object is the first posture may be as follows: the electronic device may determine a first group of edge points on the first human body contour according to the first group of human body key points, and judge whether the posture of the first object is the first posture according to the first group of edge points.
With reference to the first aspect, in some embodiments, before collecting the above first image through the camera, the electronic device may further display a first interface, where the first interface contains first prompt information, and the first prompt information is used to prompt the first object to maintain the first posture.
With reference to the first aspect, in some embodiments, after judging, according to the first group of human body key points and the second image, whether the posture of the first object is the first posture, and in the case where it is judged that the posture of the first object is not the first posture, the electronic device may further display a second interface, where the second interface contains second prompt information, the second prompt information is determined according to a first difference between the posture of the first object and the first posture, and the second prompt information is used to prompt the first object to eliminate the first difference.
It can be seen from the above embodiments that the electronic device may display relevant prompt information on the interface to prompt the first object to maintain the first posture. When it is judged that the posture of the first object is not the first posture, the electronic device may guide the user to complete the action corresponding to the first posture according to the difference between the posture of the first object and the first posture, so that the first posture is maintained. The above method can help the user maintain the first posture.
Optionally, in addition to displaying relevant prompt information on the interface, the electronic device may also prompt the user to adjust the posture by means such as voice prompts, so as to complete the action corresponding to the first posture.
With reference to the first aspect, in some embodiments, before displaying the first interface, the electronic device may further display a third interface, where the third interface includes a first option. The electronic device receives a first operation on the first option. The above first interface may be displayed by the electronic device based on the first operation.
It can be seen that the electronic device may prompt the user to maintain the first posture according to the user's selection of the above first option.
For example, the above first option may be an option for evaluating one or more posture problems among high and low shoulders, X-shaped legs, O-shaped legs, XO-shaped legs, and scoliosis. In response to the operation on the first option, the electronic device may display the above first interface to prompt the user to maintain the first posture. The first posture may be, for example, a posture of facing the camera (that is, with the front of the body facing the camera), with the arms placed at the sides of the body and kept at a certain distance from the body, and with the legs naturally upright and close together.
For another example, the above first option may be an option for evaluating hunchback. In response to the operation on the first option, the electronic device may display the above first interface to prompt the user to maintain the first posture. The first posture may be, for example, a posture of standing naturally with the side of the body facing the camera.
That is to say, the user may select the items for posture evaluation. The electronic device may instruct the user to complete the corresponding actions according to the user's selection. In this way, when the posture problem that the user selects for evaluation only requires the user to maintain one posture, the user may complete only the action corresponding to this one posture, without completing the actions required for evaluating other posture problems.
结合第一方面,在一些实施例中,在判断出第一对象的姿态为第一姿态的情况下,电子设备可以根据第一组人体关键点和第二图像,判断第一对象是否有第一体态问题。
其中,电子设备可以根据上述第二图像确定出第一对象的第一人体轮廓。根据上述第一组人体关键点和第一人体轮廓,电子设备可以判断第一对象是否有第一体态问题。
在一些实施例中,电子设备可以根据上述第一组人体关键点在上述第一人体轮廓上确定出第一组边缘点,并根据该第一组边缘点来判断第一对象是否有第一体态问题。
可以理解的,第一人体轮廓上边缘点的位置都是确定的。电子设备利用上述第一组边缘点来判断用户的姿态是否为具有体态问题的姿态,可以更准确地判断出用户是否具有体态问题。这可以帮助用户更好地了解自己的体态并及时矫正体态问题。
结合第一方面,在一些实施例中,电子设备还可以显示第四界面,第四界面包含第三提示信息,第三提示信息用于提示第一对象保持第二姿态。
其中,上述第三界面还包括第二选项。在显示第四界面之前,电子设备还接收到对第二选项的第二操作。上述第四界面可以是电子设备基于第二操作显示的。
可选的,在判断出第一对象的姿态为第一姿态的情况下,电子设备可以直接显示上述第四界面。
结合第一方面,在一些实施例中,在显示第四界面之后,电子设备还可以通过摄像头采集第三图像。电子设备可以根据第三图像识别第一对象,并确定第一对象的第二组人体关键点。根据第二组人体关键点,电子设备可以判断第一对象的姿态是否为第二姿态。第二姿态与第一姿态不同。
结合第一方面,在一些实施例中,在判断出第一对象的姿态不为第二姿态的情况下,电子设备可以显示第五界面。上述第五界面可包含第四提示信息。上述第四提示信息可以是根据第一对象的姿态与第二姿态的第二差别确定的。上述第二提示信息用于提示第一对象消除第二差别。
结合第一方面,在一些实施例中,在判断出第一对象的姿态为第二姿态的情况下,电子设备可以根据第三图像和第一对象获得第四图像,其中,第四图像中第一对象所在区域的像素的像素值与第四图像中其它像素的像素值不同。根据第二组人体关键点和第四图像,电子设备可以判断第一对象是否有第二体态问题,第二体态问题与第一体态问题不同。
可以看出,电子设备可以结合上述第二组人体关键点和第四图像来判断第一对象的姿态,从而确定第一对象是否有体态问题。这可以提高体态评估结果的准确性。从而帮助用户更好地了解自己的体态并及时矫正体态问题。
结合第一方面,在一些实施例中,电子设备可以根据上述第四图像确定第二人体轮廓。上述第二人体轮廓包括第四图像上,具有第一像素值的像素和具有第二像素值的像素的交界处的像素。第四图像中具有第一像素值的像素为第四图像中第一对象所在的区域的像素。第四图像中具有第二像素值的像素为第四图像中其它像素(即第四图像中不包含第一对象的区域的像素)。电子设备可以根据第二组人体关键点和上述第二人体轮廓来判断第一对象是否有第二体态问题。
结合第一方面,在一些实施例中,电子设备可以根据第二组人体关键点在第二人体轮廓上确定出第二组边缘点,并根据第二组边缘点判断第一对象是否有第二体态问题。
结合第一方面,在一些实施例中,第一姿态包括:双臂置于身体两侧且与腰部保持的距离在第一距离范围内。第一组人体关键点包括:第一左手腕点和第一右手腕点。上述根据第一组人体关键点和第一人体轮廓,判断第一对象的姿态是否为第一姿态的具体方法可以为:电子设备可以判断第一左手腕点和第一右手腕点所在的第一直线与第一人体轮廓是否有6个交点。其中,上述第一直线与上述第一人体轮廓有6个交点表示所述用户的双臂置于身体两侧且与腰部保持的距离在第一距离范围内。
结合第一方面,在一些实施例中,第一姿态包括:双腿自然直立且并拢。第一组人体关键点包括:第一左髋点、第一左膝点、第一左脚踝点、第一右髋点、第一右膝点、第一右脚踝点。上述根据第一组人体关键点和第一人体轮廓,判断第一对象的姿态是否为第一姿态的具体方法可以为:电子设备可以根据第一左髋点和第一左膝点之间的第一左腿距离、第一左膝点和第一左脚踝点之间的第二左腿距离、第一左髋点和第一左脚踝点之间的第三左腿距离,判断第一左腿距离加第二左腿距离减第三左腿距离得到的第一距离是否小于第一阈值。电子设备可以根据第一右髋点和第一右膝点之间的第一右腿距离、第一右膝点和第一右脚踝点之间的第二右腿距离、第一右髋点和第一右脚踝点之间的第三右腿距离,判断第一右腿距离加第二右腿距离减第三右腿距离得到的第二距离是否小于第一阈值。其中,上述第一距离和所述第二距离均小于所述第一阈值表示所述用户的双腿自然直立。
进一步的,电子设备可以判断第一左膝点和第一右膝点之间的第一线段与第一人体轮廓是否有2个交点。当第一线段与第一人体轮廓有2个交点,电子设备可以判断第一线段与第一人体轮廓的2个交点之间的第三距离是否小于第二阈值。电子设备可以判断第一左脚踝点和第一右脚踝点之间的第二线段与第一人体轮廓是否有2个交点。当第二线段与第一人体轮廓有2个交点,电子设备可以判断第二线段与第一人体轮廓的2个交点之间的第四距离是否小于第三阈值。上述第一线段与第一人体轮廓的交点少于2个,或上述第三距离小于第二阈值,或上述第二线段与第一人体轮廓的交点少于2个,或上述第四距离小于第三阈值,可以表示用户的双腿并拢。
在一些实施例中,当判断出上述第一线段与第一人体轮廓的交点数量少于2个,或者判断出第一线段与第一人体轮廓的交点数量有2个且这2个交点之间的第三距离小于第二阈值,电子设备可以确定出用户的双腿已经并拢。例如,正常腿型、XO型腿、X型腿中任意一种腿型的用户双腿并拢时均满足上述第一线段与第一人体轮廓的交点数量少于2个,或者上述第三距离小于第二阈值的条件。这样,电子设备可以不用再对上述第二线段与第一人体轮廓是否有交点进行判断。这可以提高判断的效率。
当判断出上述第三距离大于或等于第二阈值,电子设备可以利用上述第一左脚踝点和第一右脚踝点来判断用户的双腿是否并拢。例如,腿型为O型腿的用户在双腿并拢时不满足上述第一线段与第一人体轮廓的交点数量少于2个,或者上述第三距离小于第二阈值的条件,但满足上述第二线段与第一人体轮廓的交点少于2个,或者上述第四距离小于第三阈值的条件。
在一些实施例中,电子设备可以先判断上述第二线段与第一人体轮廓的交点是否有2个。在第二线段与第一人体轮廓的交点有2个的情况下,电子设备可以判断这两个交点之间的第四距离是否小于第三阈值。在判断出上述第四距离大于或等于第三阈值的情况下,电子设备可以判断上述第一线段与第一人体轮廓的交点是否有2个。在第一线段与第一人体轮廓的交点有2个的情况下,电子设备可以判断这两个交点之间的第三距离是否小于第二阈值。
在一些实施例中,电子设备还可以根据上述第一组人体关键点来判断用户的身体是否正面朝向摄像头。具体的,第一组人体关键点可包括第一左髋点和第一右髋点。电子设备可以判断第一左髋点和第一右髋点之间的距离是否大于预设的距离阈值。第一左髋点和第一右髋点之间的距离大于预设的距离阈值,可以表示用户的身体正面朝向摄像头。
结合第一方面,在一些实施例中,第一体态问题可以包括以下一项或多项:高低肩、X型腿、O型腿、XO型腿、脊柱侧弯。
结合第一方面,在一些实施例中,第一组人体关键点包括第一左肩点和第一右肩点,第一组边缘点包括第一左肩边缘点和第一右肩边缘点,第一左肩边缘点是根据第一左肩点在第一人体轮廓上的左肩区域确定的,第一右肩边缘点是根据第一右肩点在第一人体轮廓上的右肩区域确定的。
在一种可能的实现方式中,电子设备可以确定上述第一组人体关键点和第一人体轮廓上的边缘点(也即像素)在同一个像素坐标系中的位置。电子设备可以在经过上述第一左肩点且与水平线垂直的直线A上,确定出第一人体轮廓上在像素坐标系的纵轴(即y轴)上的值大于第一左肩点在y轴上的值,且与第一左肩点最接近的一个边缘点。电子设备可以将这一个边缘点确定为第一左肩边缘点。上述直线A的方向可以为y轴的方向。同样的,电子设备在经过上述第一右肩点且与水平线垂直的直线B上,确定出第一人体轮廓上在像素坐标系的y轴上的值大于第一右肩点在y轴上的值,且与第一右肩点最接近的一个边缘点。电子设备可以将这一个边缘点确定为第一右肩边缘点。
上述根据第一组人体关键点和第二图像,判断第一对象是否有第一体态问题的具体方法可以为:确定第一左肩边缘点和第一右肩边缘点所在的直线与水平线的第一夹角,判断第一夹角是否大于第一角度阈值。上述第一夹角大于第一角度阈值可以表示用户有高低肩。
由上述实施例可知,电子设备可以根据人体关键点在人体轮廓的肩部确定边缘点,并利用该边缘点来比较进行体态评估的用户双肩的方向与正常体态的人体双肩的方向,得到用户的高低肩评估结果。由于边缘点的位置是确定的,上述方法可以减少检测到的人体关键点的位置浮动对高低肩评估结果的影响,提高高低肩评估结果的准确率。
结合第一方面,在一些实施例中,第一组人体关键点包括第一左膝点、第一右膝点、第一左脚踝点和第一右脚踝点,第一组边缘点包括第一左膝内边缘点、第一右膝内边缘点、第一左脚内边缘点、第一右脚内边缘点、M对小腿内边缘点;第一左膝内边缘点和第一右膝内边缘点是第一左膝点和第一右膝点之间的线段与第一人体轮廓的交点,第一左脚内边缘点和第一右脚内边缘点是第一左脚踝点和第一右脚踝点之间的线段与第一人体轮廓的交点;M对小腿内边缘点中的一对小腿内边缘点包含位于同一高度的左小腿内边缘点和右小腿内边缘点,左小腿内边缘点是第一人体轮廓上第一左膝内边缘点和第一左脚内边缘点之间的像素,右小腿内边缘点是第一人体轮廓上第一右膝内边缘点和第一右脚内边缘点之间的像素,M为正整数。
上述根据第一组人体关键点和第二图像,判断第一对象是否有第一体态问题的具体方法可以为:确定第一左膝内边缘点和第一左脚内边缘点之间的第五距离,以及第一左膝内边缘点和第一右膝内边缘点之间的第六距离,并判断第六距离比第五距离的第一比值是否大于第四阈值。上述第一比值大于第四阈值可以表示用户为O型腿。
在第一比值小于或等于第四阈值的情况下,确定第一左脚内边缘点和第一右脚内边缘点之间的第七距离,并判断第七距离比第六距离的第二比值是否大于第五阈值。上述第二比值大于第五阈值可以表示用户为X型腿。
在第二比值小于或等于第五阈值的情况下,判断M对小腿内边缘点中任意一对小腿内边缘点之间的距离比第六距离的比值是否大于第六阈值。上述M对小腿内边缘点中任意一对小腿内边缘点之间的距离比第六距离的比值均大于第六阈值,可以表示用户为XO型腿。否则,电子设备可以确定用户的腿型为正常腿型。
可以理解,若上述第一左膝点和第一右膝点之间的线段与上述第一人体轮廓没有交点,那么,上述第六距离的值可以为0。若上述第一左脚踝点和上述第一右脚踝点之间的线段与上述第一人体轮廓没有交点,那么,上述第七距离的值可以为0。
由上述实施例可以看出,在评估用户腿型的过程中,电子设备可以对双膝内侧的边缘点之间的距离、双足内脚踝的边缘点之间的距离、左小腿和右小腿内侧的边缘点之间的距离进行归一化处理。即通过上述距离与第六距离的比值来判断用户的腿型。上述归一化处理可以使得上述评估腿型的方法适用于不同的用户。通常的,若两个用户的腿型为同等程度的O型腿,在双腿自然直立且并拢的情况下,腿越长的用户,双膝之间的距离越大。若两个用户的腿型为同等程度的X型腿,在双腿自然直立且并拢的情况下,腿越长的用户,双足内脚踝之间的距离越大。若两个用户的腿型为同等程度的XO型腿,在双腿自然直立且并拢的情况下,腿越长的用户,左小腿与右小腿之间的距离越大。上述方法可以更加准确地评估用户的腿型。
结合第一方面,在一些实施例中,第一组人体关键点包括第一左肩点、第一右肩点、第一左髋点和第一右髋点,第一组边缘点包括第一左肩边缘点、第一右肩边缘点、第一左髋边缘点和第一右髋边缘点,第一左肩边缘点是根据第一左肩点在第一人体轮廓上的左肩区域确定的,第一右肩边缘点是根据第一右肩点在第一人体轮廓上的右肩区域确定的,第一左髋边缘点是根据第一左髋点在第一人体轮廓上的左腰区域确定的,第一右髋边缘点是根据第一右髋点在第一人体轮廓上的右腰区域确定的。上述根据第一组人体关键点和第二图像,判断第一对象是否有第一体态问题的具体方法可以为:确定第一左肩边缘点和第一左髋边缘点之间的第八距离,确定第一右肩边缘点和第一右髋边缘点之间的第九距离。判断第八距离和第九距离中的较小者比较大者的第三比值是否大于第七阈值。上述第三比值大于第七阈值可以表示用户有脊柱侧弯。
结合第一方面,在一些实施例中,第一体态问题包括:驼背。
结合第一方面,在一些实施例中,第一组边缘点包括组成第一人体轮廓上背部区域的第一曲线段的边缘点。上述根据第一组人体关键点和第二图像,判断第一对象是否有第一体态问题的具体方法可以为:确定第一曲线段的两个端点之间的第一直线段与第一曲线段所构成闭合区域的第一面积,确定以第一直线段为直径的圆的第二面积,判断第一面积比第二面积的第四比值是否大于第八阈值。上述第四比值大于第八阈值可以表示用户驼背。或者,对第一曲线段进行平滑处理,得到第一平滑曲线段,判断第一平滑曲线段的最大曲率是否大于第九阈值。上述第一平滑曲线段的最大曲率大于第九阈值可以表示用户驼背。
结合第一方面,在一些实施例中,在确定上述第一曲线段的过程中,电子设备可以判断第一对象的身体的朝向。例如,电子设备可以根据第一左脚踝点和第一右脚踝点在像素坐标系的y轴上的值的大小来判断第一对象的身体的朝向。其中,若左脚踝点的y值小于右脚踝点的y值,电子设备可以确定用户的身体在像素坐标系中朝向x轴的负方向。若左脚踝点的y值大于右脚踝点的y值,电子设备可以确定用户的身体在像素坐标系中朝向x轴的正方向。进一步的,电子设备可以根据第一组关键点在第一人体轮廓的背部区域确定上述第一曲线段。
由上述实施例可知,驼背主要体现在用户的背部凸起。电子设备可以先区分人体轮廓中的胸部区域和背部区域,进而在背部区域选取边缘点来判断用户是否驼背。这可以避免在判断过程中将胸部区域和背部区域的边缘点混淆,提高驼背检测结果的准确率。并且,人体轮廓上背部区域的边缘点的位置是确定的,利用背部区域的边缘点可以更加准确地判断用户是否驼背以及驼背的严重程度。
第二方面,本申请提供一种电子设备,该电子设备可包括摄像头、存储器和处理器,其中,该摄像头可用于采集图像,该存储器可用于存储计算机程序,该处理器可用于调用该计算机程序,使得该电子设备执行如第一方面中任一可能的实现方法。
第三方面,本申请提供一种计算机可读存储介质,包括指令,当该指令在电子设备上运行,使得该电子设备执行如第一方面中任一可能的实现方法。
第四方面,本申请提供一种计算机程序产品,该计算机程序产品可包含计算机指令,当该计算机指令在电子设备上运行,使得该电子设备执行如第一方面中任一可能的实现方法。
第五方面,本申请提供一种芯片,该芯片应用于电子设备,该芯片包括一个或多个处理器,该处理器用于调用计算机指令以使得该电子设备执行如第一方面中任一可能的实现方法。
可以理解地,上述第二方面提供的电子设备、第三方面提供的计算机可读存储介质、第四方面提供的计算机程序产品、第五方面提供的芯片均用于执行本申请实施例所提供的方法。因此,其所能达到的有益效果可参考对应方法中的有益效果,此处不再赘述。
附图说明
图1是本申请实施例提供的一种不同腿型的示意图;
图2是本申请实施例提供的一种人体关键点的示意图;
图3A和图3B是本申请实施例提供的一种判断高低肩的场景图;
图4是本申请实施例提供的一种电子设备100的结构示意图;
图5是本申请实施例提供的一种电子设备100的软件结构框图;
图6A是本申请实施例提供的一种电子设备100采集体态评估图像的方法流程图;
图6B是本申请实施例提供的一种电子设备100进行图像处理的示意图;
图6C是本申请实施例提供的一种二值化的图像示意图;
图6D是本申请实施例提供的一种判断用户的姿态是否为体态评估姿态的示意图;
图7A~图7D是本申请实施例提供的一些电子设备100提示用户调整姿态的场景示意图;
图8A~图8I是本申请实施例提供的一些对用户进行体态评估的场景示意图;
图9是本申请实施例提供的另一种判断用户的姿态是否为体态评估姿态的示意图;
图10A~图10C是本申请实施例提供的另一些电子设备100提示用户调整姿态的场景示意图;
图11A和图11B是本申请实施例提供的另一些对用户进行体态评估的场景示意图;
图12A~图12E是本申请实施例提供的一些体态评估结果的用户界面示意图。
具体实施方式
下面结合本申请实施例中的附图,对本申请实施例中的技术方案进行描述。其中,在本申请实施例的描述中,以下实施例中所使用的术语只是为了描述特定实施例的目的,而并非旨在作为对本申请的限制。如在本申请的说明书和所附权利要求书中所使用的那样,单数表达形式“一种”、“所述”、“上述”、“该”和“这一”旨在也包括例如“一个或多个”这种表达形式,除非其上下文中明确地有相反指示。还应当理解,在本申请以下各实施例中,“至少一个”、“一个或多个”是指一个或两个以上(包含两个)。术语“和/或”,用于描述关联对象的关联关系,表示可以存在三种关系;例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B的情况,其中A、B可以是单数或者复数。字符“/”一般表示前后关联对象是一种“或”的关系。
在本说明书中描述的参考“一个实施例”或“一些实施例”等意味着在本申请的一个或多个实施例中包括结合该实施例描述的特定特征、结构或特点。由此,在本说明书中的不同之处出现的语句“在一个实施例中”、“在一些实施例中”、“在其他一些实施例中”、“在另外一些实施例中”等不是必然都参考相同的实施例,而是意味着“一个或多个但不是所有的实施例”,除非是以其他方式另外特别强调。术语“包括”、“包含”、“具有”及它们的变形都意味着“包括但不限于”,除非是以其他方式另外特别强调。术语“连接”包括直接连接和间接连接,除非另外说明。“第一”、“第二”仅用于描述目的,而不能理解为指示或暗示相对重要性或者隐含指明所指示的技术特征的数量。
在本申请实施例中,“示例性地”或者“例如”等词用于表示作例子、例证或说明。本申请实施例中被描述为“示例性地”或者“例如”的任何实施例或设计方案不应被解释为比其它实施例或设计方案更优选或更具优势。确切而言,使用“示例性地”或者“例如”等词旨在以具体方式呈现相关概念。
由于工作、学习等需要,越来越多的人群出现长期久坐、坐姿不恰当、缺乏运动等情况。上述情况往往容易导致高低肩、X和/或O型腿、驼背、脊柱侧弯等体态问题。本申请实施例提供基于图像的姿态判断方法,来帮助用户了解自己的体态问题。这样,用户可以及时矫正自己的体态问题,从而保持良好的身体健康。
为了便于理解,这里先对本申请涉及的一些体态问题进行介绍。
1、高低肩
高低肩是指人体的左肩与右肩高度不同的现象。体态正常的用户的双肩高度通常是相同的。当在自然站立的状态下,若用户的左肩高于右肩或者右肩高于左肩,则用户具有高低肩的体态问题。不正确的背包姿势或者长期背单肩包可能会导致高低肩。
2、X型腿、O型腿、XO型腿
请参考图1,图1为正常腿型、X型腿、O型腿和XO型腿的示意图。
如图1所示,在双腿自然直立且并拢的情况下,双腿的线条形状可以反映出用户的腿型。若用户为正常腿型,则在双腿自然直立且并拢的情况下,用户的双膝可以贴近,双足内脚踝可以贴近,双腿的线条比较直立且相互靠拢,左小腿和右小腿内侧的部分区域可以贴近。
若用户为X型腿,则在双腿自然直立且并拢的情况下,用户的双膝可以贴近,双足内脚踝分离而不能贴近,左腿与右腿的线条呈现“X”形。X型腿也即“膝外翻”。通常的,双足内脚踝间隔距离1.5厘米以上可以认为是X型腿。上述双膝贴近可以包括双膝内侧相碰、双膝内侧之间的距离特别小(例如小于预设的距离阈值)。
若用户为O型腿,则在双腿自然直立且并拢的情况下,用户的双足内脚踝可以贴近,双膝分离而不能贴近,左腿与右腿的线条呈现“O”形。O型腿也即“膝内翻”。
若用户为XO型腿,则在双腿自然直立且并拢的情况下,用户的双膝可以贴近,双足内脚踝可以贴近,但左小腿和右小腿分离而不能贴近,小腿肚呈外翻的形态。
不恰当的坐姿(如跷二郎腿等)、不恰当的走路姿势(如走路外八、走路内八等)、长期穿高跟鞋等可能会导致腿部畸形,出现上述X型腿或O型腿或XO型腿。
3、脊柱侧弯
脊柱侧弯是指人的脊椎有侧向(如向左或向右)的弯曲,弯曲的形状包括S形和C形。从用户的正面或背面看,体态正常的用户的脊椎通常是直立的。
4、驼背
驼背是指由胸椎后突所引起的形态改变,是一种常见的脊柱变形。驼背表现为背部凸起,过度向背部所在的方向弯曲。
不限于上述高低肩、X型腿、O型腿、XO型腿、脊柱侧弯和驼背,体态问题还可以包括颈前伸、骨盆前倾、长短腿、膝过伸等等。
在一种可能的实现方式中,电子设备可以通过识别人体关键点来判断用户的姿态。人体关键点也可称为人体骨骼节点、人体骨骼点等。图2示例性示出了本申请实施例提供的人体关键点的位置分布图。如图2所示,人体关键点可以包括:头部点、颈部点、左肩点、右肩点、右肘点、左肘点、右手腕点、左手腕点、腹部点、右髋点、左髋点、左右髋中间点、右膝点、左膝点、右脚踝点、左脚踝点。不限于上述人体关键点,本申请实施例中还可以包括其他的人体关键点。
上述人体关键点可以是二维(2 dimensions,2D)关键点,也可以是三维(3 dimensions,3D)关键点。上述2D关键点可以表示分布在2D平面上的关键点。上述2D平面可以是用于进行人体关键点识别的图像所在的图像平面。上述3D关键点可以表示分布在3D空间中的关键点。相比于2D关键点,3D关键点还包括深度信息。上述深度信息可以反映出人体关键点相对于采集图像的摄像头的远近程度。
本申请实施例对电子设备识别人体关键点的实现方法不作限定。例如,电子设备可以利用人体关键点检测模型来识别人体关键点。上述人体关键点检测模型可以是基于神经网络的模型。将一张包含人像的图像输入人体关键点检测模型,人体关键点检测模型可以输出图像上人像的人体关键点的位置信息。
人体关键点可以反映人体的姿态。当用户具有上述实施例中的一种或多种体态问题,用户的姿态通常与正常体态的人体的姿态不同。上述正常体态的人体可以指没有高低肩、X型腿、O型腿、XO型腿、脊柱侧弯、驼背等体态问题的人体。
那么,电子设备可以通过采集用户在自然站立且双腿并拢的状态下的图像,来识别用户的人体关键点。根据上述人体关键点,电子设备可以判断用户的姿态是否为具有体态问题的姿态。上述用于识别人体关键点的图像可以包括用户的正面图和侧面图。上述正面图可以是用户面对摄像头时,摄像头采集的图像。上述侧面图可以是用户的身体平面所在的方向与摄像头光轴的方向之间的夹角小于预设角度(如5°等)时,摄像头采集的图像。本申请实施例对上述预设角度的取值不作限定。
例如,电子设备可以根据用户左肩点和右肩点来判断用户是否有高低肩。其中,若左肩点和右肩点之间的高度差大于预设的高低肩阈值(如0.5厘米等),电子设备可以确定用户有高低肩。本申请实施例对上述高低肩阈值不作限定。
电子设备可以根据左脚踝点、右脚踝点之间的距离来判断用户是否有X型腿。其中,若左脚踝点和右脚踝点之间的距离大于预设的X型腿阈值(如1.5厘米等),电子设备可以确定该用户有X型腿。本申请实施例对上述X型腿阈值不作限定。
电子设备可以根据左膝点、右膝点之间的距离来判断用户是否有O型腿。其中,若左膝点和右膝点之间的距离大于预设的O型腿阈值(如2厘米等),电子设备可以确定用户有O型腿。本申请实施例对上述O型腿阈值不作限定。
若用户的左脚踝点和右脚踝点之间的距离小于或等于预设X型腿阈值,且用户的左膝点和右膝点之间的距离小于或等于预设O型腿阈值,电子设备可以根据左膝点与左脚踝点连线得到的线段、右膝点与右脚踝点连线得到的线段之间的距离来判断用户是否有XO型腿。若左膝点与左脚踝点连线得到的线段、右膝点与右脚踝点连线得到的线段之间的最小距离大于预设的XO型腿阈值(如2厘米等),电子设备可以确定用户有XO型腿。本申请实施例对上述XO型腿阈值不作限定。
电子设备可以根据颈部点与腹部点的连线、腹部点与左右髋中间点的连线来判断用户是否有脊柱侧弯。若颈部点与腹部点的连线、腹部点与左右髋中间点的连线之间的夹角小于预设的脊柱侧弯阈值(如175°),电子设备可以确定用户有脊柱侧弯。本申请实施例对上述脊柱侧弯阈值不作限定。
电子设备可以根据用户在侧面图上的人体关键点来判断用户是否驼背。具体的,电子设备可以根据用户的侧面图识别用户的颈部点和背部点。电子设备可以判断颈部点和背部点之间的连线与垂直方向的直线之间的夹角是否大于预设的驼背阈值(如10°等)。若颈部点和背部点之间的连线与垂直方向的直线之间的夹角大于驼背阈值,电子设备可以确定用户驼背。本申请实施例对上述驼背阈值不作限定。
但是人体的各个部位并不是一个精确的点。利用人体关键点表示人体的部位,人体关键点的位置可能会在一定范围的区域内浮动。这就导致利用人体关键点对用户的姿态的判断结果存在较大的误差。那么,根据上述关键点确定出的用户的姿态来判断用户是否存在体态问题(即对用户进行体态评估)会有较大的误差。
示例性的,如图3A所示,电子设备利用人体关键点检测模型对包含人体1的图像进行检测,得到人体1的左肩点和右肩点。其中,左肩点可以表示人体1的左侧肩膀的部位。右肩点可以表示人体1的右侧肩膀的部位。图像上人体1的区域1中任意一个点均可以用于表示左肩点,区域2中任意一个点均可以用于表示右肩点。即电子设备识别出来的左肩点的位置可能在区域1中浮动,右肩点的位置可能在区域2中浮动。
例如,电子设备识别得到的人体1的左肩点与右肩点的位置如图3A所示。在图3A中,左肩点与右肩点的高度相同。则电子设备可以确定出人体1没有高低肩。
再例如,电子设备识别得到人体1的左肩点与右肩点的位置如图3B所示。在图3B中,左肩点与右肩点的高度不同。其中,右肩点高于左肩点,右肩点与左肩点的连线与水平线之间的夹角为θ1。则电子设备可以确定出人体1有高低肩,且右肩高于左肩。
由上述实施例可知,电子设备利用人体关键点对同一个用户进行体态评估,有可能会得到不同的结果。有可能用户有体态问题,但由于人体关键点的位置浮动,电子设备可能根据人体关键点确定出用户没有体态问题。有可能用户没有体态问题,但由于人体关键点的位置浮动,电子设备可能根据人体关键点确定出用户有体态问题。上述姿态判断方法带来的误差难以帮助用户准确了解自己的体态并及时矫正体态问题。
本申请提供一种基于图像的姿态判断方法。在该方法中,电子设备可以采集用户的图像,并根据图像确定用户的人体关键点和人体轮廓。电子设备可以结合人体关键点和人体轮廓来判断用户的姿态是否为预设的姿态。相比于仅通过人体关键点进行姿态判断,上述方法可以提高姿态判断的准确性。
上述预设的姿态可以为指定的体态评估姿态。电子设备可以根据人体关键点和人体轮廓上的边缘点确定用户的姿态,并根据用户的姿态与上述体态评估姿态的差别来引导用户完成上述体态评估姿态对应的动作。电子设备可以对包含用户保持上述体态评估姿态的图像进行图像识别,得到用户保持体态评估姿态的人体关键点和人体轮廓。电子设备可以利用用户保持体态评估姿态的人体关键点和人体轮廓,判断用户的姿态是否为具有体态问题的姿态,实现对用户的体态评估。
虽然人体关键点的位置是浮动的,但人体轮廓上的各边缘点的位置是唯一确定的。用户的人体轮廓可以反映出用户的姿态。若用户存在体态问题,用户的姿态通常与正常体态的人体的姿态不同。那么,上述结合人体关键点和人体轮廓进行姿态判断的方法可以提高体态评估结果的准确性。从而帮助用户更好地了解自己的体态并及时矫正体态问题。
可以理解的,上述基于图像确定出用户的人体关键点和人体轮廓来判断用户的姿态是否为具有体态问题的姿态(如具有高低肩的姿态、具有X型腿的姿态等),相当于对用户进行体态评估。即本申请提供的基于图像的姿态判断方法包括体态评估方法。本申请后续实施例中将具体说明体态评估方法的实现过程。
下面介绍本申请实施例涉及的电子设备。
请参照图4,图4示出了电子设备100的结构示意图。
电子设备100可以包括处理器110,外部存储器接口120,内部存储器121,通用串行总线(universal serial bus,USB)接口130,充电管理模块140,电源管理模块141,电池142,天线1,天线2,移动通信模块150,无线通信模块160,音频模块170,扬声器170A,受话器170B,麦克风170C,耳机接口170D,传感器模块180,按键190,马达191,指示器192,摄像头193,显示屏194,以及用户标识模块(subscriber identification module,SIM)卡接口195等。其中传感器模块180可以包括压力传感器180A,陀螺仪传感器180B,气压传感器180C,磁传感器180D,加速度传感器180E,距离传感器180F,接近光传感器180G,指纹传感器180H,温度传感器180J,触摸传感器180K,环境光传感器180L,骨传导传感器180M等。
可以理解的是,本申请实施例示意的结构并不构成对电子设备100的具体限定。在本申请另一些实施例中,电子设备100可以包括比图示更多或更少的部件,或者组合某些部件,或者拆分某些部件,或者不同的部件布置。图示的部件可以以硬件,软件或软件和硬件的组合实现。
处理器110可以包括一个或多个处理单元,例如:处理器110可以包括应用处理器(application processor,AP),调制解调处理器,图形处理器(graphics processing unit,GPU),图像信号处理器(image signal processor,ISP),控制器,存储器,视频编解码器,数字信号处理器(digital signal processor,DSP),基带处理器,和/或神经网络处理器(neural-network processing unit,NPU)等。其中,不同的处理单元可以是独立的器件,也可以集成在一个或多个处理器中。
其中,控制器可以是电子设备100的神经中枢和指挥中心。控制器可以根据指令操作码和时序信号,产生操作控制信号,完成取指令和执行指令的控制。
USB接口130是符合USB标准规范的接口,具体可以是Mini USB接口,Micro USB接口,USB Type C接口等。USB接口130可以用于连接充电器为电子设备100充电,也可以用于电子设备100与外围设备之间传输数据。
充电管理模块140用于从充电器接收充电输入。充电管理模块140为电池142充电的同时,还可以通过电源管理模块141为电子设备供电。
电源管理模块141用于连接电池142,充电管理模块140与处理器110。电源管理模块141接收电池142和/或充电管理模块140的输入,为处理器110,内部存储器121,外部存储器,显示屏194,摄像头193,和无线通信模块160等供电。
电子设备100的无线通信功能可以通过天线1,天线2,移动通信模块150,无线通信模块160,调制解调处理器以及基带处理器等实现。
天线1和天线2用于发射和接收电磁波信号。电子设备100中的每个天线可用于覆盖单个或多个通信频带。
移动通信模块150可以提供应用在电子设备100上的包括2G/3G/4G/5G等无线通信的解决方案。
无线通信模块160可以提供应用在电子设备100上的包括无线局域网(wireless local area networks,WLAN)(如无线保真(wireless fidelity,Wi-Fi)网络),蓝牙(bluetooth,BT),全球导航卫星系统(global navigation satellite system,GNSS),调频(frequency modulation,FM),近距离无线通信技术(near field communication,NFC),红外技术(infrared,IR)等无线通信的解决方案。无线通信模块160可以是集成至少一个通信处理模块的一个或多个器件。无线通信模块160经由天线2接收电磁波,将电磁波信号调频以及滤波处理,将处理后的信号发送到处理器110。无线通信模块160还可以从处理器110接收待发送的信号,对其进行调频,放大,经天线2转为电磁波辐射出去。
电子设备100通过GPU,显示屏194,以及应用处理器等实现显示功能。GPU为图像处理的微处理器,连接显示屏194和应用处理器。GPU用于执行数学和几何计算,用于图形渲染。处理器110可包括一个或多个GPU,其执行程序指令以生成或改变显示信息。
显示屏194用于显示图像,视频等。显示屏194包括显示面板。显示面板可以采用液晶显示屏(liquid crystal display,LCD),有机发光二极管(organic light-emitting diode,OLED),有源矩阵有机发光二极体或主动矩阵有机发光二极体(active-matrix organic light emitting diode,AMOLED),柔性发光二极管(flex light-emitting diode,FLED),Miniled,MicroLed,Micro-oLed,量子点发光二极管(quantum dot light emitting diodes,QLED)等。在一些实施例中,电子设备100可以包括1个或N个显示屏194,N为大于1的正整数。
电子设备100可以通过ISP,摄像头193,视频编解码器,GPU,显示屏194以及应用处理器等实现拍摄功能。
ISP用于处理摄像头193反馈的数据。例如,拍照时,打开快门,光线通过镜头被传递到摄像头感光元件上,光信号转换为电信号,摄像头感光元件将所述电信号传递给ISP处理,转化为肉眼可见的图像。ISP还可以对图像的噪点,亮度,肤色进行算法优化。ISP还可以对拍摄场景的曝光,色温等参数优化。在一些实施例中,ISP可以设置在摄像头193中。
摄像头193用于捕获静态图像或视频。物体通过镜头生成光学图像投射到感光元件。感光元件可以是电荷耦合器件(charge coupled device,CCD)或互补金属氧化物半导体(complementary metal-oxide-semiconductor,CMOS)光电晶体管。感光元件把光信号转换成电信号,之后将电信号传递给ISP转换成数字图像信号。ISP将数字图像信号输出到DSP加工处理。DSP将数字图像信号转换成标准的RGB,YUV等格式的图像信号。在一些实施例中,电子设备100可以包括1个或N个摄像头193,N为大于1的正整数。
数字信号处理器用于处理数字信号,除了可以处理数字图像信号,还可以处理其他数字信号。例如,当电子设备100在频点选择时,数字信号处理器用于对频点能量进行傅里叶变换等。
视频编解码器用于对数字视频压缩或解压缩。电子设备100可以支持一种或多种视频编解码器。
NPU为神经网络(neural-network,NN)计算处理器,通过借鉴生物神经网络结构,例如借鉴人脑神经元之间传递模式,对输入信息快速处理,还可以不断的自学习。通过NPU可以实现电子设备100的智能认知等应用,例如:图像识别,人体关键点检测,人像分割,语音识别,文本理解等。
外部存储器接口120可以用于连接外部存储卡,例如Micro SD卡,实现扩展电子设备100的存储能力。
内部存储器121可以用于存储计算机可执行程序代码,所述可执行程序代码包括指令。处理器110通过运行存储在内部存储器121的指令,从而执行电子设备100的各种功能应用以及数据处理。内部存储器121可以包括存储程序区和存储数据区。其中,存储程序区可存储操作系统,至少一个功能所需的应用程序(比如声音播放功能,图像播放功能等)等。存储数据区可存储电子设备100使用过程中所创建的数据(比如音频数据,电话本等)等。此外,内部存储器121可以包括高速随机存取存储器,还可以包括非易失性存储器,例如至少一个磁盘存储器件,闪存器件,通用闪存存储器(universal flash storage,UFS)等。
电子设备100可以通过音频模块170,扬声器170A,受话器170B,麦克风170C,耳机接口170D,以及应用处理器等实现音频功能。例如音乐播放,录音等。
音频模块170用于将数字音频信息转换成模拟音频信号输出,也用于将模拟音频输入转换为数字音频信号。音频模块170还可以用于对音频信号编码和解码。在一些实施例中,音频模块170可以设置于处理器110中,或将音频模块170的部分功能模块设置于处理器110中。
扬声器170A,也称“喇叭”,用于将音频电信号转换为声音信号。
受话器170B,也称“听筒”,用于将音频电信号转换成声音信号。
麦克风170C,也称“话筒”,“传声器”,用于将声音信号转换为电信号。
耳机接口170D用于连接有线耳机。
压力传感器180A用于感受压力信号,可以将压力信号转换成电信号。陀螺仪传感器180B可以用于确定电子设备100的运动姿态。气压传感器180C用于测量气压。磁传感器180D包括霍尔传感器。加速度传感器180E可检测电子设备100在各个方向上(一般为三轴)加速度的大小。距离传感器180F,用于测量距离。电子设备100可以通过红外或激光测量距离。在一些实施例中,拍摄场景,电子设备100可以利用距离传感器180F测距以实现快速对焦。
接近光传感器180G可以包括例如发光二极管(LED)和光检测器,例如光电二极管。发光二极管可以是红外发光二极管。电子设备100通过发光二极管向外发射红外光。电子设备100使用光电二极管检测来自附近物体的红外反射光。当检测到充分的反射光时,可以确定电子设备100附近有物体。当检测到不充分的反射光时,电子设备100可以确定电子设备100附近没有物体。
环境光传感器180L用于感知环境光亮度。指纹传感器180H用于采集指纹。温度传感器180J用于检测温度。
触摸传感器180K,也称“触控面板”。触摸传感器180K可以设置于显示屏194,由触摸传感器180K与显示屏194组成触摸屏,也称“触控屏”。触摸传感器180K用于检测作用于其上或附近的触摸操作。触摸传感器可以将检测到的触摸操作传递给应用处理器,以确定触摸事件类型。可以通过显示屏194提供与触摸操作相关的视觉输出。在另一些实施例中,触摸传感器180K也可以设置于电子设备100的表面,与显示屏194所处的位置不同。
骨传导传感器180M可以获取振动信号。
按键190包括开机键,音量键等。按键190可以是机械按键。也可以是触摸式按键。电子设备100可以接收按键输入,产生与电子设备100的用户设置以及功能控制有关的键信号输入。
马达191可以产生振动提示。
指示器192可以是指示灯,可以用于指示充电状态,电量变化,也可以用于指示消息,未接来电,通知等。
SIM卡接口195用于连接SIM卡。SIM卡可以通过插入SIM卡接口195,或从SIM卡接口195拔出,实现和电子设备100的接触和分离。电子设备100可以支持1个或N个SIM卡接口,N为大于1的正整数。
上述电子设备100可以是手机、电视、平板电脑、笔记本电脑、桌面型计算机、膝上型计算机、手持计算机、个人数字助理(personal digital assistant,PDA)、人工智能(artificial intelligence,AI)设备等电子设备。本申请实施例对该电子设备的具体类型不作特殊限制。
电子设备100的软件系统可以采用分层架构,事件驱动架构,微核架构,微服务架构,或云架构。本申请实施例以分层架构的Android系统为例,示例性说明电子设备100的软件结构。
图5是本申请实施例的电子设备100的软件结构框图。
分层架构将软件分成若干个层,每一层都有清晰的角色和分工。层与层之间通过软件接口通信。在一些实施例中,将Android系统分为四层,从上至下分别为应用程序层,应用程序框架层,安卓运行时(Android runtime)和系统库,以及内核层。
应用程序层可以包括一系列应用程序包。
如图5所示,应用程序包可以包括相机,图库,日历,通话,地图,导航,WLAN,蓝牙,音乐,视频,短信息等应用程序。
应用程序框架层为应用程序层的应用程序提供应用编程接口(application programming interface,API)和编程框架。应用程序框架层包括一些预先定义的函数。
如图5所示,应用程序框架层可以包括窗口管理器,内容提供器,视图系统,电话管理器,资源管理器,通知管理器等。
窗口管理器用于管理窗口程序。窗口管理器可以获取显示屏大小,判断是否有状态栏,锁定屏幕,截取屏幕等。
内容提供器用来存放和获取数据,并使这些数据可以被应用程序访问。所述数据可以包括视频,图像,音频,拨打和接听的电话,浏览历史和书签,电话簿等。
视图系统包括可视控件,例如显示文字的控件,显示图片的控件等。视图系统可用于构建应用程序。显示界面可以由一个或多个视图组成的。例如,包括短信通知图标的显示界面,可以包括显示文字的视图以及显示图片的视图。
电话管理器用于提供电子设备100的通信功能。例如通话状态的管理(包括接通,挂断等)。
资源管理器为应用程序提供各种资源,比如本地化字符串,图标,图片,布局文件,视频文件等等。
通知管理器使应用程序可以在状态栏中显示通知信息,可以用于传达告知类型的消息,可以短暂停留后自动消失,无需用户交互。比如通知管理器被用于告知下载完成,消息提醒等。通知管理器还可以是以图表或者滚动条文本形式出现在系统顶部状态栏的通知,例如后台运行的应用程序的通知,还可以是以对话窗口形式出现在屏幕上的通知。例如在状态栏提示文本信息,发出提示音,电子设备振动,指示灯闪烁等。
Android Runtime包括核心库和虚拟机。Android runtime负责安卓系统的调度和管理。
核心库包含两部分:一部分是java语言需要调用的功能函数,另一部分是安卓的核心库。
应用程序层和应用程序框架层运行在虚拟机中。虚拟机将应用程序层和应用程序框架层的java文件执行为二进制文件。虚拟机用于执行对象生命周期的管理,堆栈管理,线程管理,安全和异常的管理,以及垃圾回收等功能。
系统库可以包括多个功能模块。例如:表面管理器(surface manager),媒体库(Media Libraries),三维图形处理库(例如:OpenGL ES),2D图形引擎(例如:SGL)等。
表面管理器用于对显示子系统进行管理,并且为多个应用程序提供了2D和3D图层的融合。
媒体库支持多种常用的音频,视频格式回放和录制,以及静态图像文件等。媒体库可以支持多种音视频编码格式,例如:MPEG4,H.264,MP3,AAC,AMR,JPG,PNG等。
三维图形处理库用于实现三维图形绘图,图像渲染,合成,和图层处理等。
2D图形引擎是2D绘图的绘图引擎。
内核层是硬件和软件之间的层。内核层至少包含显示驱动,摄像头驱动,音频驱动,传感器驱动。
电子设备通过采集图像对用户进行体态评估,通常需要用户保持自然站立且双腿并拢的姿态。用户在处于上述姿态的情况下可以更好地反映出自身的体态问题。例如,若用户故意倾斜肩膀、故意弯曲双腿,则电子设备难以通过采集的图像确定出用户的真实体态。
那么,在进行体态评估的过程中,电子设备可以先引导用户完成预设的体态评估姿态对应的动作,并采集得到用户保持体态评估姿态时的体态评估图像。电子设备可以利用上述体态评估图像对用户进行体态评估。
请参照图6A,图6A示例性示出了本申请实施例提供的一种电子设备采集体态评估图像的方法流程图。
该方法可包括步骤S610~S640。其中:
S610、采集图像,识别图像中用户的人体关键点和人体轮廓。
在一些实施例中,电子设备100可以接收到进行体态评估的用户操作。当接收到上述用户操作,电子设备100可以采集图像,并识别图像中用户的人体关键点和人体轮廓。
(1)确定图像中进行体态评估的用户。
示例性的,如图6B所示,当采集得到一张图像,电子设备100可以识别这一张图像中进行体态评估的用户。其中,电子设备100可以利用人像检测模型来识别图像中包含的人像,并利用人像选择框确定人像在图像中的区域。人像选择框可以是图像中包含一个人像且面积最小的一个区域。人像选择框的形状可以包括矩形、圆形等形状。本申请实施例对人像选择框的形状不作限定。上述人像检测模型可以是基于神经网络的模型。将一张图像输入人像检测模型,人像检测模型可以输出包含上述人像选择框的图像。本申请实施例对上述人像检测模型的训练方法不作限定。
其中,若一张图像中包含多个人像,人像检测模型可以确定这多个人像在图像中的区域,并分别用不同的人像选择框标识不同的人像。在一种可能的实现方式中,电子设备100可以选择位于图像中央的人像选择框标识的人像进行体态评估。在另一种可能的实现方式中,电子设备100可以询问用户对图像中的哪一个人像进行体态评估。响应于用户选择图像中一个人像的操作,电子设备100可以对用户在图像中选择的人像进行体态评估。
不限于上述人像检测模型,电子设备100还可以通过其它方法确定图像中进行体态评估的用户所在的区域。上述确定图像中进行体态评估的用户所在的区域可以将图像中的干扰物(如背景杂物、路人等)排除,减少上述干扰物对体态评估的影响,提高体态评估结果的准确性。
(2)人体关键点识别,确定用户的人体关键点。
当确定出图像中进行体态评估的用户所在的区域,电子设备100可以进行人体关键点识别,确定用户的人体关键点。其中,电子设备100可以仅对图像中进行体态评估的用户所在的区域(也即人像选择框在图像中确定的区域)进行人体关键点识别。这可以减少人体关键点识别过程中运算的复杂程度,提高计算的效率。在一种可能的实现方式中,电子设备100可以利用人体关键点检测模型来识别人体关键点。人体关键点检测模型可以参考前述实施例的介绍。本申请实施例对电子设备100识别人体关键点的方法不作限定。
经过上述人体关键点识别,电子设备100可以得到用户的各个人体关键点在图像中的位置。人体关键点在图像中的位置具体可以为人体关键点在图像的像素坐标系中的位置。这里以图像的形状为矩形来说明图像的像素坐标系。图像的像素坐标系可以是一个二维直角坐标系。图像的像素坐标系的原点可以位于图像四个顶点中的任意一个顶点,x轴和y轴分别与像面的两边平行。图像的像素坐标系中坐标轴的单位是像素。例如,像素坐标系中的点(1,1)可以表示图像中第一行第一列的像素。
在一种可能的实现方式中,电子设备100可以利用人体关键点检测模型对采集的一张图像进行人体关键点检测,得到一组坐标。这一组坐标可以是用户的所有人体关键点(图2所示的人体关键点)按照指定顺序排列的坐标。这一组坐标可以是人体关键点在像素坐标系中的坐标。例如,这一组坐标中的第一个坐标为头部点在像素坐标系中的坐标,第二个坐标为颈部点在像素坐标系中的坐标。本申请实施例对这一组坐标中坐标所属的人体关键点的排列顺序不作限定。
(3)人像分割,识别用户的人体轮廓。
当确定出图像中进行体态评估的用户所在的区域,电子设备100还可以进行人像分割,识别用户的人体轮廓。在一种可能的实现方式中,电子设备100可以利用人像分割模型来进行人像分割。上述人像分割模型可以是基于神经网络的模型。将一张包含人像的图像输入人像分割模型,人像分割模型可以输出一张二值化的图像。例如,在上述二值化的图像中,人像所在区域的像素值均为255,非人像所在区域的像素值均为0。或者,人像所在区域的像素值均为0,非人像所在区域的像素值均为255。本申请实施例对人像分割模型的训练方法不作限定。
其中,电子设备100可以仅对图像中进行体态评估的用户所在的区域(也即人像选择框在图像中确定的区域)进行人像分割。具体的,电子设备100可以利用人像分割模型对上述人像选择框在图像中确定的区域进行人像分割,得到该区域经过二值化的像素值。电子设备100可以将图像中人像选择框之外的区域的像素值确定为非人体所在区域的像素值(如0)。这可以减少人像分割过程中运算的复杂程度,提高计算的效率。
经过上述人像分割,电子设备100可以将采集得到的原始图像变化为二值化的图像。在该二值化的图像中,人像所在区域的所有像素值均相同,非人像所在区域的所有像素值均相同。但人像所在区域的像素值与非人像所在区域的像素值不同。那么,二值化的图像中像素值出现变化的位置即为人体的边缘点。图像中人体所有的边缘点可以构成人体轮廓。
示例性的,图6C示出了二值化的图像部分区域的像素值。图6C所示的坐标系可以表示图像的像素坐标系。其中,像素值为0的区域可以表示非人像所在的区域。像素值为255的区域可以表示人像所在的区域。从坐标(1,1)到坐标(2,1),像素值从0变化为255。电子设备100可以确定坐标(1,1)和坐标(2,1)之间的位置为非人像所在区域和人像所在区域的分界。同样的,电子设备100可以确定坐标(2,2)和坐标(3,2)之间的位置、坐标(3,3)和坐标(4,3)之间的位置均为非人像所在区域和人像所在区域的分界。其中,电子设备100可以将上述分界处属于人像所在区域的像素确定为人体的边缘点。例如,上述(2,1)、(3,2)、(4,3)均为边缘点。图像中人体所有的边缘点依次连接可以构成人体轮廓。可选的,电子设备100也可以将上述分界处属于非人像所在区域的像素确定为人体的边缘点。例如,上述(1,1)、(2,2)、(3,3)均为边缘点。
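为便于理解,下面给出一个示意性的Python代码草图,按照上述像素值跳变的思路从二值化图像中提取人体轮廓的边缘点。其中binary_mask、extract_edge_points等命名均为本示例引入的假设,数组下标从0开始,按行扫描可得到左右方向的边缘点(按列扫描同理);该草图仅用于说明原理,并非本申请的正式实现。

```python
import numpy as np

def extract_edge_points(binary_mask: np.ndarray):
    """从二值化图像中提取人体轮廓边缘点(示意性实现)。

    binary_mask:二维数组,人像所在区域像素值为255,其余区域为0。
    返回属于人像所在区域一侧的边缘点列表[(x, y), ...],
    其中x为列下标、y为行下标(从0开始)。
    """
    edge_points = []
    height, _ = binary_mask.shape
    for y in range(height):
        row = binary_mask[y].astype(np.int32)
        # 相邻像素值发生跳变(0到255或255到0)的位置即为两类区域的分界
        for x in np.nonzero(np.diff(row))[0]:
            # 取分界处属于人像所在区域(值为255)的那个像素作为边缘点
            edge_points.append((x + 1, y) if row[x + 1] == 255 else (x, y))
    return edge_points
```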
在一些实施例中,经过上述人像分割,电子设备100得到的可以是一个数据集合。这一个数据集合包含一张图像中各个位置的像素对应的像素值。这一个数据集合可用于生成上述二值化的图像。
在一些实施例中,上述人像检测模型、人体关键点检测模型、人像分割模型可以是一个模型。也就是说,将一张图像输入一个模型,这一个模型可以先检测图像中人像所在的区域,然后进行人体关键点检测和人像分割,输出人体关键点在图像中的位置以及二值化的图像(或与二值化的图像对应的数据集合)。
在一些实施例中,上述确定图像中进行体态评估的用户的步骤是可选的。即电子设备100也可以不利用人像检测模型来确定图像中进行体态评估的用户。其中,电子设备100得到一张图像,可以直接对这一张图像进行人体关键点识别以及人像分割。
S620、根据人体关键点和人体轮廓判断用户的姿态是否为预设的体态评估姿态。
上述体态评估姿态可以指便于电子设备100对用户进行体态评估的姿态。在一些实施例中,一些体态问题(如高低肩、X型腿、O型腿等)的检测需要用户的正面图。上述预设的体态评估姿态可以为:面朝摄像头,双臂与身体两侧保持一定距离,双腿自然直立且并拢。上述预设的体态评估姿态还可以为:面朝摄像头,双臂伸直,双臂与身体两侧保持一定距离,双腿自然直立且并拢。在另一些实施例中,一些体态问题(如驼背、颈前伸等)的检测需要用户的侧面图。上述预设的体态评估姿态可以为:身体侧面朝向摄像头,且自然站立。不限于上述列举的体态评估姿态,体态评估姿态还可以为其它。
需要进行说明的是,在上述体态评估姿态中,双腿自然直立可以表示:双膝的膝关节自然伸直,双腿中单条腿的小腿和大腿之间的夹角小于预设的夹角阈值。上述自然站立可以表示:双腿自然直立,躯干挺直,保持放松状态。
可以看出,用于进行体态评估的体态评估图像可以包含用户的正面图和侧面图。
本申请中先对采集用户的正面图,并利用正面图进行体态评估的实现方法进行说明,之后再对利用用户的侧面图进行体态评估的实现方法进行说明。
在一些实施例中,不限于是从上述二值化的图像中确定出人体轮廓,电子设备100也可以利用该二值化的图像和人体关键点来判断用户的姿态是否为预设的体态评估姿态。本申请实施例中具体以利用上述人体轮廓和人体关键点来判断用户的姿态为例进行说明。
在判断用户的姿态是否为采集正面图的体态评估姿态时,电子设备100判断的内容可包括:双臂是否伸直、双臂是否与身体两侧保持一定距离、双腿是否直立、双腿是否并拢。下面结合图6D所示的人体关键点和人体轮廓具体介绍判断用户的姿态是否为体态评估姿态的方法。其中:
(1)双臂是否伸直。
在一种可能的实现方式中,电子设备100可以根据用户手臂上的人体关键点来判断用户的双臂是否伸直。下面以判断右臂是否伸直为例进行说明。
如图6D所示,电子设备100可以通过判断右肩点A1、右肘点A2、右手腕点A3是否在一条直线上,来判断用户的右臂是否伸直。其中,电子设备100可以计算右肩点和右肘点之间的线段A1A2、右肘点和右手腕点之间的线段A2A3、右肩点和右手腕点之间的线段A1A3。在一些实施例中,判断A1、A2和A3是否在一条直线可以通过如下方法:若右肩点A1、右肘点A2、右手腕点A3在一条直线上,线段A1A2的长度(Length(A1A2))与线段A2A3的长度(Length(A2A3))之和为线段A1A3的长度(Length(A1A3))。即Length(A1A2)+Length(A2A3)=Length(A1A3)。那么,电子设备100可以判断Length(A1A2)+Length(A2A3)-Length(A1A3)的值是否小于预设的阈值W1。上述阈值W1的值可以例如是5个像素、6个像素等等。本申请实施例对阈值W1的取值不作限定。若Length(A1A2)+Length(A2A3)-Length(A1A3)小于阈值W1,则电子设备100可以确定右臂是伸直的。若Length(A1A2)+Length(A2A3)-Length(A1A3)大于或等于阈值W1,则电子设备100可以提示用户伸直手臂。
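上述共线判据可以用如下Python草图表示。其中各关键点以像素坐标(x, y)元组表示,函数名is_arm_straight和默认阈值取值均为本示例的假设,仅作示意。

```python
import math

def distance(p, q):
    """两个像素坐标点之间的欧氏距离。"""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def is_arm_straight(a1, a2, a3, w1=5.0):
    """判据:Length(A1A2) + Length(A2A3) - Length(A1A3) < W1。

    a1为肩点、a2为肘点、a3为腕点,均为像素坐标(x, y);w1为像素阈值。
    """
    return distance(a1, a2) + distance(a2, a3) - distance(a1, a3) < w1
```

同样的共线判据也可以复用于下文(3)中双腿是否直立的判断,只需将三个关键点换成髋点、膝点和脚踝点。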
可选的,在其他一些实施例中,电子设备100可以判断以右手肘点A2为夹角顶点,右肩点A1、右肘点A2、右手腕点A3构成的夹角,即∠A1A2A3(后续简称∠A2),与180°的差值是否小于预设的阈值W2。上述阈值W2可以例如是5°、6°等等。本申请实施例对阈值W2的取值不作限定。∠A2的大小可以为α1。若∠A2与180°的差值小于阈值W2,电子设备100可以确定右臂是伸直的。若∠A2与180°的差值大于或等于阈值W2,电子设备100可以提示用户伸直手臂。不限于上述列举的方法,电子设备100还可以通过其它方法来判断用户的右臂是否伸直。
可选的,电子设备100还可以根据人体轮廓上右臂的曲线来判断用户的右臂是否伸直。例如,电子设备100可以根据右肩点、右肘点和右手腕点在人体轮廓上确定出右臂的曲线。电子设备100可以计算上述右臂的曲线的曲率。若上述右臂的曲线的最大曲率小于预设的曲率阈值,电子设备100可以判断出用户的右臂伸直。否则,电子设备100可以判断出用户的右臂未伸直。
可以理解的,判断左臂是否伸直的方法可以与判断右臂是否伸直的方法相同。本申请对判断左臂是否伸直的方法不再展开说明。
(2)双臂是否与身体两侧保持一定距离。
在一种可能的实现方式中,电子设备100可以根据手臂所在的直线与躯干的正中线之间的夹角来判断双臂是否与身体两侧保持一定距离。下面以判断右臂是否与身体右侧保持一定距离为例进行说明。
如图6D所示,右臂所在的直线可以是颈部点A4与右手腕点A3所在的直线A3A4。躯干的正中线可以是颈部点A4与左右髋中间点A5所在的直线A4A5。直线A3A4与直线A4A5之间的夹角为α2。电子设备100可以判断α2是否在预设的阈值范围W3内。上述阈值范围W3可以例如是[30°,50°]、[35°,50°]等等。本申请实施例对阈值范围W3的取值不作限定。若α2在预设的阈值范围W3内,电子设备100可以确定右臂与身体右侧保持一定距离。若α2不在预设的阈值范围W3内,且α2小于阈值范围W3的最小值,电子设备100可以提示用户抬高右臂。若α2不在预设的阈值范围W3内,且α2大于阈值范围W3的最大值,电子设备100可以提示用户放低手臂。
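上述夹角α2可以由向量夹角公式计算得到。以下Python草图假设颈部点A4、右手腕点A3、左右髋中间点A5均为像素坐标,函数命名为本示例的假设,仅作示意。

```python
import math

def angle_at_vertex(vertex, p1, p2):
    """计算以vertex为顶点、由p1和p2构成的夹角,单位为度。"""
    v1 = (p1[0] - vertex[0], p1[1] - vertex[1])
    v2 = (p2[0] - vertex[0], p2[1] - vertex[1])
    cos_a = (v1[0] * v2[0] + v1[1] * v2[1]) / (math.hypot(*v1) * math.hypot(*v2))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_a))))

# 示例:alpha2为直线A4A3与躯干正中线A4A5之间的夹角
# alpha2 = angle_at_vertex(a4, a3, a5)
# 若alpha2落在阈值范围W3(如[30, 50])内,可认为右臂与身体右侧保持一定距离
```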
可选的,电子设备100可以仅判断α2是否大于预设的夹角(如30°、35°等)。若α2大于预设的夹角,电子设备100可以判断右臂与身体两侧保持一定距离。否则,电子设备100可以提示用户抬高右臂。
可选的,上述右臂所在的直线可以是右肩点A1与右手腕点A3所在的直线A1A3。电子设备100可以利用直线A1A3和直线A4A5来判断右臂是否与身体右侧保持一定距离。具体的方法可以参考前述实施例的介绍。
可以理解的,判断左臂是否与身体左侧保持一定距离的方法,可以与判断右臂是否与身体右侧保持一定距离的方法相同。本申请对左臂是否与身体左侧保持一定距离的方法不再展开说明。
在另一种可能的实现方式中,电子设备100可以根据用户手臂上的人体关键点以及人体轮廓上的边缘点,来判断用户的双臂是否与身体两侧保持一定距离。
如图6D所示,电子设备100可以计算右手腕点A3和左手腕点A12所在的直线A3A12与人体轮廓的交点的数量。上述交点也是人体轮廓上的边缘点。若用户的双臂与身体两侧保持一定距离,直线A3A12与人体轮廓的交点有6个。这6个交点包括:左臂轮廓上的两个边缘点、腰部轮廓上的两个边缘点、右臂轮廓上的两个边缘点。若用户的双臂紧贴身体两侧,则手臂与腰部之间不存在边缘点,直线A3A12与人体轮廓的交点少于6个。电子设备100可以判断上述交点的数量是否有6个。若上述交点的数量有6个,电子设备100可以确定用户的双臂与身体两侧保持一定距离。若上述交点的数量少于6个,电子设备100可以提示用户抬高手臂。
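上述交点数量的统计可以等价地转化为:在二值化图像中,统计该直线所经过的像素上0/255跳变的次数。以下Python草图假设直线A3A12近似水平(取两腕点行号的均值),函数命名为本示例的假设,仅作示意。

```python
import numpy as np

def count_contour_crossings(binary_mask, wrist_left, wrist_right):
    """统计两腕点所在水平线与人体轮廓的交点数量(示意性实现)。

    双臂与身体两侧保持一定距离时,交点应为6个:
    左臂轮廓2个、腰部轮廓2个、右臂轮廓2个。
    """
    y = int(round((wrist_left[1] + wrist_right[1]) / 2))  # 近似取两腕点所在的行
    row = binary_mask[y].astype(np.int32)
    # 每一次0/255像素值跳变对应直线与人体轮廓的一个交点
    return int(np.count_nonzero(np.diff(row)))
```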
在体态评估姿态中,双臂与身体两侧保持一定距离可以使得电子设备100能够在进行人像分割时分离出人体的四肢和躯干,得到腰部两侧的边缘点,从而可以更好地进行体态评估。
(3)双腿是否直立。
在一种可能的实现方式中,电子设备100可以根据用户腿部上的关键点来判断用户的双腿是否伸直。下面以判断右腿是否伸直为例进行说明。
如图6D所示,电子设备100可以计算右髋点A6和右膝点A7之间的线段A6A7、右膝点A7和右脚踝点A8之间的线段A7A8、右髋点A6和右脚踝点A8之间的线段A6A8。电子设备100可以判断线段A6A7的长度(Length(A6A7))与线段A7A8的长度(Length(A7A8))之和减线段A6A8的长度(Length(A6A8))得到的差值(即Length(A6A7)+Length(A7A8)-Length(A6A8)),是否小于预设的阈值W4。上述阈值W4的值可以例如是5个像素、6个像素等等。本申请实施例对阈值W4的取值不作限定。若Length(A6A7)+Length(A7A8)-Length(A6A8)小于阈值W4,电子设备100可以确定用户的双腿直立。若Length(A6A7)+Length(A7A8)-Length(A6A8)大于或等于阈值W4,电子设备100可以提示用户将双腿直立。
可以理解的,判断左腿是否直立的方法可以与判断右腿是否直立的方法相同。本申请对判断左腿是否直立的方法不再展开说明。
(4)双腿是否并拢。
在一种可能的实现方式中,电子设备100可以根据用户腿部的人体关键点以及人体轮廓上的边缘点,来判断用户的双腿是否靠拢。
如图6D所示,电子设备100可以确定右膝点A7和左膝点A9所在的直线A7A9与人体轮廓上双腿内侧的交点。例如,直线A7A9与人体轮廓上左腿内侧的交点为B1,与人体轮廓上右腿内侧的交点为B2。电子设备100可以确定右脚踝点A8和左脚踝点A10所在的直线A8A10与人体轮廓上双腿内侧的交点。例如,直线A8A10与人体轮廓上左腿内侧的交点为B3,与人体轮廓上右腿内侧的交点为B4。上述交点B1、B2、B3、B4均为人体轮廓上的边缘点。
由前述对不同腿型在双腿并拢状态下的情况可知,当双腿并拢,双膝和双足内脚踝中至少有一个部位可以贴近。那么,电子设备100可以判断交点B1、B2之间的线段B1B2,以及交点B3、B4之间的线段B3B4的长度。若线段B1B2和线段B3B4中至少一条线段的长度小于预设的阈值W5,电子设备100可以确定用户的双腿并拢。上述阈值W5可以例如是5个像素、6个像素等。本申请实施例对阈值W5的取值不作限定。若线段B1B2和线段B3B4的长度均大于或等于阈值W5,电子设备100可以提示用户将双腿并拢。
其中,双膝(或双足内脚踝)贴近可以包括双膝(或双足内脚踝)相碰。例如,当双膝相碰,电子设备100在进行人像分割得到的人体轮廓在双腿内侧膝关节的位置处可能没有边缘点。即上述直线A7A9与人体轮廓上的双腿内侧没有交点。同理,当双足内脚踝相碰,上述直线A8A10与人体轮廓上的双腿内侧没有交点。那么,若直线A7A9和直线A8A10中至少一条直线与人体轮廓上的双腿内侧没有交点,电子设备100可以确定用户的双腿并拢。
在一些实施例中,电子设备100还可以判断采集得到的图像是否是用户的正面图。例如,电子设备100可以判断用户是否面朝摄像头。在一种可能的实现方式中,电子设备100可以通过检测图像中的人脸区域来判断用户是否面朝摄像头。若图像中进行体态评估的用户的人脸包含左眼、右眼、嘴唇等人脸关键点,电子设备可以确定用户面朝摄像头。否则,电子设备100可以提示用户调整站立的朝向,引导用户面朝摄像头站立。例如,当人脸区域包含左眼而不包含右眼,电子设备100可以提示用户向左转。可选的,电子设备100还可以判断图像中进行体态评估的用户的人脸上左眼与右眼之间的距离是否小于眼间距阈值。若左眼与右眼之间的距离小于眼间距阈值,电子设备100可以确定用户未面朝摄像头站立。电子设备100可以提示用户调整站立的朝向,引导用户面朝摄像头站立。可选的,电子设备100还可以根据用户的左肩点和右肩点来判断用户是否面朝摄像头站立。例如,电子设备100可以判断左肩点和右肩点之间的距离是否小于肩宽阈值。若左肩点和右肩点之间的距离小于肩宽阈值,电子设备100可以确定用户未面朝摄像头站立,进而提示用户调整站立的朝向,引导用户面朝摄像头站立。本申请实施例对判断用户是否面朝摄像头的方法不作限定。
上述判断用户的双臂是否伸直是可选的,也就是说,在一些实施例中,在判断用户的姿态是否为预设的体态评估姿态时,电子设备100可以仅判断用户的双臂是否与身体两侧保持一定距离、双腿是否直立、双腿是否并拢。
可以理解的,为了判断用户的一些体态问题(如高低肩、X型腿、O型腿等),电子设备100可以判断用户的姿态是否满足面朝摄像头,双臂是否与身体两侧保持一定距离,双腿是否直立且并拢。本申请实施例提供的姿态判断方法可以不限于仅判断用户的姿态是否为预设的体态评估姿态。例如,电子设备100可以利用人体关键点和人体轮廓仅判断用户的双臂是否与身体两侧保持一定距离。再例如,电子设备100可以利用人体关键点和人体轮廓仅判断用户的双腿是否直立且并拢。上述结合人体关键点和人体轮廓的姿态判断方法可以提高姿态判断的准确性。
S630、若用户的姿态为预设的体态评估姿态,以包含用户保持体态评估姿态的图像为体态评估图像,体态评估图像用于进行体态评估。
在一种可能的实现方式中,当确定出用户的姿态为预设的体态评估姿态,电子设备100可以提示用户保持该体态评估姿态,并利用摄像头采集图像。上述提示用户保持该体态评估姿态之后采集的图像可以为体态评估图像。
在另一种可能的实现方式中,若电子设备100利用上述步骤S610采集的图像确定出用户的姿态为预设的体态评估姿态,电子设备100可以以上述确定出用户的姿态为体态评估姿态的图像为体态评估图像。这样,在确定出用户的姿态为预设的体态评估姿态之后,电子设备100可以不用再次采集图像。
S640、若用户的姿态不为预设的体态评估姿态,根据用户的姿态与体态评估姿态的差别,提示用户调整姿态。
图7A~图7D示例性示出了电子设备100提示用户调整姿态的场景示意图。
在一些实施例中,当接收到进行体态评估的用户操作,电子设备100可以显示图7A所示的用户界面710。用户界面710可包括显示区域711和显示区域712。其中:
显示区域711可用于显示体态评估姿态的示例图711A,以及动作提示711B。例如,动作提示711B的内容可以为:请伸直手臂,将手臂垂下与身体两侧保持一定距离,并将双腿并拢。这样,用户可以根据示例图711A和动作提示711B来调整自己的姿态,以达到预设的体态评估姿态。
显示区域712可用于显示电子设备100通过摄像头193采集的图像。该图像可包括进行体态评估的用户。
由图7A所示的显示区域712可以看出,用户的姿态与体态评估姿态不同。其中,用户的双臂未伸直,且双腿未并拢。电子设备100可以根据上述步骤S620中的方法确定用户的姿态与体态评估姿态的差别,在显示区域711中显示动作提示711B。其中,电子设备100还可以语音播报动作提示711B中的内容,以引导用户将姿态调整为体态评估姿态。
在得到体态评估图像之前,电子设备100可以持续采集用户的图像,并判断用户的姿态是否为体态评估姿态。其中,电子设备100可以每隔预设时间(如1秒、2秒等)对用户进行图像采集。若利用第一次采集得到的图像判断出用户的姿态不为体态评估姿态,电子设备可以再次(即第二次)采集用户的图像,并判断用户的姿态是否为体态评估姿态。也就是说,电子设备100可以循环执行步骤S610、步骤S620、步骤S640,直至确定出用户的姿态为体态评估姿态。
示例性的,用户根据图7A所示的动作提示711B调整姿态。电子设备100可以显示图7B所示的用户界面710。由图7B所示的显示区域712可以看出,用户将姿态从图7A所示的弯曲双臂调整为伸直双臂,且将双臂贴近身体两侧。电子设备100根据上述步骤S620中的方法确定出用户的双臂未与身体两侧保持一定距离,且双腿未并拢。即用户的姿态仍不为体态评估姿态。电子设备100可以显示图7B所示的动作提示711C。该动作提示711C的内容可以为:请将手臂抬起与身体两侧保持一定距离,并将双腿并拢。电子设备100还可以语音播报动作提示711C的内容。
示例性的,用户根据动作提示711C调整姿态。电子设备100可以显示图7C所示的用户界面710。由图7C所示的显示区域712可以看出,用户将姿态从图7B所示的双臂贴近身体两侧调整为伸直双臂且双臂与身体保持一定距离。但用户的双腿仍未并拢。即用户的姿态仍不为体态评估姿态。电子设备100可以显示图7C所示的动作提示711D。该动作提示711D的内容可以为:请将双腿并拢。电子设备100还可以语音播放动作提示711D的内容。
示例性的,用户根据动作提示711D调整姿态。电子设备100可以显示图7D所示的用户界面710。由图7D所示的显示区域712可以看出,用户将姿态从图7C所示的双腿张开调整为双腿并拢。电子设备100根据上述步骤S620中的方法确定出用户的姿态为体态评估姿态。电子设备100可以显示图7D所示的动作提示711E,来提示用户保持体态评估姿态。动作提示711E的内容可以为:请保持该姿态,即将拍照。电子设备100还可以语音播报动作提示711E的内容。当提示用户保持体态评估姿态之后,电子设备100可以采集图像并将采集得到的图像作为体态评估图像。
由上述图7A~图7D所示的实施例可知,电子设备100提示用户调整姿态的提示内容可以随用户姿态的变化而变化。电子设备100可以针对用户的姿态与体态评估姿态不同的地方对用户进行提示。这可以让用户了解自己的哪个部位与体态评估姿态不同,从而更好地引导用户完成体态评估姿态对应的动作。
上述图7A~图7D所示的用户界面710仅为示例性说明,不应对本申请构成限定。在一些实施例中,电子设备100也可仅在显示屏上显示上述显示区域711。即电子设备100可以不在显示屏上显示用户的图像。
在一些实施例中,上述体态评估图像也可以是其它电子设备,例如电子设备200,采集的。电子设备200与电子设备100之间建立有通信连接。电子设备200得到体态评估图像之后,可以将体态评估图像发送给电子设备100。电子设备100可以利用体态评估图像进行体态评估。
在本申请提供的体态评估方法中,电子设备100可以利用人体关键点在人体轮廓上确定一个或多个边缘点。这一个或多个边缘点可用于判断用户的姿态是否为预设的体态评估姿态,还可用于进行体态评估。
下面具体介绍本申请提供的利用人体关键点在人体轮廓上确定一个或多个边缘点的方法。
由前述人体关键点识别的实施例可知,电子设备100可以得到图像中用户的各人体关键点在像素坐标系中的坐标。由前述人像分割的实施例可知,电子设备100可以得到一个数据集合。这一个数据集合可包含在像素坐标系中,图像上各个位置的像素对应的像素值。其中,图像中用户的人像所在的区域的像素值均相同(例如为255)。图像中用户的人像不在的区域的像素值均相同(例如为0)。图像中用户的人像所在的区域的像素值和人像不在的区域的像素值不同。
在一种可能的实现方式中,电子设备100可以根据上述数据集合确定出人体轮廓上的所有边缘点。具体方法可以参考前述图6C所示实施例的介绍。电子设备100可以利用人体关键点所在的直线,在人体轮廓上确定出一个或多个边缘点。以人体关键点A为例进行说明。关键点A可以是前述图2所示关键点中的任意一个关键点。具体的,电子设备100根据人体关键点A在像素坐标系中的位置确定出人体关键点A所在的直线L1在像素坐标系中的表达式。那么,电子设备100可以判断直线L1上哪些像素是人体轮廓上的边缘点。这样,电子设备100可以得到直线L1与人体轮廓的交点在像素坐标系中的位置。上述交点即为电子设备100利用人体关键点A在人体轮廓上确定出的边缘点。
在另一种可能的实现方式中,电子设备100可以根据人体关键点A在像素坐标系中的位置确定出人体关键点A所在的直线L1在像素坐标系中的表达式。进一步地,电子设备100可以判断直线L1上哪些像素是上述数据集合中像素值出现变化(如,从0变化为255、从255变化为0)的位置所在的像素。电子设备100可以将直线L1上为上述数据集合中像素值出现变化的位置的像素确定为边缘点。上述边缘点相当于直线L1与人体轮廓的交点,也即人体关键点A在人体轮廓上确定出的边缘点。这样,电子设备100可以得到人体关键点A在人体轮廓上确定出的边缘点在像素坐标系中的位置。
本申请实施例对电子设备利用人体关键点在人体轮廓上确定出边缘点的实现方法不作限定。
本申请实施例中人体关键点所在直线与人体轮廓的交点的位置可以通过上述实现方法确定。后续实施例中对此不再赘述。
下面具体介绍电子设备100利用体态评估图像来评估用户是否有高低肩、X型腿、O型腿、XO型腿以及脊柱侧弯的实现方法。
(1)高低肩
在一种可能的实现方式中,电子设备100可以利用左肩点和右肩点在人体轮廓上确定出的边缘点来判断用户是否有高低肩。
示例性的,如图8A所示,电子设备100可以确定出经过右肩点,且与像素坐标系的x轴垂直的直线L2。直线L2与人体轮廓上右肩部位的交点为C1。上述交点C1即为直线L2上在像素坐标系中的y值大于右肩点的y值,且与右肩点最接近的一个边缘点。电子设备100可以确定出经过左肩点,且与像素坐标系的x轴垂直的直线L3。直线L3与人体轮廓上左肩部位的交点为C2。上述交点C2即为直线L3上在像素坐标系中的y值大于左肩点的y值,且与左肩点最接近的一个边缘点。
电子设备100可以判断交点C1和交点C2所在的直线C1C2与水平方向的直线(即与像素坐标系的y轴垂直的直线)之间的夹角是否小于预设的阈值W6。上述阈值W6可以例如是5°、6°等等。本申请实施例对阈值W6的取值不作限定。直线C1C2与水平方向的直线之间的夹角为θ2。其中,若交点C1在像素坐标系的位置为(Xc1,Yc1),交点C2在像素坐标系的位置为(Xc2,Yc2),则θ2=arctan[(Yc2-Yc1)/(Xc2-Xc1)]。若θ2的绝对值小于上述阈值W6,电子设备100可以确定用户没有高低肩。若θ2的绝对值大于或等于上述阈值W6,电子设备100可以确定用户有高低肩,且可以根据θ2的正负确定用户是左肩高还是右肩高。例如,θ2为负可以表示用户的右肩高于左肩。θ2为正可以表示用户的左肩高于右肩。电子设备100还可以根据θ2的绝对值的大小确定用户高低肩的严重程度。θ2的绝对值越大,用户高低肩的体态问题越严重。
或者,电子设备100可以比较交点C1和交点C2在像素坐标系中的位置在y轴上的取值。例如,交点C1在像素坐标系的位置为(Xc1,Yc1)。交点C2在像素坐标系的位置为(Xc2,Yc2)。电子设备100可以判断|Yc1-Yc2|是否小于预设的阈值W7。|Yc1-Yc2|可以表示Yc1-Yc2的绝对值。上述阈值W7可以例如是5个像素、6个像素等等。本申请实施例对阈值W7的取值不作限定。若|Yc1-Yc2|小于阈值W7,电子设备100可以确定用户没有高低肩。若|Yc1-Yc2|大于或等于阈值W7,电子设备100可以确定用户有高低肩。其中,电子设备100可以根据Yc1和Yc2中哪一个更大来确定用户是左肩高还是右肩高。例如,若Yc1大于Yc2,则用户的右肩高于左肩。反之,则用户的左肩高于右肩。电子设备100还可以根据|Yc1-Yc2|的大小来确定用户高低肩的严重程度。|Yc1-Yc2|越大,用户高低肩的体态问题越严重。
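以基于夹角θ2的第一种比较方式为例,上述判断可用如下Python草图表示。其中C1、C2为双肩边缘点的像素坐标(假设两点横坐标不同),阈值W6以度为单位,函数命名为本示例的假设,仅作示意。

```python
import math

def assess_shoulder_tilt(c1, c2, w6=5.0):
    """根据右肩边缘点C1与左肩边缘点C2评估高低肩(示意性实现)。

    返回(是否有高低肩, theta2)。theta2为直线C1C2与水平线的夹角(度),
    按正文约定:theta2=arctan[(Yc2-Yc1)/(Xc2-Xc1)],
    theta2为负表示右肩高于左肩,为正表示左肩高于右肩。
    """
    theta2 = math.degrees(math.atan((c2[1] - c1[1]) / (c2[0] - c1[0])))
    return abs(theta2) >= w6, theta2
```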
在一些实施例中,电子设备100可以在高低肩的评估结果中显示图8B所示的高低肩评估结果示意图。在高低肩评估结果示意图中,电子设备100可以在用户的人体图像上显示上述交点C1、交点C2和直线C1C2。直线C1C2可以表示用户双肩的方向。电子设备100还可以在用户的人体图像中显示一条水平方向的直线。这一条水平方向的直线可以表示正常体态(即没有高低肩)的人体双肩的方向。上述直线C1C2与上述水平方向的直线可以方便用户直观地查看自己是否具有高低肩以及高低肩的严重程度。
由上述实施例可知,电子设备100可以根据人体关键点在人体轮廓的肩部确定边缘点,并利用该边缘点来比较进行体态评估的用户双肩的方向与正常体态的人体双肩的方向,得到用户的高低肩评估结果。由于边缘点的位置是确定的,上述方法可以减少检测到的人体关键点的位置浮动对高低肩评估结果的影响,提高高低肩评估结果的准确率。
(2)X型腿、O型腿、XO型腿
在一种可能的实现方式中,电子设备100可以利用腿部的关键点在人体轮廓上确定出的边缘点来判断用户的腿型。
示例性的,如图8C所示,电子设备100可以确定出右膝点、左膝点所在的直线L4。直线L4与人体轮廓上左腿内侧的交点为B1,与人体轮廓上右腿内侧的交点为B2。交点B1即为直线L4上在像素坐标系中的x值小于左膝点的x值,大于右膝点的x值,且与左膝点最接近的一个边缘点。交点B2即为直线L4上在像素坐标系中x值小于左膝点的x值,大于右膝点的x值,且与右膝点最接近的一个边缘点。电子设备100可以确定出右脚踝点、左脚踝点所在的直线L5。直线L5与人体轮廓上左腿内侧的交点为B3,与人体轮廓上右腿内侧的交点为B4。交点B3即为直线L5上在像素坐标系中x值小于左脚踝点的x值,大于右脚踝点的x值,且与左脚踝点最接近的一个边缘点。交点B4即为直线L5上在像素坐标系中x值小于左脚踝点的x值,大于右脚踝点的x值,且与右脚踝点最接近的一个边缘点。
电子设备100可以计算交点B1和交点B2之间的线段B1B2的长度len(B1B2)、交点B3和交点B4之间的线段B3B4的长度len(B3B4)、交点B1和交点B3之间的线段B1B3的长度len(B1B3)。
在一些实施例中,若体态评估图像中,用户的双膝相碰使得直线L4与人体轮廓上双腿内侧没有交点,则上述len(B1B2)的值为0。若体态评估图像中,用户的双足内脚踝相碰使得直线L5与人体轮廓上双腿内侧没有交点,则上述len(B3B4)的值为0。
电子设备100可以判断len(B1B2)/len(B1B3)的值是否大于预设的阈值W8。上述阈值W8可以例如是0.1、0.15等等。本申请实施例对阈值W8的取值不作限定。若len(B1B2)/len(B1B3)大于阈值W8,电子设备100可以确定出用户的腿型为O型腿。可以理解的,len(B1B2)/len(B1B3)大于阈值W8可以表示在双腿自然直立且并拢的情况下,用户的双膝分离而不能贴近。即用户的腿型为O型腿。len(B1B2)/len(B1B3)越大,用户O型腿的程度越严重。len(B1B2)/len(B1B3)小于或等于阈值W8可以表示在双腿自然直立且并拢的情况下,用户的双膝能够贴近。即用户的腿型不为O型腿。
若上述len(B1B2)/len(B1B3)小于或等于阈值W8,电子设备100可以判断len(B3B4)/len(B1B3)是否大于阈值W9。上述阈值W9可以例如是0.1、0.15等等。本申请实施例对阈值W9的取值不作限定。若len(B3B4)/len(B1B3)大于阈值W9,电子设备100可以确定出用户的腿型为X型腿。可以理解的,len(B3B4)/len(B1B3)大于阈值W9可以表示在双腿自然直立且并拢的情况下,用户的双足内脚踝分离而不能贴近。即用户的腿型为X型腿。len(B3B4)/len(B1B3)越大,用户X型腿的程度越严重。len(B3B4)/len(B1B3)小于或等于阈值W9可以表示在双腿自然直立且并拢的情况下,用户的双足内脚踝能够贴近。即用户的腿型不为X型腿。
在双腿自然直立且并拢的情况下,若用户的双膝能够贴近,且双足内脚踝能够贴近,则用户的腿型可能是正常的,也可能是XO型腿。进一步的,若len(B3B4)/len(B1B3)小于或等于阈值W9,电子设备100可以判断用户的左小腿和右小腿是否分离而不能贴近。具体的,电子设备100可以将左膝点与左脚踝点之间的线段三等分,并确定该线段的三等分点(如图8C所示的三等分点B5和三等分点B7)。电子设备100可以将右膝点与右脚踝点之间的线段三等分,并确定该线段中的三等分点(如图8C所示的三等分点B6和三等分点B8)。电子设备100可以确定三等分点B5和三等分点B7所在的直线L6,以及三等分点B6和三等分点B8所在的直线L7。
电子设备100可以确定在直线L6和直线L7之间,人体轮廓上左腿内侧的边缘点组A,以及人体轮廓上右腿内侧的边缘点组B。上述边缘点组A和边缘点组B均可包含多个边缘点。电子设备100可以从边缘点组A和边缘点组B中选取在像素坐标系中的y值相同的一对或多对边缘点。其中,一对边缘点包含一个来自边缘点组A的边缘点和一个来自边缘点组B的边缘点。一对边缘点中的两个边缘点在像素坐标系中的y值相同。电子设备100可以计算一对边缘点中的两个边缘点之间的距离,该距离可记为len(一对边缘点)。电子设备100可以判断len(一对边缘点)/len(B1B3)是否大于预设的阈值W10。上述阈值W10可以例如是0.1、0.2等等。本申请实施例对阈值W10的取值不作限定。若上述多对边缘点均满足len(一对边缘点)/len(B1B3)大于阈值W10,电子设备100可以确定用户的左小腿和右小腿分离而不能贴近。即用户的腿型为XO型腿。否则,电子设备100可以确定用户的腿型为正常腿型。其中,由上述多对边缘点的len(一对边缘点)/len(B1B3)得到的均值越大,用户XO型腿的程度越严重。
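上述腿型判断可以概括为一个按归一化比值逐级判断的决策流程。以下Python草图假设各距离均已根据前述边缘点计算得到,函数命名与各阈值取值仅为本示例的假设,并非本申请的正式实现。

```python
def classify_leg_shape(len_b1b2, len_b3b4, len_b1b3,
                       calf_pair_dists, w8=0.1, w9=0.1, w10=0.1):
    """按归一化比值逐级判断腿型(示意性实现)。

    len_b1b2:双膝内侧边缘点B1、B2之间的距离(双膝相碰时为0)
    len_b3b4:双足内脚踝边缘点B3、B4之间的距离(相碰时为0)
    len_b1b3:左膝内侧边缘点B1与左脚内侧边缘点B3之间的距离(归一化基准,正常站立时大于0)
    calf_pair_dists:M对小腿内边缘点各自之间的距离列表
    """
    if len_b1b2 / len_b1b3 > w8:
        return "O型腿"          # 双膝分离而不能贴近
    if len_b3b4 / len_b1b3 > w9:
        return "X型腿"          # 双足内脚踝分离而不能贴近
    if calf_pair_dists and all(d / len_b1b3 > w10 for d in calf_pair_dists):
        return "XO型腿"         # 左小腿和右小腿分离而不能贴近
    return "正常腿型"
```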
不限于将上述左膝点与左脚踝点之间的线段、右膝点与右脚踝点之间的线段三等分来选取人体轮廓上的边缘点。电子设备100还可以将左膝点与左脚踝点之间的线段、右膝点与右脚踝点之间的线段分为更多等分(如四等分、五等分)来选取人体轮廓上的边缘点,实现判断用户的左小腿或右小腿是否分离而不能贴近。或者,电子设备100还可以将左膝点与左脚踝点之间的线段、右膝点与右脚踝点之间的线段二等分。电子设备100可以从人体轮廓上双腿内侧的边缘点中,选取在像素坐标系的y值与二等分点的y值相差预设差值的边缘点,并利用选取的边缘点判断用户的左小腿或右小腿是否分离而不能贴近。本申请实施例对选取用于判断用户的左小腿或右小腿是否分离而不能贴近的边缘点的方法不作限定。
在一些实施例中,除了上述交点B1和交点B2,电子设备100还可以在用户的双膝内侧的边缘点中选取多对边缘点来判断用户的双膝是否贴近。例如,电子设备100可以将直线L4向像素坐标系的y轴的正方向平移Δy个像素。电子设备100可以确定平移直线L4得到的直线与人体轮廓上双腿内侧的两个边缘点,并判断这两个边缘点之间的距离比len(B1B3)的比值是否大于上述阈值W8。若双膝内侧的边缘点中的多对边缘点均满足:一对边缘点包含的两个边缘点之间的距离比len(B1B3)的比值大于上述阈值W8,电子设备100可以确定用户的腿型为O型腿。
同样的,除了上述交点B3和交点B4,电子设备100还可以在用户的双足内脚踝的边缘点中选取多对边缘点来判断用户的双足内脚踝是否贴近。
上述选取多对边缘点来判断用户腿型的方法可以减少人像分割过程的误差、用户所穿衣物等因素对腿型评估结果的影响,提高腿型评估结果的准确率。
上述评估用户腿型的方法中,对双膝内侧的边缘点之间的距离、双足内脚踝的边缘点之间的距离、左小腿和右小腿内侧的边缘点之间的距离进行了归一化处理。即通过上述距离与len(B1B3)的比值来判断用户的腿型。上述归一化处理可以使得上述评估腿型的方法适用于不同的用户。通常的,若两个用户的腿型为同等程度的O型腿,在双腿自然直立且并拢的情况下,腿越长的用户,双膝之间的距离越大。若两个用户的腿型为同等程度的X型腿,在双腿自然直立且并拢的情况下,腿越长的用户,双足内脚踝之间的距离越大。若两个用户的腿型为同等程度的XO型腿,在双腿自然直立且并拢的情况下,腿越长的用户,左小腿与右小腿之间的距离越大。
不限于通过计算双膝内侧的边缘点之间的距离、双足内脚踝的边缘点之间的距离、左小腿和右小腿内侧的边缘点之间的距离中任意一个距离与len(B1B3)的比值来进行归一化处理,电子设备100还可以利用人体轮廓上右腿内侧的交点B2和交点B4之间的距离来进行归一化处理。本申请实施例对上述归一化处理的具体方法不作限定。
在一些实施例中,电子设备100可以先通过判断len(B3B4)/len(B1B3)是否大于上述阈值W9,来判断用户是否为X型腿。若用户不为X型腿(即len(B3B4)/len(B1B3)小于或等于阈值W9),电子设备100可以进一步通过判断len(B1B2)/len(B1B3)是否大于上述阈值W8,来判断用户是否为O型腿。也就是说,在评估用户腿型的过程中,本申请实施例对判断X型腿、O型腿、XO型腿的顺序不作限定。
在一些实施例中,电子设备100可以在腿型的评估结果中显示图8D所示的腿型评估结果示意图。在腿型评估结果示意图中,电子设备100可以在用户的人体图像上显示用户的左膝点、右膝点、左脚踝点、右脚踝点等人体关键点。电子设备100还可以在用户的人体图像中显示用于表示用户的双腿形状的线条。上述线条可以是通过用户腿部的人体关键点拟合得到的,或者可以是通过人体轮廓上双腿的边缘点拟合得到的。通过上述腿部的人体关键点和表示用户的双腿形状的线条,用户可以直观地了解自己的腿型。本申请实施例对腿型评估结果示意图的展示形式不作限定。腿型评估结果示意图上还可以包含更多或更少的标记。
由上述实施例可知,电子设备100可以根据人体关键点和人体轮廓确定双膝内侧的边缘点之间的距离、双足内脚踝的边缘点之间的距离、左小腿和右小腿内侧的边缘点之间的距离。并利用上述距离来比较进行体态评估的用户的腿型与正常体态的人体的腿型,得到用户的腿型评估结果。由于边缘点的位置是确定的,人体轮廓上左腿与右腿处于同一高度的边缘点之间的距离可以更好地反映用户的腿型。这可以减少在仅利用人体关键点进行腿型评估时,人体关键点的位置浮动对腿型评估结果的影响,提高腿型评估结果的准确率。
(3)脊柱侧弯
在一种可能的实现方式中,电子设备100可以利用躯干上的关键点来判断用户是否有脊柱侧弯。
示例性的,如图8E所示,电子设备100可以确定出右肩点与右髋点之间的线段L8,以及左肩点与左髋点之间的线段L9。电子设备100可以计算得到线段L8的长度为len(L8),以及线段L9的长度为len(L9)。电子设备100可以判断min[len(L8),len(L9)]/max[len(L8),len(L9)]是否小于预设的阈值W11。上述阈值W11可以是小于或等于1,且与1接近的值,例如,0.95、0.9等等。本申请实施例对阈值W11的取值不作其它限定。上述min[len(L8),len(L9)]可以表示选取len(L8),len(L9)中最小的一个值。上述max[len(L8),len(L9)]可以表示选取len(L8),len(L9)中最大的一个值。
可以理解的,若用户没有脊柱侧弯,len(L8)与len(L9)的长度应当相同或者差距特别小。那么,若min[len(L8),len(L9)]/max[len(L8),len(L9)]小于上述阈值W11,电子设备100可以确定用户有脊柱侧弯。若min[len(L8),len(L9)]/max[len(L8),len(L9)]大于或等于阈值W11,电子设备100可以确定用户没有脊柱侧弯。
在另一种可能的实现方式中,电子设备100可以利用躯干上的关键点在人体轮廓上确定出的边缘点来判断用户是否有脊柱侧弯。
示例性的,如图8F所示,电子设备100可以确定出经过右肩点,且与像素坐标系的x轴垂直的直线L10。直线L10与人体轮廓上右肩部位的交点为F1。电子设备100可以确定出经过左肩点,且与像素坐标系的x轴垂直的直线L11。直线L11与人体轮廓上左肩部位的交点为F2。电子设备100可以确定出经过右髋点,且与像素坐标系的y轴垂直的直线L12。直线L12与人体轮廓上右侧腰部的交点为F3。交点F3即为直线L12上在像素坐标系中的x值小于右髋点的x值,且与右髋点最接近的一个边缘点。电子设备100可以确定出经过左髋点,且与像素坐标系的y轴垂直的直线L13。直线L13与人体轮廓上左侧腰部的交点为F4。交点F4即为直线L13上在像素坐标系中的x值大于左髋点的x值,且与左髋点最接近的一个边缘点。
电子设备100可以计算交点F1与交点F3之间的线段F1F3的长度,得到len(F1F3)。电子设备100可以计算交点F2和交点F4之间的线段F2F4的长度,得到len(F2F4)。电子设备100可以判断min[len(F1F3),len(F2F4)]/max[len(F1F3),len(F2F4)]是否小于预设的上述阈值W11。若min[len(F1F3),len(F2F4)]/max[len(F1F3),len(F2F4)]小于上述阈值W11,电子设备100可以确定用户有脊柱侧弯。若min[len(F1F3),len(F2F4)]/max[len(F1F3),len(F2F4)]大于或等于阈值W11,电子设备100可以确定用户没有脊柱侧弯。
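可以看出,上述两种实现都归结为比较躯干左右两侧长度的接近程度。以下Python草图以边缘点版本为例,阈值W11取接近1的值,函数命名为本示例的假设,仅作示意。

```python
import math

def has_scoliosis(f1, f3, f2, f4, w11=0.95):
    """根据躯干右侧线段F1F3与左侧线段F2F4的长度比判断脊柱侧弯。

    min/max比值小于W11(即两侧长度差距明显)时,认为用户有脊柱侧弯。
    """
    dist = lambda p, q: math.hypot(p[0] - q[0], p[1] - q[1])
    len_right, len_left = dist(f1, f3), dist(f2, f4)
    return min(len_right, len_left) / max(len_right, len_left) < w11
```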
在一些实施例中,电子设备100可以根据人体关键点以及人体轮廓来确定脊柱侧弯评估结果示意图。
图8G~图8I示例性示出了电子设备100确定脊柱侧弯评估结果示意图的过程。
如图8G所示,电子设备100可以将颈部点与腹部点之间的线段三等分,并从线段的三等分点中确定与腹部点最接近的三等分点E1。电子设备100可以确定出经过三等分点E1,且与像素坐标系的y轴垂直的直线L14。直线L14与人体轮廓上左侧腰部的交点为E2。交点E2即为直线L14上在像素坐标系中的x值大于三等分点E1的x值,且与三等分点E1最接近的一个边缘点。电子设备100可以确定出经过左右髋中间点,且与像素坐标系的y轴垂直的直线L15。直线L15与人体轮廓上左侧腰部的交点为E3。交点E3即为直线L15上在像素坐标系中的x值大于左右髋中间点的x值,且与左右髋中间点最接近的一个边缘点。
电子设备100可以将交点E2和交点E3之间的线段E2E3向左平移至交点E3与左右髋中间点重合的位置。交点E2经过上述平移之后的位置为直线L14上的点E4。这样,电子设备100可以得到图8H所示颈部点与点E4之间的线段L16,以及点E4与左右髋中间点之间的线段L17。
在一种可能的实现方式中,电子设备100可以利用线段L16上的像素以及线段L17上的像素进行曲线拟合,得到图8I所示颈部点和左右髋中间点之间的曲线段L18。上述曲线拟合的方法可以包括最小二乘法曲线拟合、三次曲线拟合等等。本申请实施例对上述曲线拟合的方法不作限定。上述曲线段L18可以表示用户的脊椎的形状。其中,曲线段L18的曲率可以表示用户脊柱侧弯的严重程度。例如,电子设备100可以计算曲线段L18的最大曲率。曲线段L18的最大曲率越大,用户脊柱侧弯的程度越严重。本申请实施例对电子设备100判断用户脊柱侧弯的严重程度的方法不作限定。
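上述曲线拟合与曲率计算可以借助numpy的多项式拟合实现。以下Python草图假设已获得线段L16、L17上像素点的坐标序列,并以三次多项式最小二乘拟合作为一种可选方案,函数命名为本示例的假设,仅作示意。

```python
import numpy as np

def max_spine_curvature(xs, ys):
    """对脊椎走向上的像素点做三次多项式拟合x = f(y),返回最大曲率。

    xs, ys:线段L16、L17上像素在像素坐标系中的横、纵坐标序列。
    """
    p = np.poly1d(np.polyfit(ys, xs, 3))  # 三次曲线的最小二乘拟合
    dp, ddp = p.deriv(1), p.deriv(2)
    y = np.linspace(min(ys), max(ys), 100)
    # 曲率公式:k = |x''(y)| / (1 + x'(y)^2)^(3/2)
    k = np.abs(ddp(y)) / (1.0 + dp(y) ** 2) ** 1.5
    return float(k.max())
```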
电子设备100可以在脊柱侧弯的评估结果中显示图8I所示的脊柱侧弯评估结果示意图。在脊柱侧弯评估结果示意图中,电子设备100可以在用户的人体图像上显示上述拟合得到的曲线段L18。这样用户可以通过曲线段L18直观地查看自己是否具有脊柱侧弯以及脊柱侧弯的严重程度。
由上述实施例可知,电子设备100可以利用用户躯干上的人体关键点,以及根据人体关键点在人体轮廓上确定出的边缘点来拟合用户脊椎的形状。这样,电子设备100可以在脊柱侧弯评估结果示意图中显示用于表示用户脊椎的形状的曲线,方便用户了解自己是否具有脊柱侧弯以及脊柱侧弯的严重程度。
除了采集用户的正面图对用户进行体态评估,电子设备100还可以采集用户的侧面图,并以该侧面图为体态评估图像对用户进行体态评估。其中,为了进行体态评估,采集用户的正面图要求用户保持的体态评估姿态可以与采集用户的侧面图要求用户保持的体态评估姿态不同。
下面具体介绍电子设备100采集用户的侧面图,并利用该侧面图进行体态评估的实现方法。
在一种可能的实现方式中,电子设备100可以引导用户完成采集用户的侧面图要求用户保持的体态评估姿态对应的动作。这里以体态评估姿态为身体侧面朝向摄像头,且自然站立的姿态作为示例进行说明。其中,电子设备100得到体态评估图像的方法可以参考前述图6A所示的方法流程图。即电子设备100可以持续采集用户的图像,并根据图像中用户的人体关键点和人体轮廓来判断用户的姿态是否与体态评估姿态相同。电子设备100可以提示用户调整姿态,直至用户的姿态与体态评估姿态相同,从而得到体态评估图像。
这里介绍本申请实施例提供的另一种判断用户的姿态与体态评估姿态是否相同的实现方法。
如图9所示,电子设备100可以识别图像中用户的人体关键点。电子设备100可以根据人体关键点判断用户的姿态是否为站立姿态。另外,电子设备100还可以利用左髋点和右髋点来判断用户是否身体侧面朝向摄像头。可以理解的,若用户的身体侧面朝向摄像头,图像中用户的左髋点和右髋点之间的距离较小。电子设备100可以根据左髋点和右髋点在像素坐标系中的位置确定左髋点和右髋点之间的距离。
若左髋点和右髋点之间的距离小于预设的阈值W12,电子设备100可以确定用户的身体侧面朝向摄像头。那么,电子设备100可以以判断出左髋点和右髋点之间的距离小于阈值W12的图像作为体态评估图像。或者,在判断出左髋点和右髋点之间的距离小于阈值W12之后,电子设备100可以提示用户保持姿态不变,采集图像,并将该图像作为体态评估图像。上述阈值W12可以例如是5个像素、6个像素等等。本申请实施例对阈值W12的取值不作限定。
若左髋点和右髋点之间的距离大于或等于阈值W12,电子设备100可以确定用户的姿态不为预设的体态评估姿态,并根据用户的姿态与体态评估姿态的差别,提示用户调整体态。在一种可能的实现方式中,电子设备100可以判断在像素坐标系中左髋点和右髋点的x值。若右髋点的x值小于左髋点的x值,电子设备100可以提示用户向右转。其中,左髋点和右髋点之间的距离大于或等于阈值W12,且右髋点的x值小于左髋点的x值可以表示用户从正面朝向摄像头,向右转了一定角度。但用户向右转的角度不够,用户的身体没有完全侧面朝向摄像头,需要继续向右旋转。相反的,若右髋点的x值大于左髋点的x值,电子设备100可以提示用户向左转。
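上述侧面朝向的判断与转向提示逻辑可概括为如下Python草图。其中左右髋点为像素坐标,阈值W12以像素为单位,函数命名与返回的提示文案均为本示例的假设,仅作示意。

```python
import math

def side_facing_prompt(left_hip, right_hip, w12=5.0):
    """判断身体是否侧面朝向摄像头,并给出转向提示(示意性实现)。"""
    if math.hypot(left_hip[0] - right_hip[0],
                  left_hip[1] - right_hip[1]) < w12:  # 两髋点距离足够小
        return "请保持该姿态,即将拍照"
    # 右髋点的x值小于左髋点,说明用户已向右转但角度不够
    return "请向右转" if right_hip[0] < left_hip[0] else "请向左转"
```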
不限于通过上述左髋点和右髋点来判断用户的身体是否侧面朝向摄像头,电子设备100还可以通过其它方法来判断用户的身体是否侧面朝向摄像头。
示例性的,在引导用户完成体态评估姿态对应的动作的过程,电子设备100可以显示图10A所示的用户界面1010。用户界面1010可以包括显示区域1011和显示区域1012。其中:
显示区域1011可用于显示体态评估姿态的示例图1011A,以及动作提示1011B。由体态评估姿态的示例图1011A可以看出,体态评估姿态为身体侧面朝向摄像头,且自然站立的姿态。动作提示1011B包含的内容可以为:请侧对屏幕站立。这样,用户可以根据示例图1011A和动作提示1011B来调整自己的姿态,以达到预设的体态评估姿态。
显示区域1012可用于显示电子设备100通过摄像头193采集的图像。该图像可包括进行体态评估的用户。用户可以根据显示区域1012中的图像查看自己的姿态。这样,用户可以直观地了解到自己的姿态与体态评估姿态的差别,从而可以快速地进行姿态调整。
由图10A所示的显示区域1012可以看出,用户的姿态与体态评估姿态不同。其中,用户正面朝向摄像头193,未侧对屏幕站立。电子设备100可以根据上述图9所示的实施例来判断用户的姿态与体态评估姿态的差别,在显示区域1011显示动作提示1011B。其中,电子设备100还可以语音播放动作提示1011B中的内容,以引导用户将姿态调整为体态评估姿态。
示例性的,用户可以根据图10A所示的动作提示1011B调整姿态。电子设备100可以显示图10B所示的用户界面1010。由图10B所示的显示区域1012可以看出,用户从图10A所示的正面朝向摄像头,向右转了一定角度。电子设备100可以根据上述图9所示的实施例对用户处于图10B所示显示区域1012的姿态的图像进行分析,确定出用户向右旋转的角度不够,用户的姿态不为体态评估姿态。电子设备100可以显示图10B所示的动作提示1011C。该动作提示1011C的内容可以为:请向右转。电子设备100还可以语音播放动作提示1011C的内容。
示例性的,用户可以根据动作提示1011C调整姿态。电子设备100可以显示图10C所示的用户界面1010。由图10C所示的显示区域1012可以看出,用户在图10B所示显示区域1012的姿态的基础上,向右转了一定角度。电子设备100可以根据上述图9所示的实施例对用户处于图10C所示显示区域1012的姿态的图像进行分析,确定出用户的姿态为体态评估姿态。电子设备100可以显示图10C所示的动作提示1011D,来提示用户保持体态评估姿态。动作提示1011D的内容可以为:请保持该姿态,即将拍照。电子设备100还可以语音播放动作提示1011D的内容。当提示用户保持体态评估姿态之后,电子设备100可以采集图像,并将采集得到的图像作为体态评估图像。
上述图10A~图10C所示的用户界面1010仅为示例性说明,不应对本申请构成限定。
由上述图10A~图10C所示的实施例可知,电子设备100提示用户调整姿态的提示内容可以随用户姿态的变化而变化。电子设备100可以针对用户的姿态与体态评估姿态不同的地方对用户进行提示。这可以让用户了解自己的哪个部位与体态评估姿态不同,从而更好地引导用户完成体态评估姿态对应的动作。
这里介绍本申请实施例提供的以用户的侧面图为体态评估图像,电子设备100评估用户是否驼背的实现方式。
在一种可能的实现方式中,电子设备100可以利用人体关键点在人体轮廓上确定出用户背部的边缘点来判断用户是否驼背。
示例性的,如图11A所示,电子设备100可以先根据人体关键点判断用户身体的朝向。可以理解的,用户可以从面朝摄像头的站立姿态,向右旋转来完成身体侧面朝向摄像头站立的姿态对应的动作。用户也可以从面朝摄像头的站立姿态,向左旋转来完成身体侧面朝向摄像头站立的姿态对应的动作。也就是说,用户的身体在像素坐标系中有可能朝向x轴的负方向(即左侧),人体轮廓上胸部区域的边缘点的x值小于背部区域的边缘点的x值。用户的身体在像素坐标系中也有可能朝向x轴的正方向(即右侧),人体轮廓上胸部区域的边缘点的x值大于背部区域的边缘点的x值。在一种可能的实现方式中,电子设备100可以利用用户的左脚踝点和右脚踝点来判断用户身体的朝向。其中,电子设备100可以比较像素坐标系中左脚踝点的y值和右脚踝点的y值。若左脚踝点的y值小于右脚踝点的y值,电子设备100可以确定用户的身体在像素坐标系中朝向x轴的负方向。若左脚踝点的y值大于右脚踝点的y值,电子设备100可以确定用户的身体在像素坐标系中朝向x轴的正方向。本申请实施例对电子设备100判断用户身体的朝向的方法不作限定。
本申请后续实施例中以用户的身体在像素坐标系中朝向x轴的负方向为例进行说明。本领域技术人员应当理解:在用户的身体在像素坐标系中朝向x轴的负方向的情况下判断用户是否驼背以及驼背的严重程度的方法,与在用户的身体在像素坐标系中朝向x轴的正方向的情况下判断用户是否驼背以及驼背的严重程度的方法相同。本申请实施例对在用户的身体在像素坐标系中朝向x轴的正方向的情况下判断用户是否驼背以及驼背的严重程度的方法不展开说明。
在确定用户身体的朝向之后,电子设备100可以在人体轮廓上选取背部曲线。具体的,电子设备100可以确定经过左肩点,且与像素坐标系的y轴垂直的直线L19。直线L19与人体轮廓上背部区域的交点为G1。即交点G1为直线L19上在像素坐标系中的x值大于左肩点的x值,且与左肩点最接近的一个边缘点。电子设备100可以将颈部点和腹部点之间的线段三等分,并确定出该线段的三等分点中离腹部点最近的三等分点G2。电子设备100可以确定经过三等分点G2,且与像素坐标系的y轴垂直的直线L20。直线L20与人体轮廓上背部区域的交点为G3。即交点G3为直线L20上在像素坐标系中的x值大于三等分点G2的x值,且与三等分点G2最接近的一个边缘点。
电子设备100可以选取人体轮廓上背部区域交点G1和交点G3之间的曲线线段,得到背部曲线段G1G3。其中,背部曲线段G1G3上包含在像素坐标系中的y值大于或等于G3的y值,且y值小于或等于G1的y值的边缘点。背部曲线段G1G3上的边缘点在像素坐标系中的x值均大于左肩点的x值。
如图11B所示,电子设备100可以计算以交点G1和交点G3之间的直线线段为直径的圆的面积。例如,该圆的面积为S1。电子设备100还可以计算交点G1和交点G3之间的直线线段与背部曲线段G1G3构成的闭合区域的面积。例如,该闭合区域的面积为S2。电子设备100可以根据S2与S1的比值来判断用户是否驼背。若S2/S1小于预设的阈值W13,电子设备100可以确定用户没有驼背。上述阈值W13可以例如是0.05、0.1等等。本申请实施例对阈值W13的取值不作限定。若S2/S1大于或等于阈值W13,电子设备100可以确定用户驼背。其中,S2/S1越大,用户驼背的程度越严重。
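上述S2与S1的计算和比较可用如下Python草图表示。其中背部曲线段以按顺序排列的边缘点序列给出,闭合区域面积用鞋带公式近似,函数命名与阈值取值均为本示例的假设,仅作示意。

```python
import math

def hunchback_by_area(back_curve_points, w13=0.05):
    """根据背部曲线与其弦所围面积S2和以弦为直径的圆面积S1判断驼背。

    back_curve_points:背部曲线段G1G3上按顺序排列的边缘点[(x, y), ...],
    首尾两点即为交点G1、G3。S2/S1 >= W13 时认为用户驼背。
    """
    g1, g3 = back_curve_points[0], back_curve_points[-1]
    chord = math.hypot(g1[0] - g3[0], g1[1] - g3[1])
    s1 = math.pi * (chord / 2) ** 2                     # 以弦为直径的圆面积
    poly = back_curve_points + [back_curve_points[0]]   # 曲线与弦G3G1构成闭合多边形
    s2 = abs(sum(p[0] * q[1] - q[0] * p[1]
                 for p, q in zip(poly, poly[1:]))) / 2  # 鞋带公式求闭合区域面积
    return s2 / s1 >= w13
```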
可选的,电子设备100还可以利用背部曲线段G1G3上的边缘点,对背部曲线段G1G3进行平滑处理。本申请实施例对上述平滑处理的具体方法不作限定。电子设备100可以计算经过上述平滑处理后的曲线线段的曲率,并将最大曲率作为驼背指数。其中,电子设备100可以在上述平滑处理后的曲线线段上,每隔预设间隔(如4个像素、5个像素等)选取一个像素来计算在这一个像素所在位置上曲线线段的曲率。这可以减少电子设备100的计算量,提高判断用户是否驼背的效率。
若驼背指数小于预设的驼背指数阈值,电子设备100可以确定出用户没有驼背。若驼背指数大于或等于预设的驼背指数阈值,电子设备100可以确定出用户驼背。驼背指数越大,用户驼背的程度越严重。
本申请实施例对电子设备100选取背部曲线的方法不作限定。例如,电子设备100还可以根据颈部点和腹部点之间的线段的二等分点,或者将颈部点和腹部点之间的线段进行四等分以上的划分得到的等分点来选取背部曲线。
在一些实施例中,电子设备100可以在驼背的评估结果中显示用户的身体侧面朝向摄像头站立的人体图像,并在人体图像上标记图11A所示的背部曲线段G1G3。背部曲线段G1G3的弯曲程度可以体现用户驼背的严重程度。这样,用户可以通过背部曲线段G1G3直观地了解到自己是否驼背以及驼背的严重程度。
由上述实施例可知,驼背主要体现在用户的背部凸起。电子设备100可以先区分人体轮廓中的胸部区域和背部区域,进而在背部区域选取边缘点来判断用户是否驼背。这可以避免在判断过程中将胸部区域和背部区域的边缘点混淆,提高驼背检测结果的准确率。并且,人体轮廓上背部区域的边缘点的位置是确定的,利用背部区域的边缘点可以更加准确地判断用户是否驼背以及驼背的严重程度。
在一些实施例中,电子设备100可以提供体态评估的选项。该体态评估的选项可以包括:高低肩、腿型、脊柱侧弯和驼背。电子设备100可以根据用户选择的体态评估的选项来确定是否仅需要拍摄人体正面的体态评估图像,或者仅需要拍摄人体侧面的体态评估图像,或者需要拍摄人体正面的体态评估图像和人体侧面的体态评估图像。
例如,当被选择的体态评估的选项包含高低肩、腿型、脊柱侧弯中的一项或多项,电子设备100可以根据前述图7A~图7D所示的实施例引导用户完成体态评估姿态1(如面朝摄像头,双臂伸直,双臂与身体两侧保持一定距离,双腿自然直立且并拢)对应的动作,从而拍摄人体正面的体态评估图像。电子设备100可以根据人体正面的体态评估图像对用户进行体态评估,得到高低肩评估结果、腿型评估结果、脊柱侧弯评估结果中的一项或多项。当被选择的体态评估的选项包含驼背,电子设备100可以根据前述图10A~图10C所示的实施例引导用户完成体态评估姿态2(如身体侧面朝向摄像头,且自然站立)对应的动作,从而拍摄人体侧面的体态评估图像。电子设备100可以根据人体侧面的体态评估图像对用户进行体态评估,得到驼背评估结果。
在一些实施例中,电子设备100可以引导用户完成上述体态评估姿态1和体态评估姿态2对应的动作,得到人体正面的体态评估图像和人体侧面的体态评估图像。然后,电子设备100可以对用户进行体态评估,得到高低肩评估结果、腿型评估结果、脊柱侧弯评估结果、驼背评估结果。
在一些实施例中,电子设备100可以引导用户完成体态评估姿态3对应的动作。上述体态评估姿态3可以是双臂与身体两侧保持一定距离,双腿自然直立且并拢的姿态。或者,上述体态评估姿态3还可以是双臂伸直,双臂与身体两侧保持一定距离,双腿自然直立且并拢的姿态。电子设备100可以拍摄用户在保持体态评估姿态3的状态下,多个角度的图像(如正面图、侧面图等等)。在一种可能的实现方式中,电子设备100可以先引导用户完成上述体态评估姿态1对应的动作,采集得到用户保持体态评估姿态3的正面图。然后,电子设备100可以提示用户保持体态评估姿态3,并向左(或向右)旋转90°,采集得到用户保持体态评估姿态3的侧面图。电子设备100还可以提示用户保持体态评估姿态3旋转除90°之外的角度(如45°等等),采集得到用户保持体态评估姿态3的其它角度的图像。
电子设备100可以利用上述多个角度的图像进行3D建模,得到姿态为体态评估姿态3的3D人体模型。电子设备100可以利用上述3D人体模型的正面图和侧面图进行体态评估,得到高低肩评估结果、腿型评估结果、脊柱侧弯评估结果、驼背评估结果。上述利用3D人体模型的正面图和侧面图进行体态评估的实现方法可以参考前述实施例对利用人体正面的体态评估图像和人体侧面的体态评估图像进行体态评估的实现方法。这里不再赘述。
可以理解的,利用3D人体模型确定出的人体轮廓,通常比直接利用对用户拍摄得到的图像进行人像分割确定出的人体轮廓更加平滑,可以更好地反映用户的姿态,提升体态评估准确率。
图12A~图12E示例性示出了本申请实施例提供的一些体态评估结果的用户界面示意图。
在一些实施例中,当接收到进行体态评估的用户操作,电子设备100可以采集用户的图像并进行体态评估。电子设备100可以存储体态评估结果。这样,用户可以查看该次以及曾经进行体态评估后的结果。
如图12A所示,电子设备100可以显示用户界面1210。用户界面1210可用于用户查看一次或多次体态评估的结果。用户界面1210可包含标题1211、体态分析报告选项1212、体态分析报告选项1213、体态分析报告选项1214、人体示意图1215。其中:
标题1211可用于指示用户界面1210包含的内容。例如,标题1211可以为文字标识“历史报告”。即用户可以通过用户界面1210来查找自己已经进行体态评估,得到的体态评估结果。
体态分析报告选项1212~1214可以指示用户在不同时间进行体态评估得到的体态评估结果。例如,体态分析报告选项1212中可包含该选项对应的体态评估结果得到的时间(如2021年12月12日10点22分)。该时间也即用户进行体态评估的时间。体态分析报告选项1212可用于触发电子设备100显示该选项对应的体态评估结果。体态分析报告选项1213和体态分析报告选项1214可以参考体态分析报告选项1212的介绍。
人体示意图1215可用于展示用户的人体图像。该人体图像可以是电子设备100采集得到的图像中的人体图像。或者,该人体图像可以是用户的人体轮廓。或者,该人体图像可以是对用户进行3D建模得到的3D人体模型。本申请实施例对上述人体图像的显示形式不作限定。其中,当一个体态分析报告选项处于选中状态,该人体图像上可标记有处于选中状态的体态分析报告选项对应的体态评估结果。例如,体态分析报告选项1212处于选中状态。体态分析报告选项1212对应的体态评估结果中,用户有高低肩、驼背和脊柱侧弯,但腿型正常。人体示意图1215中的人体图像上肩部所在的部位标记有高低肩,背部对应的部位标记有驼背,躯干所在的部位标记有脊柱侧弯,腿部所在的部位标记有正常腿型。这样,用户可以在用户界面1210上快速了解到自己各次进行体态评估得到的体态评估结果。
响应于对体态分析报告选项1212的操作,例如触摸操作等,电子设备100可以显示图12B所示的用户界面1220。用户界面1220可以包含标题1221、时间1222、高低肩选项1223、脊柱侧弯选项1224、腿型选项1225、驼背选项1226、体态评估结果示意图1227、肩部示意图1223A、高低肩结果分析显示区域1223B、课程推荐显示区域1223C。其中:
标题1221可用于指示用户界面1220包含的内容。标题1221可以为文字标识“体态评估报告”。
时间1222可用于指示得到用户界面1220呈现的体态评估报告的时间。例如,2021年12月12日10点22分。
体态评估结果示意图1227可用于指示用户的体态问题。体态评估结果示意图1227可包括面朝摄像头的人体图像。该人体图像上可包含指示体态的标记(如点、线条等)。在一种可能的实现方式中,当利用用户的正面图可评估的体态问题对应的评估选项(如高低肩选项1223、脊柱侧弯选项1224、腿型选项1225)处于选中状态,电子设备100可以显示图12B所示的体态评估结果示意图1227。例如,体态评估结果示意图1227的人体图像上可包含指示用户是否有高低肩的标记、指示用户是否有脊柱侧弯的标记、指示用户的腿型的标记。这些标记可以参考前述图8B、图8D、图8I所示的实施例。这里不再赘述。
高低肩选项1223、脊柱侧弯选项1224、腿型选项1225、驼背选项1226可分别用于用户查看用户进行这一次体态评估得到的高低肩评估结果、脊柱侧弯评估结果、腿型评估结果、驼背评估结果。
响应于对高低肩选项1223的用户操作,电子设备100可以在用户界面1220上显示上述肩部示意图1223A、高低肩结果分析显示区域1223B、课程推荐显示区域1223C。其中,肩部示意图1223A可以是上述体态评估结果示意图1227中肩部的局部放大图。肩部示意图1223A可以便于用户更清楚地查看自己肩部的情况。高低肩结果分析显示区域1223B中可显示有高低肩结果分析。该高低肩结果分析可包括哪个肩膀更高以及高度差(如右肩高于左肩:0.9厘米)、高低肩的严重程度(如轻度)等等。本申请实施例对高低肩结果分析的内容不作限定。当确定出用户有高低肩,电子设备100还可以为用户推荐用于改善高低肩的课程。课程推荐显示区域1223C可包含上述电子设备100推荐的课程。
如图12C所示,响应于对脊柱侧弯选项1224的操作,电子设备100可以在用户界面1220上显示躯干示意图1224A、脊柱侧弯结果分析显示区域1224B、课程推荐显示区域1224C。其中,躯干示意图1224A可以是上述体态评估结果示意图1227中胸腹部的局部放大图。躯干示意图1224A可以便于用户更清楚地查看自己是否有脊柱侧弯的问题。脊柱侧弯结果分析显示区域1224B中可显示有脊柱侧弯结果分析。脊柱侧弯结果分析可包括脊柱侧弯的严重程度(如中度)等等。当确定出用户有脊柱侧弯,电子设备100还可以为用户推荐用于改善脊柱侧弯的课程。课程推荐显示区域1224C可包含上述电子设备100推荐的课程。
如图12D所示,响应于对腿型选项1225的操作,电子设备100可以在用户界面1220上显示腿部示意图1225A、腿型结果分析显示区域1225B。其中,腿部示意图1225A可以是上述体态评估结果示意图1227中腿部的局部放大图。腿部示意图1225A可以便于用户更清楚地查看自己腿部的形状。腿型结果分析显示区域1225B中可显示有腿型结果分析。腿型结果分析可包括用户的腿型(如正常腿型、X型腿、O型腿、XO型腿)、用户的腿型为X型腿或O型腿或XO型腿时的严重程度等等。在一些实施例中,当确定出用户的腿型不为正常腿型,电子设备100还可以为用户推荐用于改善X型腿、或改善O型腿、或改善XO型腿的课程。
如图12E所示,响应于对驼背选项1226的操作,电子设备100可以在用户界面1220上显示体态评估结果示意图1228、背部示意图1226A、驼背结果分析显示区域1226B、课程推荐显示区域1226C。其中,体态评估结果示意图1228可用于指示用户的体态问题。体态评估结果示意图1228可包括用户身体侧面朝向摄像头的人体图像。该人体图像上可包含指示体态的标记(如点、线条等)。在一种可能的实现方式中,当利用用户的侧面图可评估的体态问题对应的评估选项(如驼背选项1226)处于选中状态,电子设备100可以显示图12E所示的体态评估结果示意图1228。例如,体态评估结果示意图1228的人体图像上可包含指示用户背部弯曲程度的曲线。上述标记可以参考前述图11A所示的实施例。这里不再赘述。
背部示意图1226A可以是上述体态评估结果示意图1228中背部的局部放大图。背部示意图1226A可以便于用户从身体侧面更清楚地查看自己背部的形态。驼背结果分析显示区域1226B中可显示有驼背结果分析。驼背结果分析可包括驼背系数、驼背的严重程度(如轻度)等等。当确定出用户驼背,电子设备100还可以为用户推荐用于改善驼背的课程。课程推荐显示区域1226C可包含上述电子设备100推荐的课程。
图12A~图12E所示的用户界面1210仅为示例性介绍,不应对本申请构成限定。
由上述图12A~图12E所示的实施例可知,电子设备100可以展示用户所进行的一次或多次体态评估的结果,以及针对相应的体态问题为用户推荐改善体态问题的课程。这样,用户可以根据电子设备100进行体态评估得到的结果了解自己的体态情况,并根据电子设备100推荐的课程及时调整自己的体态,从而提高健康水平。
需要说明的是,在不产生矛盾或冲突的情况下,本申请任意实施例中的任意特征,或任意特征中的任意部分都可以组合,组合后的技术方案也在本申请实施例的范围内。
以上实施例仅用以说明本申请的技术方案,而非对其限制;尽管参照前述实施例对本申请进行了详细的说明,本领域的普通技术人员应当理解:其依然可以对前述各实施例所记载的技术方案进行修改,或者对其中部分技术特征进行等同替换;而这些修改或者替换,并不使相应技术方案的本质脱离本申请各实施例技术方案的范围。

Claims (22)

  1. 一种基于图像的姿态判断方法,其特征在于,所述方法应用于电子设备,所述电子设备包含摄像头,所述方法包括:
    通过所述摄像头采集第一图像;
    根据所述第一图像识别第一对象;
    确定所述第一对象的第一组人体关键点;
    根据所述第一图像和所述第一对象获得第二图像,其中,所述第二图像中所述第一对象所在区域的像素的像素值与所述第二图像中其它像素的像素值不同;
    根据所述第一组人体关键点和所述第二图像,判断所述第一对象的姿态是否为第一姿态。
  2. 根据权利要求1所述的方法,其特征在于,所述根据所述第一组人体关键点和所述第二图像,判断所述第一对象的姿态是否为第一姿态,具体包括:
    根据所述第二图像确定第一人体轮廓,所述第一人体轮廓包括所述第二图像上,具有第一像素值的像素和具有第二像素值的像素的交界处的像素,所述第二图像上所述具有第一像素值的像素为所述第二图像中所述第一对象所在区域的像素,所述第二图像中所述具有第二像素值的像素为所述第二图像中其它像素;
    根据所述第一组人体关键点和所述第一人体轮廓,判断所述第一对象的姿态是否为所述第一姿态。
  3. 根据权利要求2所述的方法,其特征在于,所述根据所述第一组人体关键点和所述第一人体轮廓,判断所述第一对象的姿态是否为所述第一姿态,具体包括:
    根据所述第一组人体关键点在所述第一人体轮廓上确定出第一组边缘点;
    根据所述第一组边缘点判断所述第一对象的姿态是否为所述第一姿态。
  4. 根据权利要求1-3中任一项所述的方法,其特征在于,所述通过所述摄像头采集第一图像之前,所述方法还包括:
    显示第一界面,所述第一界面包含第一提示信息,所述第一提示信息用于提示所述第一对象保持所述第一姿态。
  5. 根据权利要求4所述的方法,其特征在于,所述根据所述第一组人体关键点和所述第二图像,判断所述第一对象的姿态是否为第一姿态之后,所述方法还包括:
    在判断出所述第一对象的姿态不为所述第一姿态的情况下,显示第二界面,所述第二界面包含第二提示信息,所述第二提示信息是根据所述第一对象的姿态与所述第一姿态的第一差别确定的,所述第二提示信息用于提示所述第一对象消除所述第一差别。
  6. 根据权利要求4或5所述的方法,其特征在于,所述显示第一界面之前,所述方法还包括:
    显示第三界面,所述第三界面包括第一选项;
    接收到对所述第一选项的第一操作,所述第一界面是所述电子设备基于所述第一操作显示的。
  7. 根据权利要求1-6中任一项所述的方法,其特征在于,所述方法还包括:
    在判断出所述第一对象的姿态为所述第一姿态的情况下,根据所述第一组人体关键点和所述第二图像,判断所述第一对象是否有第一体态问题。
  8. 根据权利要求1-7中任一项所述的方法,其特征在于,所述方法还包括:
    显示第四界面,所述第四界面包含第三提示信息,所述第三提示信息用于提示所述第一 对象保持第二姿态。
  9. 根据权利要求8所述的方法,其特征在于,所述第三界面还包括第二选项,所述显示第四界面之前,所述方法还包括:
    接收到对所述第二选项的第二操作,所述第四界面是所述电子设备基于所述第二操作显示的。
  10. 根据权利要求8或9所述的方法,其特征在于,所述显示第四界面之后,所述方法还包括:
    通过所述摄像头采集第三图像;
    根据所述第三图像识别所述第一对象;
    确定所述第一对象的第二组人体关键点;
    根据所述第二组人体关键点,判断所述第一对象的姿态是否为第二姿态,所述第二姿态与所述第一姿态不同。
  11. 根据权利要求10所述的方法,其特征在于,所述方法还包括:
    在判断出所述第一对象的姿态为所述第二姿态的情况下,根据所述第三图像和所述第一对象获得第四图像,其中,所述第四图像中所述第一对象所在区域的像素的像素值与所述第四图像中其它像素的像素值不同;
    根据所述第二组人体关键点和所述第四图像,判断所述第一对象是否有第二体态问题,所述第二体态问题与所述第一体态问题不同。
  12. 根据权利要求2-11中任一项所述的方法,其特征在于,所述第一姿态包括:双臂置于身体两侧且与腰部保持的距离在第一距离范围内,所述第一组人体关键点包括:第一左手腕点和第一右手腕点;所述根据所述第一组人体关键点和所述第一人体轮廓,判断所述第一对象的姿态是否为所述第一姿态,具体包括:
    判断所述第一左手腕点和所述第一右手腕点所在的第一直线与所述第一人体轮廓是否有6个交点。
  13. 根据权利要求2-12中任一项所述的方法,其特征在于,所述第一姿态包括:双腿自然直立且并拢,所述第一组人体关键点包括:第一左髋点、第一左膝点、第一左脚踝点、第一右髋点、第一右膝点、第一右脚踝点;所述根据所述第一组人体关键点和所述第一人体轮廓,判断所述第一对象的姿态是否为所述第一姿态,具体包括:
    根据所述第一左髋点和所述第一左膝点之间的第一左腿距离、所述第一左膝点和所述第一左脚踝点之间的第二左腿距离、所述第一左髋点和所述第一左脚踝点之间的第三左腿距离,判断所述第一左腿距离加所述第二左腿距离减所述第三左腿距离得到的第一距离是否小于第一阈值;根据所述第一右髋点和所述第一右膝点之间的第一右腿距离、所述第一右膝点和所述第一右脚踝点之间的第二右腿距离、所述第一右髋点和所述第一右脚踝点之间的第三右腿距离,判断所述第一右腿距离加所述第二右腿距离减所述第三右腿距离得到的第二距离是否小于所述第一阈值;
    判断所述第一左膝点和所述第一右膝点之间的第一线段与所述第一人体轮廓是否有2个交点,且当所述第一线段与所述第一人体轮廓有2个交点,判断所述第一线段与所述第一人体轮廓的2个交点之间的第三距离是否小于第二阈值;判断所述第一左脚踝点和所述第一右脚踝点之间的第二线段与所述第一人体轮廓是否有2个交点,且当所述第二线段与所述第一人体轮廓有2个交点,判断所述第二线段与所述第一人体轮廓的2个交点之间的第四距离是否小于第三阈值。
  14. 根据权利要求7-13中任一项所述的方法,其特征在于,所述第一体态问题包括以下一项或多项:高低肩、X型腿、O型腿、XO型腿、脊柱侧弯。
  15. 根据权利要求14所述的方法,其特征在于,所述第一组人体关键点包括第一左肩点和第一右肩点,所述第一组边缘点包括第一左肩边缘点和第一右肩边缘点,所述第一左肩边缘点是根据所述第一左肩点在所述第一人体轮廓上的左肩区域确定的,所述第一右肩边缘点是根据所述第一右肩点在所述第一人体轮廓上的右肩区域确定的;
    所述根据所述第一组人体关键点和所述第二图像,判断所述第一对象是否有第一体态问题,具体包括:
    确定所述第一左肩边缘点和所述第一右肩边缘点所在的直线与水平线的第一夹角;
    判断所述第一夹角是否大于第一角度阈值。
  16. 根据权利要求14或15所述的方法,其特征在于,所述第一组人体关键点包括第一左膝点、第一右膝点、第一左脚踝点和第一右脚踝点,所述第一组边缘点包括第一左膝内边缘点、第一右膝内边缘点、第一左脚内边缘点、第一右脚内边缘点、M对小腿内边缘点;所述第一左膝内边缘点和所述第一右膝内边缘点是所述第一左膝点和所述第一右膝点之间的线段与所述第一人体轮廓的交点,所述第一左脚内边缘点和所述第一右脚内边缘点是所述第一左脚踝点和所述第一右脚踝点之间的线段与所述第一人体轮廓的交点;所述M对小腿内边缘点中的一对小腿内边缘点包含位于同一高度的左小腿内边缘点和右小腿内边缘点,所述左小腿内边缘点是所述第一人体轮廓上所述第一左膝内边缘点和所述第一左脚内边缘点之间的像素,所述右小腿内边缘点是所述第一人体轮廓上所述第一右膝内边缘点和所述第一右脚内边缘点之间的像素,M为正整数;
    所述根据所述第一组人体关键点和所述第二图像,判断所述第一对象是否有第一体态问题,具体包括:
    确定所述第一左膝内边缘点和所述第一左脚内边缘点之间的第五距离,以及所述第一左膝内边缘点和所述第一右膝内边缘点之间的第六距离,并判断所述第六距离比所述第五距离的第一比值是否大于第四阈值;
    在所述第一比值小于或等于所述第四阈值的情况下,确定所述第一左脚内边缘点和所述第一右脚内边缘点之间的第七距离,并判断所述第七距离比所述第六距离的第二比值是否大于第五阈值;
    在所述第二比值小于或等于所述第五阈值的情况下,判断所述M对小腿内边缘点中任意一对小腿内边缘点之间的距离比所述第六距离的比值是否大于第六阈值。
  17. 根据权利要求14-16中任一项所述的方法,其特征在于,所述第一组人体关键点包括第一左肩点、第一右肩点、第一左髋点和第一右髋点,所述第一组边缘点包括第一左肩边缘点、第一右肩边缘点、第一左髋边缘点和第一右髋边缘点,所述第一左肩边缘点是根据所述第一左肩点在所述第一人体轮廓上的左肩区域确定的,所述第一右肩边缘点是根据所述第一右肩点在所述第一人体轮廓上的右肩区域确定的,所述第一左髋边缘点是根据所述第一左髋点在所述第一人体轮廓上的左腰区域确定的,所述第一右髋边缘点是根据所述第一右髋点在所述第一人体轮廓上的右腰区域确定的;
    所述根据所述第一组人体关键点和所述第二图像,判断所述第一对象是否有第一体态问题,具体包括:
    确定所述第一左肩边缘点和所述第一左髋边缘点之间的第八距离,确定所述第一右肩边缘点和所述第一右髋边缘点之间的第九距离;
    判断所述第八距离和所述第九距离中的较小者比较大者的第三比值是否大于第七阈值。
  18. 根据权利要求7-13中任一项所述的方法,其特征在于,所述第一体态问题包括:驼背。
  19. 根据权利要求18所述的方法,其特征在于,所述第一组边缘点包括组成所述第一人体轮廓上背部区域的第一曲线段的边缘点;
    所述根据所述第一组人体关键点和所述第二图像,判断所述第一对象是否有第一体态问题,具体包括:
    确定所述第一曲线段的两个端点之间的第一直线段与所述第一曲线段所构成闭合区域的第一面积,确定以所述第一直线段为直径的圆的第二面积,判断所述第一面积比所述第二面积的第四比值是否大于第八阈值;或者,
    对所述第一曲线段进行平滑处理,得到第一平滑曲线段,判断所述第一平滑曲线段的最大曲率是否大于第九阈值。
  20. 一种电子设备,其特征在于,所述电子设备包括摄像头、存储器和处理器,所述摄像头用于采集图像,所述存储器用于存储计算机程序,所述处理器用于调用所述计算机程序,使得所述电子设备执行权利要求1-19中任一项所述的方法。
  21. 一种计算机可读存储介质,包括指令,其特征在于,当所述指令在电子设备上运行,使得所述电子设备执行权利要求1-19中任一项所述的方法。
  22. 一种计算机程序产品,其特征在于,所述计算机程序产品包含计算机指令,当所述计算机指令在电子设备上运行,使得所述电子设备执行权利要求1-19中任一项所述的方法。
PCT/CN2023/070938 2022-01-21 2023-01-06 基于图像的姿态判断方法及电子设备 WO2023138406A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210073959.2 2022-01-21
CN202210073959.2A CN116503892A (zh) 2022-01-21 2022-01-21 基于图像的姿态判断方法及电子设备

Publications (1)

Publication Number Publication Date
WO2023138406A1 true WO2023138406A1 (zh) 2023-07-27

Family

ID=87320761

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/070938 WO2023138406A1 (zh) 2022-01-21 2023-01-06 基于图像的姿态判断方法及电子设备

Country Status (2)

Country Link
CN (1) CN116503892A (zh)
WO (1) WO2023138406A1 (zh)


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150161445A1 (en) * 2013-12-09 2015-06-11 Postech Academy - Industry Foundation Method of extracting ridge data and apparatus and method for tracking joint motion of object
US20200193152A1 (en) * 2018-12-14 2020-06-18 Microsoft Technology Licensing, Llc Human Pose Estimation
CN110495889A (zh) * 2019-07-04 2019-11-26 平安科技(深圳)有限公司 体态评估方法、电子装置、计算机设备及存储介质
CN111079513A (zh) * 2019-10-28 2020-04-28 珠海格力电器股份有限公司 姿态提醒方法、装置、移动终端及存储介质
CN111753721A (zh) * 2020-06-24 2020-10-09 上海依图网络科技有限公司 一种人体姿态的识别方法及装置
CN112070031A (zh) * 2020-09-09 2020-12-11 中金育能教育科技集团有限公司 体态检测方法、装置及设备

Also Published As

Publication number Publication date
CN116503892A (zh) 2023-07-28

Similar Documents

Publication Publication Date Title
CN110163048B (zh) 手部关键点的识别模型训练方法、识别方法及设备
CN111738122B (zh) 图像处理的方法及相关装置
WO2021179773A1 (zh) 图像处理方法和装置
US20220176200A1 (en) Method for Assisting Fitness and Electronic Apparatus
WO2021147434A1 (zh) 基于人工智能的人脸识别方法、装置、设备及介质
CN111882642B (zh) 三维模型的纹理填充方法及装置
CN110765525B (zh) 生成场景图片的方法、装置、电子设备及介质
WO2021244411A1 (zh) 一种动作自适应同步方法及电子设备
WO2024021742A1 (zh) 一种注视点估计方法及相关设备
WO2022206494A1 (zh) 目标跟踪方法及其装置
CN113538321B (zh) 基于视觉的体积测量方法及终端设备
JP6651086B1 (ja) 画像分析プログラム、情報処理端末、及び画像分析システム
WO2023029547A1 (zh) 视频处理方法和电子设备
CN111768507A (zh) 图像融合方法、装置、计算机设备及存储介质
WO2022143314A1 (zh) 一种对象注册方法及装置
WO2022057384A1 (zh) 拍摄方法和装置
WO2019000464A1 (zh) 一种图像显示方法、装置、存储介质和终端
WO2021180095A1 (zh) 获取位姿的方法及装置
US20220277845A1 (en) Prompt method and electronic device for fitness training
WO2023138406A1 (zh) 基于图像的姿态判断方法及电子设备
EP4220478A1 (en) Gesture misrecognition prevention method, and electronic device
WO2024125319A1 (zh) 虚拟人物的显示方法及电子设备
CN114510192B (zh) 图像处理方法及相关装置
WO2023072195A1 (zh) 运动分析方法、装置、电子设备及计算机存储介质
WO2024160171A1 (zh) 一种视频处理方法及相关电子设备

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23742721

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE