CN114596633A - Sitting posture detection method and terminal - Google Patents
- Publication number
- CN114596633A (application number CN202210209655.4A)
- Authority
- CN
- China
- Prior art keywords
- target
- terminal
- sitting posture
- target object
- screen
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C9/00—Measuring inclination, e.g. by clinometers, by levels
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
- G—PHYSICS
- G08—SIGNALLING
- G08B—SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
- G08B21/00—Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
- G08B21/18—Status alarms
- G08B21/24—Reminder alarms, e.g. anti-loss alarms
- G—PHYSICS
- G08—SIGNALLING
- G08B—SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
- G08B7/00—Signalling systems according to more than one of groups G08B3/00 - G08B6/00; Personal calling systems according to more than one of groups G08B3/00 - G08B6/00
- G08B7/06—Signalling systems according to more than one of groups G08B3/00 - G08B6/00; Personal calling systems according to more than one of groups G08B3/00 - G08B6/00 using electric transmission, e.g. involving audible and visible signalling through the use of sound and light sources
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M1/00—Substation equipment, e.g. for use by subscribers
- H04M1/72—Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
- H04M1/724—User interfaces specially adapted for cordless or mobile telephones
- H04M1/72448—User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions
- H04M1/72454—User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions according to context-related or environment-related conditions
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Human Computer Interaction (AREA)
- Remote Sensing (AREA)
- Environmental & Geological Engineering (AREA)
- Radar, Positioning & Navigation (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Business, Economics & Management (AREA)
- Emergency Management (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Theoretical Computer Science (AREA)
- User Interface Of Digital Computer (AREA)
- Image Analysis (AREA)
Abstract
The application discloses a sitting posture detection method and a terminal, belonging to the field of electronic technologies. The terminal can determine the positions of a plurality of key points of a target object based on a shot image obtained by shooting the target object, and, if the sitting posture of the target object is determined to be the target sitting posture based on those positions, send out first prompt information to remind the target object to adjust its sitting posture in time. The functions of the terminal are thereby effectively enriched. Moreover, if the terminal determines that it is itself placed obliquely, it can adjust the initial position of each key point based on its inclination angle to obtain the corrected position of the key point, and determine whether the sitting posture of the target object is the target sitting posture based on the corrected positions. The accuracy of detecting the sitting posture of the target object is thereby effectively improved.
Description
Technical Field
The present disclosure relates to the field of electronic technologies, and in particular, to a sitting posture detection method and a terminal.
Background
As mobile phones become ever more capable, users can make calls, hold video calls with family, play games, watch videos, and enjoy other entertainment on them. However, the functions of current mobile phones are still limited.
Disclosure of Invention
The embodiments of the present disclosure provide a sitting posture detection method and a terminal, which can address the problem in the related art that the functions of a mobile phone are limited. The technical solution is as follows:
In one aspect, a sitting posture detection method is provided, applied to a terminal that includes a camera and a gravity sensor. The method includes:
responding to a sitting posture detection instruction, shooting a target object through the camera to obtain a shot image;
determining initial positions of a plurality of key points of the target object in the shot image;
determining an inclination angle of a reference surface of the terminal relative to a horizontal plane based on the gravitational acceleration of the terminal detected by the gravity sensor, wherein the reference surface of the terminal is perpendicular to a screen of the terminal and is parallel to a first edge of the screen;
if the terminal is determined not to be placed obliquely based on the inclination angle, detecting the sitting posture of the target object based on the initial positions of the key points;
if the terminal is determined to be placed obliquely based on the inclination angle, detecting the sitting posture of the target object based on the corrected positions of the key points, wherein the corrected position of each key point is obtained by adjusting the initial position of the key point based on the inclination angle;
and if the sitting posture of the target object is the target sitting posture, sending first prompt information, wherein the first prompt information is used for prompting the target object to adjust the sitting posture.
Optionally, the determining, based on the acceleration of gravity of the terminal detected by the gravity sensor, an inclination angle of a reference plane of the terminal with respect to a horizontal plane includes:
acquiring a first acceleration component of the gravity acceleration of the terminal on a first axis and a second acceleration component of the gravity acceleration of the terminal on a second axis of a target coordinate system through the gravity sensor;
determining an inclination angle of a reference plane of the terminal with respect to a horizontal plane based on the first acceleration component and the second acceleration component;
the origin of the target coordinate system is a reference point of a screen of the terminal, a first axis of the target coordinate system is parallel to the first edge, a second axis of the target coordinate system is parallel to a second edge of the screen, and the second edge is perpendicular to the first edge.
Optionally, the inclination angle θ satisfies:
wherein gx is the first acceleration component and gy is the second acceleration component.
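The formula itself appears in the original as an image and did not survive extraction. From the surrounding definitions (gx, gy, and the angle between the reference plane and the horizontal described for fig. 4), the relation is presumably of the form tan θ = gx / gy. A minimal sketch under that assumption, using `atan2` so that gy = 0 is handled gracefully; the exact form in the patent may differ:

```python
import math

def tilt_angle(gx: float, gy: float) -> float:
    """Tilt angle (radians) of the terminal's reference plane from the
    horizontal, presumed to satisfy tan(theta) = gx / gy.

    gx, gy: gravity components on the target coordinate system's
    first (X) and second (Y) axes, as read from the gravity sensor.
    """
    return math.atan2(gx, gy)

# Upright terminal: gravity lies entirely along the Y axis, so theta == 0.
upright = tilt_angle(0.0, 9.81)
# Tilted 45 degrees: gravity splits equally between the two axes.
tilted = tilt_angle(6.94, 6.94)
```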
Optionally, the corrected position (x1, y1) of any key point of the target object satisfies:
wherein, the (x, y) is the initial position of any key point, and the theta is the inclination angle.
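The correction formula is likewise an image in the original. Given an initial position (x, y) and the inclination angle θ, a standard 2D rotation is the natural reading; the sketch below assumes that form, and the sign convention is a guess rather than something taken from the patent:

```python
import math

def correct_keypoint(x: float, y: float, theta: float) -> tuple[float, float]:
    """Rotate an initial key-point position (x, y) by the tilt angle theta
    to compensate for the terminal's incline. Assumed form: a standard
    2D rotation; the rotation direction here is an assumption."""
    x1 = x * math.cos(theta) + y * math.sin(theta)
    y1 = -x * math.sin(theta) + y * math.cos(theta)
    return x1, y1
```

With θ = 0 the corrected position equals the initial position, as expected when the terminal is not placed obliquely.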
Optionally, the shooting of a target object through the camera in response to the sitting posture detection instruction to obtain a shot image includes:
in response to the sitting posture detection instruction, if it is determined that the included angle between the screen of the terminal and the horizontal plane is greater than an angle threshold, shooting the target object through the camera to obtain the shot image.
Optionally, after receiving the sitting posture detection instruction, the method further includes:
acquiring a third acceleration component of the gravitational acceleration of the terminal on a third axis of the target coordinate system through the gravity sensor, wherein the third axis is perpendicular to the screen of the terminal;
and if the third acceleration component is larger than a target threshold value, determining that an included angle between the screen of the terminal and the horizontal plane is larger than an angle threshold value.
Optionally, the method further includes:
and if the included angle between the screen of the terminal and the horizontal plane is not larger than the angle threshold value, sending second prompt information, wherein the second prompt information is used for prompting the adjustment of the placing state of the terminal.
Optionally, the detecting the sitting posture of the target object includes:
if the absolute value of the slope of a connecting line between the target position of a first target key point and the target position of a second target key point in the plurality of key points is greater than a slope threshold, determining that the sitting posture of the target object is a target sitting posture;
the target position is the initial position or the corrected position, and the first target key point and the second target key point are key points of two symmetrical parts of the target object.
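The slope test above can be sketched as follows; the choice of symmetric key points (for example the two shoulders) and the threshold value are illustrative assumptions:

```python
def is_target_posture_by_slope(p1: tuple[float, float],
                               p2: tuple[float, float],
                               slope_threshold: float) -> bool:
    """p1, p2: target positions of two symmetric key points (e.g. the
    shoulders). If the absolute slope of the line joining them exceeds
    the threshold, the body is leaning sideways, so the sitting posture
    is judged to be the (incorrect) target sitting posture."""
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2:  # vertical connecting line: slope is unbounded
        return True
    slope = (y2 - y1) / (x2 - x1)
    return abs(slope) > slope_threshold
```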
Optionally, the detecting the sitting posture of the target object includes:
if the distance between the target position of a third target key point and the target position of a fourth target key point in the plurality of key points in the height direction of the target object is not greater than a distance threshold, determining that the sitting posture of the target object is a target sitting posture;
the target position is the initial position or the corrected position, and the third target key point and the fourth target key point are key points of two positions in the height direction of the target object.
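The height-direction distance test can be sketched similarly; the choice of key points (for example an ear and a shoulder) and the threshold are illustrative assumptions:

```python
def is_target_posture_by_height_gap(p3: tuple[float, float],
                                    p4: tuple[float, float],
                                    distance_threshold: float) -> bool:
    """p3, p4: target positions of two key points at different heights on
    the body (e.g. an ear and a shoulder). If their separation along the
    height (y) axis is not greater than the distance threshold, the
    subject is likely hunched over, so the sitting posture is judged to
    be the target sitting posture."""
    gap = abs(p3[1] - p4[1])
    return gap <= distance_threshold
```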
In another aspect, a terminal is provided, which includes: the system comprises a camera, a processor and a gravity sensor;
the camera is used for responding to the sitting posture detection instruction and shooting the target object to obtain a shot image;
the processor is configured to:
determining initial positions of a plurality of key points of the target object in the shot image;
determining an inclination angle of a reference surface of the terminal relative to a horizontal plane based on the gravitational acceleration of the terminal detected by the gravity sensor, wherein the reference surface of the terminal is perpendicular to a screen of the terminal and is parallel to a first edge of the screen;
if the terminal is determined not to be placed obliquely based on the inclination angle, detecting the sitting posture of the target object based on the initial positions of the key points;
if the terminal is determined to be placed obliquely based on the inclination angle, detecting the sitting posture of the target object based on the corrected positions of the key points, wherein the corrected position of each key point is obtained by adjusting the initial position of the key point based on the inclination angle;
and if the sitting posture of the target object is the target sitting posture, sending first prompt information, wherein the first prompt information is used for prompting the target object to adjust the sitting posture.
In another aspect, a terminal is provided, including: a memory, a processor and a computer program stored on the memory, the processor implementing the sitting posture detection method of the above aspect when executing the computer program.
In yet another aspect, a computer-readable storage medium is provided, in which a computer program is stored, the computer program being loaded and executed by a processor to implement the sitting posture detecting method provided by the above aspect.
In yet another aspect, a computer program product comprising instructions is provided, which, when run on a computer, causes the computer to perform the sitting posture detection method provided in the above aspect.
The technical solutions provided by the embodiments of the present disclosure bring at least the following beneficial effects:
the terminal can determine the positions of a plurality of key points of a target object based on a shot image obtained by shooting the target object, and if the sitting posture of the target object is determined to be the target sitting posture based on the positions of the key points, first prompt information is sent out to remind the target object to adjust the sitting posture in time. Therefore, the functions of the terminal are effectively enriched. And if the terminal determines that the target object is placed in an inclined manner, the terminal can adjust the initial position of each key point based on the inclination angle of the terminal to obtain the corrected position of the key point, and determine whether the sitting posture of the target object is the target sitting posture based on the corrected positions of the key points. Therefore, the accuracy of detecting the sitting posture of the target object is effectively improved.
Drawings
To more clearly illustrate the technical solutions in the embodiments of the present disclosure, the drawings needed in the description of the embodiments are briefly introduced below. The drawings in the following description show only some embodiments of the present disclosure; those skilled in the art can obtain other drawings based on them without creative effort.
Fig. 1 is a schematic structural diagram of a terminal according to an embodiment of the present disclosure;
fig. 2 is a flowchart of a sitting posture detecting method provided by the embodiment of the disclosure;
fig. 3 is a flowchart of another sitting posture detecting method provided by the embodiment of the present disclosure;
FIG. 4 is a schematic diagram of a target coordinate system provided by embodiments of the present disclosure;
FIG. 5 is a diagram illustrating a second prompt message provided by an embodiment of the disclosure;
FIG. 6 is a schematic diagram of a first prompt message provided by an embodiment of the disclosure;
fig. 7 is a schematic structural diagram of another terminal according to an embodiment of the present disclosure;
fig. 8 is a block diagram of a software structure of a terminal according to an embodiment of the present disclosure.
Detailed Description
To make the objects, technical solutions and advantages of the present disclosure more apparent, embodiments of the present disclosure will be described in detail with reference to the accompanying drawings.
The disclosed embodiment provides a terminal, which may include a processor 10, a camera 20, and a gravity sensor 30, the processor 10 being connected to the camera 20 and the gravity sensor 30, respectively. A camera 20 may be provided on the front of the terminal, on the back, or on both the front and the back. The terminal may be any device equipped with a camera and a gravity sensor, such as a tablet computer or a mobile phone. Fig. 1 illustrates the terminal as a mobile phone with a camera 20 on its front.
Fig. 2 is a flowchart of a sitting posture detecting method provided by an embodiment of the present disclosure, which may be applied to the processor 10 in the terminal shown in fig. 1. As shown in fig. 2, the method may include:
Step 201, shooting the target object through the camera in response to the sitting posture detection instruction to obtain a shot image.
In the embodiment of the disclosure, the processor may respond to the sitting posture detection instruction and shoot the target object through the camera to obtain the shot image. The sitting posture detection instruction may be a turn-on instruction for the camera. Optionally, a camera application may be installed in the terminal, and the processor may determine that a start instruction for the camera is detected after detecting a start instruction for the camera application.
Step 202, determining initial positions of a plurality of key points of the target object in the shot image.
In the embodiment of the disclosure, after shooting the target object through the camera to obtain the shot image, the processor can determine the initial positions of a plurality of key points of the target object in the shot image.
Wherein the target object may be a person, and the plurality of key points of the target object may include at least two key parts of the target object. For example, the plurality of key points of the target object may include key parts of the target object such as eyes, nose, mouth, ears, and shoulders.
And step 203, determining the inclination angle of the reference plane of the terminal relative to the horizontal plane based on the gravity acceleration of the terminal detected by the gravity sensor.
After detecting the sitting posture detection instruction, the processor can also determine the inclination angle of the reference surface of the terminal relative to the horizontal plane based on the gravity acceleration of the terminal detected by the gravity sensor. Wherein the reference plane of the terminal is perpendicular to the screen of the terminal and parallel to the first side of the screen. Alternatively, the first side of the screen may be a short side of the screen, or may also be a long side of the screen.
And 204, if the terminal is determined not to be placed obliquely based on the inclination angle, detecting the sitting posture of the target object based on the initial positions of the key points.
In the embodiment of the disclosure, the processor may detect whether the terminal is placed obliquely based on the inclination angle. If the terminal is not placed obliquely, the sitting posture of the target object can be detected based on the initial positions of the key points.
And step 205, if the terminal is determined to be placed obliquely based on the inclination angle, detecting the sitting posture of the target object based on the corrected positions of the key points.
In the embodiment of the disclosure, if the processor determines that the terminal is placed obliquely, it may be determined that there is a large error in the sitting posture of the target object detected by using the initial positions of the plurality of key points in this case. Therefore, the processor can adjust the initial position of each key point based on the inclination angle, obtain the corrected position of each key point, and detect whether the sitting posture of the target object is the target sitting posture based on the corrected positions of the plurality of key points. Thereby ensuring the accuracy of detecting whether the sitting posture of the target object is the target sitting posture.
And step 206, if the sitting posture of the target object is the target sitting posture, sending out a first prompt message.
After determining the sitting posture of the target object, the processor may send out the first prompt information if it determines that the sitting posture of the target object is the target sitting posture, the first prompt information being used for prompting the target object to adjust the sitting posture. If it determines that the sitting posture of the target object is not the target sitting posture, the processor may return to step 201, shoot the target object through the camera again to obtain a new shot image, and resume execution from determining the initial positions of the key points of the target object in that image.
Wherein the target sitting posture is an incorrect sitting posture pre-stored in the processor. Optionally, the first prompt message may include a text prompt message and/or a voice prompt message, and if the first prompt message includes a text prompt message, the processor may display the text prompt message on a screen of the terminal, for example, the text prompt message may be "your sitting posture is incorrect, please adjust your sitting posture in time". If the first prompt message includes a voice prompt message, the processor may play the voice prompt message.
To sum up, the embodiment of the present disclosure provides a sitting posture detection method, where a terminal may determine positions of a plurality of key points of a target object based on a captured image obtained by capturing the target object, and send a first prompt message to remind the target object of adjusting a sitting posture in time if the sitting posture of the target object is determined to be a target sitting posture based on the positions of the plurality of key points. Therefore, the functions of the terminal are effectively enriched.
Moreover, if the terminal determines that it is itself placed obliquely, it can adjust the initial position of each key point based on its inclination angle to obtain the corrected position of the key point, and determine whether the sitting posture of the target object is the target sitting posture based on the corrected positions. The accuracy of detecting the sitting posture of the target object is thereby effectively improved.
Fig. 3 is a flowchart of another sitting posture detecting method provided by the embodiment of the present disclosure, which can be applied to the processor 10 in the terminal shown in fig. 1. As shown in fig. 3, the method may include:
Step 301, detecting, in response to the sitting posture detection instruction, whether an included angle between the screen of the terminal and the horizontal plane is greater than a first angle threshold.
The processor may detect, in response to the sitting posture detection instruction, whether the included angle between the screen of the terminal and the horizontal plane is greater than a first angle threshold. If the included angle is not greater than (i.e., less than or equal to) the first angle threshold, the processor may determine that the included angle is small. In this case, the screen of the terminal is close to parallel with the horizontal plane and the camera in the terminal cannot photograph the target object, so the processor may perform step 302.
If the included angle between the screen of the terminal and the horizontal plane is greater than the first angle threshold, the processor may determine that the screen is sufficiently upright for the camera in the terminal to photograph the target object, so the processor may perform step 303.
Wherein the target object may be a person. The sitting posture detection instruction may be a turn-on instruction for the camera. Optionally, a camera application may be installed in the terminal, and the processor may determine that a start instruction for the camera is detected after detecting a start instruction for the camera application.
In the embodiment of the disclosure, in the process of detecting whether the included angle between the screen of the terminal and the horizontal plane is greater than the first angle threshold, the processor may obtain, through the gravity sensor and in response to the sitting posture detection instruction, the third acceleration component of the gravitational acceleration of the terminal on the third axis of the target coordinate system. If the third acceleration component is not greater than (i.e., less than or equal to) the target threshold, the processor may determine that the included angle between the screen of the terminal and the horizontal plane is not greater than the first angle threshold. If the third acceleration component is greater than the target threshold, the processor may determine that the included angle between the screen of the terminal and the horizontal plane is greater than the first angle threshold.
The target threshold may be a fixed value pre-stored in the processor. The origin of the target coordinate system XYZ is a reference point of the screen of the terminal, which may be any point on the screen of the terminal. For example, referring to fig. 4, the reference point of the terminal may be a center point O of the screen of the terminal. The first axis X of the target coordinate system is parallel to the first side of the screen of the terminal, the second axis Y of the target coordinate system is parallel to the second side of the screen, and the third axis Z of the target coordinate system is perpendicular to the screen of the terminal. The second side is perpendicular to the first side, and optionally, the first side of the screen may be a short side of the screen, and the second side may be a long side of the screen. Or the first side of the screen may be a long side of the screen and the second side may be a short side of the screen.
And step 302, sending out the second prompt information.
If the processor determines that the included angle between the screen of the terminal and the horizontal plane is not larger than the first angle threshold value, second prompt information can be sent out, and the second prompt information is used for prompting the adjustment of the placing state of the terminal.
The second prompt message may include a text prompt message and/or a voice prompt message, and if the second prompt message is a text prompt message, the processor may display the text prompt message on a screen of the terminal. If the second prompt message is a voice prompt message, the processor can play the voice prompt message.
Assuming that the second prompt message is a text prompt message, referring to fig. 5, the processor may display the text prompt message on a screen of the terminal, and the text prompt message 001 may be "the camera cannot shoot you, please adjust the placement state of the terminal".
It can be understood that, while the target object adjusts the placement state of the terminal, the processor may detect in real time whether the included angle between the screen of the terminal and the horizontal plane is greater than the first angle threshold, and may perform step 303 after determining that the included angle is greater than the first angle threshold.
In the embodiment of the present disclosure, it is assumed that both the front and the back of the terminal are provided with cameras, that is, the terminal is provided with a front camera and a rear camera. If the processor responds to the sitting posture detection instruction, the front camera is started, but the target object is located on one side of the rear camera, and at the moment, even if the included angle between the screen of the terminal and the horizontal plane is larger than the first angle threshold value, the camera cannot acquire the image of the target object. Therefore, the processor can detect whether the camera acquires the image of the target object after detecting whether the included angle between the screen of the terminal and the horizontal plane is larger than the first angle threshold value. If the included angle between the screen of the terminal and the horizontal plane is not larger than the first angle threshold value, and/or the camera does not acquire the image of the target object, the processor can send out second prompt information. If the included angle between the screen of the terminal and the horizontal plane is greater than the first angle threshold and the camera acquires the image of the target object, the processor may execute step 303. Therefore, the camera can effectively acquire the image of the target object.
And step 303, shooting the target object through the camera to obtain a shot image.
If the processor determines that the included angle between the screen of the terminal and the horizontal plane is larger than the first angle threshold value, the target object can be shot through the camera to obtain a shot image.
And step 304, determining initial positions of a plurality of key points of the target object in the shot image.
The processor can determine initial positions of a plurality of key points of the target object in the shot image after the shot image is obtained by shooting the target object through the camera. The plurality of key points of the target object may be a plurality of key parts of the target object, for example, the plurality of key points of the target object may include a plurality of key parts of the target object, such as eyes, nose, mouth, ears, shoulders, elbows and hands.
In the embodiment of the disclosure, the processor may detect a plurality of key points of the target object in the captured image by using the limb key point detection model, determine the position of each key point in the image coordinate system by using the limb key point detection model, and finally determine the position of each key point in the image coordinate system as the initial position of the key point.
The limb key point detection model may be a MediaPipe model; MediaPipe is a cross-platform machine learning framework for multimedia applications. The origin of the image coordinate system is a reference point of the shot image; for example, the origin is the center point of the shot image, the horizontal axis of the image coordinate system is parallel to a first side of the shot image, the vertical axis is parallel to a second side of the shot image, and the first side and the second side of the shot image are perpendicular. It will be appreciated that the image coordinate system may coincide with the two-dimensional coordinate system formed by the first axis and the second axis of the target coordinate system. Therefore, the initial position of each key point of the target object is also the initial position of that key point in the target coordinate system.
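MediaPipe Pose reports landmark coordinates normalized to [0, 1] with the origin at the image's top-left corner and y increasing downward. Mapping such a landmark into the centered image coordinate system described above might look like the following; this conversion is a sketch based on MediaPipe's documented convention, not something specified in the patent:

```python
def to_centered_coords(nx: float, ny: float,
                       width: int, height: int) -> tuple[float, float]:
    """Map a landmark normalized to [0, 1] (origin at the image's top-left,
    y increasing downward, as MediaPipe reports) into a coordinate system
    whose origin is the image center and whose y axis increases upward."""
    x = nx * width - width / 2.0
    y = height / 2.0 - ny * height
    return x, y

# The image center maps to the origin of the centered system.
center = to_centered_coords(0.5, 0.5, 640, 480)  # (0.0, 0.0)
```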
Step 305: determine, based on the gravitational acceleration of the terminal detected by the gravity sensor, the inclination angle of the reference plane of the terminal relative to the horizontal plane.
After determining the initial positions of the plurality of key points of the target object in the captured image, the processor may determine the inclination angle of the reference plane of the terminal with respect to the horizontal plane based on the gravitational acceleration of the terminal detected by the gravity sensor. The reference plane of the terminal is perpendicular to the screen of the terminal and parallel to a first side of the screen, where the first side may be a long side or a short side of the screen.
Alternatively, the processor may obtain, through the gravity sensor, a first acceleration component gx of the terminal's gravitational acceleration on the first axis X and a second acceleration component gy on the second axis Y of the target coordinate system XYZ. Referring to fig. 4, the processor may then determine the inclination angle θ of the reference plane of the terminal with respect to the horizontal plane based on the first acceleration component gx and the second acceleration component gy, where the inclination angle θ may satisfy: θ = arctan(gx / gy).
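A minimal sketch of this step, assuming the relation tan θ = gx / gy between the tilt angle and the two gravity components (atan2 is used so a zero gy does not divide by zero; the function name is illustrative):

```python
import math

def tilt_angle_deg(gx, gy):
    """Inclination of the terminal's reference plane relative to the
    horizontal plane, in degrees, computed from the gravity components
    on the target coordinate system's X and Y axes.
    Assumes tan(theta) = gx / gy."""
    return math.degrees(math.atan2(abs(gx), abs(gy)))
```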
It will be appreciated that if the first side of the screen is the short side of the screen, the inclination angle θ of the reference plane of the terminal with respect to the horizontal plane is the angle between the first axis X of the target coordinate system XYZ and the first axis X1 of the plumb coordinate system X1Y1. If the first side of the screen is the long side of the screen, the inclination angle θ of the reference plane of the terminal with respect to the horizontal plane is the angle between the second axis Y of the target coordinate system XYZ and the first axis X1 of the plumb coordinate system X1Y1.
The origin of the plumb coordinate system X1Y1 may be a reference point of the screen of the terminal; for example, referring to fig. 4, it may be the center point of the screen. The first axis X1 of the plumb coordinate system X1Y1 is parallel to the horizontal plane, and the second axis Y1 is perpendicular to the horizontal plane.
Step 306: detect, based on the inclination angle, whether the terminal is placed obliquely.
After determining the inclination angle of the reference plane of the terminal with respect to the horizontal plane, the processor may detect, based on the inclination angle, whether the terminal is placed obliquely. If the terminal is not placed obliquely, the processor may execute step 307; if the terminal is placed obliquely, the processor may execute step 309.
If the first edge of the screen is the short edge of the screen, the processor may detect whether the inclination angle is greater than a second angle threshold. If the inclination angle is not greater than (i.e., less than or equal to) the second angle threshold, the processor may determine that the terminal is not placed obliquely, and may therefore perform step 307. If the inclination angle is greater than the second angle threshold, the processor may determine that the terminal is placed obliquely; in this case there would be a large error in detecting the sitting posture of the target object using the initial positions of the plurality of key points, so the processor may perform step 309. The second angle threshold may be a fixed angle pre-stored in the processor.
If the first edge of the screen is the long edge of the screen, the processor may determine that the terminal is not placed obliquely if the inclination angle is greater than the second angle threshold, and that the terminal is placed obliquely if the inclination angle is not greater than (i.e., less than or equal to) the second angle threshold.
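The two cases above can be sketched as a single decision function (argument names are illustrative):

```python
def is_placed_obliquely(tilt_deg, first_edge_is_short, second_angle_threshold):
    """Decide whether the terminal is placed obliquely.  When the first
    edge is the short edge, the tilt is measured from the horizontal, so
    a large tilt means oblique; when it is the long edge, the tilt is
    measured from the other axis, so a large tilt means upright."""
    if first_edge_is_short:
        return tilt_deg > second_angle_threshold
    return tilt_deg <= second_angle_threshold
```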
Step 307: detect, based on the initial positions of the plurality of key points, whether the sitting posture of the target object is the target sitting posture.
If the processor determines that the terminal is not placed obliquely, it may detect, based on the initial positions of the key points, whether the sitting posture of the target object is the target sitting posture. If the sitting posture of the target object is determined to be the target sitting posture, the processor may perform step 308; if not, the process may continue from step 303.
In an optional implementation manner of the embodiment of the present disclosure, after determining that the terminal is not placed obliquely, the processor may determine a first target keypoint and a second target keypoint from the plurality of keypoints, and may determine the absolute value of the slope of the line connecting the target position of the first target keypoint and the target position of the second target keypoint. If the absolute value of the slope is greater than a slope threshold, the processor may determine that the sitting posture of the target object is the target sitting posture; otherwise, it may determine that the sitting posture is not the target sitting posture.
The target position may be the initial position, and the slope threshold may be a fixed value pre-stored in the processor. The absolute value k of the slope of the line connecting the target position (x11, y11) of the first target keypoint and the target position (x22, y22) of the second target keypoint may satisfy: k = |(y22 - y11) / (x22 - x11)|.
the first target keypoint and the second target keypoint are keypoints of two symmetrical parts of the target object. For example, the first target keypoint and the second target keypoint are both keypoints of an eye of the target object, and are respectively a keypoint of a left eye and a keypoint of a right eye of the target object. Or, the first target keypoint and the second target keypoint are both keypoints of ears of the target object, and are respectively keypoints of a left ear and a right ear of the target object. Alternatively, the first target keypoint and the second target keypoint may both be keypoints of shoulders of the target object, and are keypoints of a left shoulder and a right shoulder of the target object, respectively.
Assume that the first target keypoint and the second target keypoint are the keypoints of the left eye and the right eye of the target object, respectively, or the keypoints of its left ear and right ear. If the processor determines that the absolute value of the slope of the line connecting the initial position of the first target keypoint and the initial position of the second target keypoint is greater than the slope threshold, it may determine that the head of the target object is excessively tilted, and may accordingly determine that the sitting posture of the target object is the target sitting posture.
Assume that the first target keypoint and the second target keypoint are the keypoints of the left shoulder and the right shoulder of the target object, respectively. If the processor determines that the absolute value of the slope of the line connecting their initial positions is greater than the slope threshold, it may determine that the body of the target object is excessively tilted, and may therefore determine that the sitting posture of the target object is the target sitting posture.
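The slope test described in the last few paragraphs can be sketched as follows (the handling of a vertical connecting line is an illustrative assumption):

```python
def is_excessively_tilted(p1, p2, slope_threshold):
    """Return True when the absolute slope of the line connecting two
    symmetric keypoints (left/right eyes, ears or shoulders) exceeds the
    slope threshold, i.e. the head or body is excessively tilted."""
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2:
        return True  # vertical connecting line: treat as maximal tilt
    return abs((y2 - y1) / (x2 - x1)) > slope_threshold
```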
In another optional implementation manner of the embodiment of the present disclosure, the processor may determine a third target keypoint and a fourth target keypoint from the plurality of keypoints, and may determine the distance between the target position of the third target keypoint and the target position of the fourth target keypoint in the height direction of the target object. If the distance is not greater than a distance threshold, the processor may determine that the sitting posture of the target object is the target sitting posture; if the distance is greater than the distance threshold, it may determine that the sitting posture is not the target sitting posture.
The target position here is the initial position, and the distance threshold may be a fixed distance pre-stored in the processor. The third target keypoint and the fourth target keypoint may be keypoints of two parts at different heights of the target object. For example, the third target keypoint may be a keypoint of an eye of the target object and the fourth target keypoint may be a keypoint of a shoulder of the target object. Alternatively, the third target keypoint may be a keypoint of an ear of the target object and the fourth target keypoint may be a keypoint of a shoulder of the target object. Alternatively, the third target keypoint may be the center point of the keypoints of the two eyes, the keypoint of the nose and the keypoints of the two ears of the target object, and the fourth target keypoint may be a keypoint of a shoulder of the target object.
Assume that the third target keypoint is a keypoint of an eye of the target object and the fourth target keypoint is a keypoint of a shoulder of the target object. If the processor determines that the distance between the initial position of the third target keypoint and the initial position of the fourth target keypoint in the height direction of the target object is not greater than the distance threshold, it may determine that the head of the target object is excessively lowered, and may therefore determine that the sitting posture of the target object is the target sitting posture.
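The height-distance test can be sketched as follows (taking the height direction of the target object as the y axis of the keypoint coordinate system is an assumption for illustration):

```python
def is_head_lowered(eye_pos, shoulder_pos, distance_threshold):
    """Return True when the eye (or ear) keypoint is no more than
    distance_threshold above the shoulder keypoint along the height
    direction, i.e. the head is excessively lowered."""
    height_gap = abs(eye_pos[1] - shoulder_pos[1])
    return height_gap <= distance_threshold
```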
Step 308: send first prompt information.
The processor can send out first prompt information if the sitting posture of the target object is determined to be the target sitting posture based on the initial positions of the key points. The first prompt message is used for prompting the target object to adjust the sitting posture, and the first prompt message may include a text prompt message and/or a voice prompt message.
If the first prompt message includes a text prompt message, the processor may display the text prompt message on a screen of the terminal. If the first prompt message includes a voice prompt message, the processor may play the voice prompt message.
Assuming that the processor determines that the head of the target object is excessively inclined and the first prompt message is a text prompt message, referring to fig. 6, the processor may display the text prompt message 002 on the screen of the terminal, and the text prompt message 002 may be "your head is excessively inclined, please adjust your sitting posture in time".
Assuming that the processor determines that the body of the target object is excessively inclined and the first prompt message is a text prompt message, the processor may display the text prompt message on a screen of the terminal, where the text prompt message may be "your body is excessively inclined, please adjust your sitting posture in time".
Assuming that the processor determines that the head of the target object is excessively lowered and the first prompt message is a text prompt message, the processor may display the text prompt message on the screen of the terminal, where the text prompt message may be "your head is lowered too much, please adjust your sitting posture in time".
Step 309: determine the corrected position of each key point based on the inclination angle and the initial position of the key point. If the processor determines, based on the inclination angle, that the terminal is placed obliquely, it may determine a corrected position (x1, y1) for each keypoint based on the inclination angle θ and the initial position (x, y) of the keypoint.
Wherein the corrected position (x1, y1) of any key point of the target object in the plumb coordinate system X1Y1 may satisfy: x1 = x·cos θ + y·sin θ, y1 = y·cos θ - x·sin θ.
It is understood that the corrected position of a keypoint in the embodiments of the present disclosure refers to its corrected position in the plumb coordinate system.
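Assuming the correction is the standard two-dimensional rotation by the tilt angle θ from the target coordinate system into the plumb coordinate system (the patent's formula image is not reproduced here, so the rotation direction is an assumption), the step can be sketched as:

```python
import math

def corrected_position(x, y, theta_deg):
    """Rotate an initial keypoint position (x, y) in the target
    coordinate system by the tilt angle theta to obtain its corrected
    position (x1, y1) in the plumb coordinate system X1Y1."""
    t = math.radians(theta_deg)
    x1 = x * math.cos(t) + y * math.sin(t)
    y1 = y * math.cos(t) - x * math.sin(t)
    return (x1, y1)
```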
Step 310: detect, based on the corrected positions of the plurality of key points, whether the sitting posture of the target object is the target sitting posture.
After determining the corrected positions of the key points of the target object, the processor may detect, based on these corrected positions, whether the sitting posture of the target object is the target sitting posture. If the sitting posture of the target object is determined to be the target sitting posture, the processor may perform step 308; otherwise, the processor may start again from step 303.
In the process of detecting whether the sitting posture of the target object is the target sitting posture based on the corrected positions of the plurality of key points, in an optional implementation manner of the embodiment of the disclosure, after determining the corrected position of each key point, the processor may determine a first target keypoint and a second target keypoint from the plurality of keypoints, and may determine the absolute value of the slope of the line connecting the target position of the first target keypoint and the target position of the second target keypoint. If the absolute value of the slope is greater than the slope threshold, the processor may determine that the sitting posture of the target object is the target sitting posture; otherwise, it may determine that the sitting posture is not the target sitting posture. The target position here is the corrected position.
Assume that the first target keypoint and the second target keypoint are the keypoints of the left eye and the right eye of the target object, respectively, or the keypoints of its left ear and right ear. If the processor determines that the absolute value of the slope of the line connecting the corrected position of the first target keypoint and the corrected position of the second target keypoint is greater than the slope threshold, it may determine that the head of the target object is excessively tilted, and may therefore determine that the sitting posture of the target object is the target sitting posture.
Assume that the first target keypoint and the second target keypoint are the keypoints of the left shoulder and the right shoulder of the target object, respectively. If the processor determines that the absolute value of the slope of the line connecting the corrected positions of the first target keypoint and the second target keypoint is greater than the slope threshold, it may determine that the body of the target object is excessively tilted, and may therefore determine that the sitting posture of the target object is the target sitting posture.
In another optional implementation manner of the embodiment of the present disclosure, the processor may determine a third target keypoint and a fourth target keypoint from the plurality of keypoints, and may determine the distance between the target position of the third target keypoint and the target position of the fourth target keypoint in the height direction of the target object. If the distance is not greater than the distance threshold, the processor may determine that the sitting posture of the target object is the target sitting posture; otherwise, it may determine that the sitting posture is not the target sitting posture. The target position here is the corrected position.
Assume that the third target keypoint is a keypoint of an eye of the target object and the fourth target keypoint is a keypoint of a shoulder of the target object. If the processor determines that the distance between the corrected position of the third target keypoint and the corrected position of the fourth target keypoint in the height direction of the target object is not greater than the distance threshold, it may determine that the head of the target object is excessively lowered, and may therefore determine that the sitting posture of the target object is the target sitting posture.
In the embodiment of the disclosure, if the processor determines that the terminal is placed obliquely, detecting the sitting posture of the target object using the initial positions of the plurality of key points would, in this case, have a large error. The processor may therefore determine the corrected position of each key point based on the inclination angle and the initial position of the key point, and detect whether the sitting posture of the target object is the target sitting posture based on the corrected positions of the plurality of key points, thereby ensuring the accuracy of the detection.
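Steps 306 through 310 can be combined into one sketch (keypoint names, default thresholds and the rotation direction are illustrative assumptions, not values from the patent):

```python
import math

def detect_target_sitting_posture(keypoints, tilt_deg, placed_obliquely,
                                  slope_threshold=0.3,
                                  distance_threshold=50.0):
    """Return True when the target sitting posture (a bad posture) is
    detected: correct the keypoint positions when the terminal is placed
    obliquely, then run the shoulder slope test and the eye-to-shoulder
    height-distance test on the (possibly corrected) positions."""
    def rotate(p):
        t = math.radians(tilt_deg)
        x, y = p
        return (x * math.cos(t) + y * math.sin(t),
                y * math.cos(t) - x * math.sin(t))

    pts = {name: (rotate(p) if placed_obliquely else p)
           for name, p in keypoints.items()}

    (xl, yl) = pts["left_shoulder"]
    (xr, yr) = pts["right_shoulder"]
    # Shoulder slope test; a vertical line counts as maximal tilt.
    tilted = xl == xr or abs((yr - yl) / (xr - xl)) > slope_threshold
    # Eye-to-shoulder height gap test along the assumed height axis (y).
    lowered = abs(pts["left_eye"][1] - pts["left_shoulder"][1]) <= distance_threshold
    return tilted or lowered
```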
It should be noted that the sequence of the steps of the sitting posture detecting method provided by the embodiment of the present disclosure may be appropriately adjusted, and the steps may also be deleted according to the situation, for example, the step 302 may be deleted according to the situation. Any method that can be easily conceived by a person skilled in the art within the technical scope disclosed in the present application is covered by the protection scope of the present application, and thus the detailed description thereof is omitted.
To sum up, the embodiment of the present disclosure provides a sitting posture detection method, in which a terminal may determine the positions of a plurality of key points of a target object based on a captured image obtained by capturing the target object, and send first prompt information to remind the target object to adjust its sitting posture in time if the sitting posture of the target object is determined to be the target sitting posture based on the positions of the plurality of key points. The functions of the terminal are thereby effectively enriched. Moreover, if the terminal determines that it is placed obliquely, it may adjust the initial position of each key point based on its inclination angle to obtain the corrected position of the key point, and determine whether the sitting posture of the target object is the target sitting posture based on the corrected positions of the key points. The accuracy of detecting the sitting posture of the target object is thereby effectively improved.
By adopting the terminal, the sitting posture of the target object can be monitored in real time, and the target object can be reminded in time when the sitting posture of the target object is the target sitting posture. Moreover, the target object does not need to additionally wear a special sitting posture detection device, and the use by a user is facilitated.
The disclosed embodiment provides a terminal, which may include a processor 10, a camera 20, and a gravity sensor 30, as shown in fig. 1.
The camera 20 is configured to capture the target object in response to a sitting posture detection instruction to obtain a captured image.
The processor 10 is configured to:
initial positions of a plurality of key points of a target object in a captured image are determined.
Based on the acceleration of gravity of the terminal detected by the gravity sensor 30, the tilt angle of the reference plane of the terminal with respect to the horizontal plane is determined, the reference plane of the terminal being perpendicular to the screen of the terminal and parallel to the first side of the screen.
And if the terminal is determined not to be placed obliquely based on the inclination angle, detecting the sitting posture of the target object based on the initial positions of the key points.
And if the terminal is determined to be obliquely placed based on the inclination angle, detecting the sitting posture of the target object based on the corrected positions of the plurality of key points, wherein the corrected position of each key point is obtained by adjusting the initial position of the key point based on the inclination angle.
And if the sitting posture of the target object is the target sitting posture, sending first prompt information, wherein the first prompt information is used for prompting the target object to adjust the sitting posture.
To sum up, the embodiment of the present disclosure provides a terminal that may determine the positions of a plurality of key points of a target object based on a captured image obtained by capturing the target object, and send first prompt information to remind the target object to adjust its sitting posture if the sitting posture of the target object is determined to be the target sitting posture based on the positions of the plurality of key points. The functions of the terminal are thereby effectively enriched. Moreover, if the terminal determines that it is placed obliquely, it may adjust the initial position of each key point based on its inclination angle to obtain the corrected position of the key point, and determine whether the sitting posture of the target object is the target sitting posture based on the corrected positions of the key points. The accuracy of detecting the sitting posture of the target object is thereby effectively improved.
Optionally, the processor 10 is configured to:
a first acceleration component of the gravity acceleration of the terminal on a first axis and a second acceleration component of the gravity acceleration of the terminal on a second axis of the target coordinate system are obtained through the gravity sensor.
Based on the first acceleration component and the second acceleration component, an inclination angle of a reference plane of the terminal with respect to a horizontal plane is determined.
The origin of the target coordinate system is a reference point of a screen of the terminal, a first axis of the target coordinate system is parallel to the first edge, a second axis of the target coordinate system is parallel to the second edge of the screen, and the second edge is perpendicular to the first edge.
Optionally, the inclination angle θ satisfies: θ = arctan(gx / gy),
wherein gx is the first acceleration component and gy is the second acceleration component.
Optionally, the corrected position (x1, y1) of any key point of the target object satisfies: x1 = x·cos θ + y·sin θ, y1 = y·cos θ - x·sin θ,
wherein, (x, y) is the initial position of any key point, and theta is the inclination angle.
Optionally, the processor 10 is configured to: in response to the sitting posture detection instruction, capture the target object through the camera to obtain a captured image if it is determined that the included angle between the screen of the terminal and the horizontal plane is greater than the angle threshold.
Optionally, the processor 10 is further configured to obtain, by the gravity sensor, a third acceleration component of the gravitational acceleration of the terminal on a third axis of the target coordinate system, where the third axis is perpendicular to the screen of the terminal.
And if the third acceleration component is larger than the target threshold, determining that the included angle between the screen of the terminal and the horizontal plane is larger than the angle threshold.
Optionally, the processor 10 is further configured to send a second prompt message if it is determined that an included angle between the screen of the terminal and the horizontal plane is not greater than an angle threshold, where the second prompt message is used to prompt to adjust the placement state of the terminal.
Optionally, the processor 10 is further configured to determine that the sitting posture of the target object is the target sitting posture if an absolute value of a slope of a connection line between the target position of the first target key point and the target position of the second target key point in the plurality of key points is greater than a slope threshold;
the target position is an initial position or a corrected position, and the first target key point and the second target key point are key points of two symmetrical parts of the target object.
Optionally, the processor 10 is further configured to determine that the sitting posture of the target object is the target sitting posture if a distance between the target position of the third target key point in the plurality of key points and the target position of the fourth target key point in the height direction of the target object is not greater than a distance threshold.
The target position is an initial position or a corrected position, and the third target key point and the fourth target key point are key points of two positions in the height direction of the target object.
To sum up, the embodiment of the present disclosure provides a terminal that may determine the positions of a plurality of key points of a target object based on a captured image obtained by capturing the target object, and send first prompt information to remind the target object to adjust its sitting posture if the sitting posture of the target object is determined to be the target sitting posture based on the positions of the plurality of key points. The functions of the terminal are thereby effectively enriched. Moreover, if the terminal determines that it is placed obliquely, it may adjust the initial position of each key point based on its inclination angle to obtain the corrected position of the key point, and determine whether the sitting posture of the target object is the target sitting posture based on the corrected positions of the key points. The accuracy of detecting the sitting posture of the target object is thereby effectively improved.
Fig. 7 is a schematic structural diagram of another terminal provided in an embodiment of the present disclosure, as shown in fig. 7, the terminal further includes: a display unit 130, a memory 140, a Radio Frequency (RF) circuit 150, an audio circuit 160, a wireless fidelity (Wi-Fi) module 170, a bluetooth module 180, a power supply 190, and the like.
The camera 20 may be used to capture still pictures or video. The object generates an optical image through the lens, which is projected onto the photosensitive element. The photosensitive element may be a charge-coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal, which is then passed to the processor 10 to be converted into a digital image signal.
The processor 10 is the control center of the terminal; it connects the various parts of the entire terminal using various interfaces and lines, and performs the various functions of the terminal and processes data by running or executing software programs stored in the memory 140 and calling data stored in the memory 140. In some embodiments, the processor 10 may include one or more processing units; the processor 10 may also integrate an application processor, which primarily handles the operating system, user interfaces and applications, and a baseband processor, which primarily handles wireless communications. It will be appreciated that the baseband processor may not be integrated into the processor 10. The processor 10 in the present application may run the operating system, the sitting posture detection method described in the embodiments of the present disclosure, application programs, user interface display, and touch response. In addition, the processor 10 is coupled with an input unit and the display unit 130.
The display unit 130 may be used to receive input numeric or character information and generate signal inputs related to user settings and function control of the terminal, and optionally, the display unit 130 may also be used to display information input by the user or information provided to the user and a Graphical User Interface (GUI) of various menus of the terminal. The display unit 130 may include a display screen 131 disposed at the front of the terminal. The display screen 131 may be configured in the form of a liquid crystal display, a light emitting diode, or the like. The display unit 130 may be used to display various graphical user interfaces described herein.
The display unit 130 includes: a display screen 131 and a touch screen 132 disposed on the front of the terminal. The display screen 131 may be used to display preview pictures. Touch screen 132 may collect touch operations on or near by the user, such as clicking a button, dragging a scroll box, and the like. The touch screen 132 may be covered on the display screen 131, or the touch screen 132 and the display screen 131 may be integrated to implement the input and output functions of the terminal, and after the integration, the touch screen may be referred to as a touch display screen for short.
The RF circuit 150 may be used for receiving and transmitting signals during information transmission and reception or during a call, and may receive downlink data of a base station and then send the downlink data to the processor 10 for processing; the uplink data may be transmitted to the base station. In general, RF circuitry includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like.
The audio circuit 160, the speaker 161 and the microphone 162 may provide an audio interface between the user and the terminal. The audio circuit 160 may transmit the electrical signal converted from the received audio data to the speaker 161, which converts the electrical signal into a sound signal for output. The terminal may further be provided with a volume button for adjusting the volume of the sound signal. In the other direction, the microphone 162 converts the collected sound signal into an electrical signal, which is received by the audio circuit 160 and converted into audio data; the audio data is then output to the RF circuit 150 to be transmitted to, for example, another terminal, or output to the memory 140 for further processing. In this application, the microphone 162 may capture the voice of the user.
Wi-Fi belongs to a short-distance wireless transmission technology, and a terminal can help a user to receive and send e-mails, browse webpages, access streaming media and the like through a Wi-Fi module 170, and provides wireless broadband internet access for the user.
And the Bluetooth module 180 is used for performing information interaction with other Bluetooth devices with Bluetooth modules through a Bluetooth protocol. For example, the terminal may establish a bluetooth connection with a wearable electronic device (e.g., a smart watch) that is also equipped with a bluetooth module through the bluetooth module 180, so as to perform data interaction.
The terminal also includes a power supply 190 (such as a battery) to power the various components. The power supply may be logically coupled to the processor 10 through a power management system to manage charging, discharging, and power consumption functions through the power management system. The terminal can also be provided with a power button for starting and shutting down the terminal, locking the screen and the like.
The terminal may further include at least one sensor 1110, such as a motion sensor 11101, a distance sensor 11102, and a fingerprint sensor 11103. The terminal may also be equipped with other sensors such as gyroscopes, barometers, hygrometers, thermometers, and infrared sensors.
Fig. 8 is a block diagram of a software structure of a terminal according to an embodiment of the present disclosure. The layered architecture divides the software into several layers, each with a clear role and division of labor, and the layers communicate with each other through software interfaces. In some embodiments, the Android system is divided into four layers: from top to bottom, an application layer, an application framework layer, the Android runtime and system libraries, and a kernel layer.
The application layer may include a series of application packages. As shown in fig. 8, the application package may include applications such as camera, gallery, calendar, phone call, map, navigation, WLAN, bluetooth, music, video, short message, etc. The application framework layer provides an Application Programming Interface (API) and a programming framework for the application program of the application layer. The application framework layer includes a number of predefined functions.
As shown in FIG. 8, the application framework layers may include a window manager, content provider, view system, phone manager, resource manager, notification manager, and the like.
The window manager is used to manage window programs. It can obtain the size of the display screen, determine whether there is a status bar, lock the screen, take screenshots, and so on.
The content provider is used to store and retrieve data and make it accessible to applications. The data may include video, pictures, audio, calls made and received, browsing history and bookmarks, phone books, etc.
The view system includes visual controls such as controls to display text, controls to display pictures, and the like. The view system may be used to build applications. The display interface may be composed of one or more views. For example, the display interface including the short message notification icon may include a view for displaying text and a view for displaying pictures.
The telephone manager is used to provide the communication functions of the terminal, such as management of call status (including connected, hung up, etc.).
The resource manager provides various resources for the application, such as localized strings, icons, pictures, layout files, video files, and the like.
The notification manager enables applications to display notification information in the status bar. It can be used to convey notification-type messages that disappear automatically after a brief pause without requiring user interaction, such as notifications of download completion or message alerts. A notification may also appear in the system's top status bar in the form of a chart or scrolling text (e.g., a notification from an application running in the background), or on the screen in the form of a dialog window. For example, text may be prompted in the status bar, an alert tone may sound, the terminal may vibrate, or an indicator light may flash.
The Android runtime comprises a core library and a virtual machine, and is responsible for scheduling and management of the Android system.
The core library comprises two parts: one part is the function libraries that the Java language needs to call, and the other part is the core libraries of Android.
The application layer and the application framework layer run in the virtual machine. The virtual machine executes the Java files of the application layer and the application framework layer as binary files, and performs functions such as object life-cycle management, stack management, thread management, security and exception management, and garbage collection.
The system library may include a plurality of functional modules, for example: a surface manager, media libraries, a three-dimensional graphics processing library (e.g., OpenGL ES), and a 2D graphics engine (e.g., SGL).
The surface manager is used to manage the display subsystem and provide fusion of 2D and 3D layers for multiple applications.
The media library supports playback and recording of a variety of commonly used audio and video formats, as well as still image files. It may support a variety of audio and video encoding formats, such as MPEG-4, H.264, MP3, AAC, AMR, JPG, and PNG.
The three-dimensional graphics processing library is used to implement three-dimensional graphics drawing, image rendering, compositing, layer processing, and the like.
The 2D graphics engine is a drawing engine for 2D drawing.
The kernel layer is the layer between hardware and software. It contains at least a display driver, a camera driver, an audio driver, and a sensor driver.
An embodiment of the present disclosure provides a terminal, including a memory, a processor 10, and a computer program stored in the memory, wherein the processor, when executing the computer program, implements the sitting posture detection method provided by the above embodiments, for example, the method shown in fig. 2 or fig. 3.
The embodiment of the present disclosure provides a computer-readable storage medium, in which a computer program is stored, and the computer program is loaded and executed by a processor to implement the sitting posture detection method provided in the above embodiment, for example, the method shown in fig. 2 or fig. 3.
The embodiments of the present disclosure provide a computer program product containing instructions, which when run on a computer, cause the computer to perform the sitting posture detection method provided by the above embodiments, for example, the method shown in fig. 2 or fig. 3.
It is to be understood that the terms "first," "second," and "third" in the disclosed embodiments are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. The term "plurality" in the embodiments of the present disclosure means two or more. The term "and/or" in the embodiments of the present disclosure describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may mean: A exists alone, both A and B exist, or B exists alone.
It will be understood by those skilled in the art that all or part of the steps of the above embodiments may be implemented by hardware, or by a program instructing relevant hardware; the program may be stored in a computer-readable storage medium, such as a read-only memory, a magnetic disk, or an optical disk.
The above description is only an exemplary embodiment of the present disclosure and is not intended to limit it; any modification, equivalent replacement, or improvement made within the spirit and principles of the present disclosure shall fall within its protection scope.
Claims (10)
1. A sitting posture detection method is applied to a terminal, and the terminal comprises: a camera and a gravity sensor; the method comprises the following steps:
responding to a sitting posture detection instruction, shooting a target object through the camera to obtain a shot image;
determining initial positions of a plurality of key points of the target object in the shot image;
determining an inclination angle of a reference surface of the terminal relative to a horizontal plane based on the gravitational acceleration of the terminal detected by the gravity sensor, wherein the reference surface of the terminal is perpendicular to a screen of the terminal and is parallel to a first edge of the screen;
if the terminal is determined not to be placed obliquely based on the inclination angle, detecting the sitting posture of the target object based on the initial positions of the key points;
if the terminal is determined to be placed obliquely based on the inclination angle, detecting the sitting posture of the target object based on the corrected positions of the key points, wherein the corrected position of each key point is obtained by adjusting the initial position of the key point based on the inclination angle;
and if the sitting posture of the target object is the target sitting posture, sending first prompt information, wherein the first prompt information is used for prompting the target object to adjust the sitting posture.
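The detection flow of claim 1 can be sketched as follows. The rotation-based correction, the tilt tolerance, the keypoint names, and the prompt strings are illustrative assumptions for this sketch, not the patent's mandated implementation.

```python
import math

def correct_keypoints(keypoints, tilt_deg):
    """Rotate each keypoint by -tilt_deg about the image origin to
    compensate for an obliquely placed terminal (assumed correction)."""
    t = math.radians(-tilt_deg)
    cos_t, sin_t = math.cos(t), math.sin(t)
    return {name: (x * cos_t - y * sin_t, x * sin_t + y * cos_t)
            for name, (x, y) in keypoints.items()}

def run_detection(keypoints, tilt_deg, is_target_posture, tolerance_deg=2.0):
    """Detect on corrected positions when the terminal is placed obliquely,
    otherwise on the initial positions; prompt when the target (incorrect)
    sitting posture is found."""
    if abs(tilt_deg) > tolerance_deg:   # terminal placed obliquely
        keypoints = correct_keypoints(keypoints, tilt_deg)
    if is_target_posture(keypoints):
        return "first prompt: please adjust your sitting posture"
    return "posture ok"
```

For example, `run_detection({"nose": (1.0, 2.0)}, tilt_deg=0.0, is_target_posture=lambda k: False)` skips the correction and reports that the posture is acceptable.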
2. The method according to claim 1, wherein the determining the tilt angle of the reference plane of the terminal with respect to the horizontal plane based on the acceleration of gravity of the terminal detected by the gravity sensor comprises:
acquiring a first acceleration component of the gravity acceleration of the terminal on a first axis and a second acceleration component of the gravity acceleration of the terminal on a second axis of a target coordinate system through the gravity sensor;
determining an inclination angle of a reference plane of the terminal with respect to a horizontal plane based on the first acceleration component and the second acceleration component;
the origin of the target coordinate system is a reference point of a screen of the terminal, a first axis of the target coordinate system is parallel to the first edge, a second axis of the target coordinate system is parallel to a second edge of the screen, and the second edge is perpendicular to the first edge.
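Under the target coordinate system defined above, one plausible way to recover the tilt of the reference plane from the two in-plane gravity components is the arctangent of their ratio; the claim does not fix the trigonometric form, so the relation below is an assumption.

```python
import math

def tilt_angle_deg(a_first, a_second):
    """Estimate the tilt of the reference plane relative to the horizontal
    from the gravity components along the screen's first and second axes
    (assumed relation: the angle whose tangent is a_first / a_second)."""
    return math.degrees(math.atan2(a_first, a_second))

# Gravity entirely along the second axis: terminal held upright, zero tilt.
print(tilt_angle_deg(0.0, 9.8))   # 0.0
```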
5. The method according to any one of claims 1 to 3, wherein the capturing a target object by the camera in response to a sitting posture detection instruction to obtain a captured image comprises:
responding to a sitting posture detection instruction, if the fact that the included angle between the screen of the terminal and the horizontal plane is larger than an angle threshold value is determined, shooting a target object through the camera to obtain a shot image.
6. The method of claim 5, wherein after receiving the sitting posture detection instruction, the method further comprises:
acquiring a third acceleration component of the gravitational acceleration of the terminal on a third axis of the target coordinate system through the gravity sensor, wherein the third axis is perpendicular to the screen of the terminal;
and if the third acceleration component is larger than a target threshold value, determining that an included angle between the screen of the terminal and the horizontal plane is larger than an angle threshold value.
7. The method of claim 5, further comprising:
and if the included angle between the screen of the terminal and the horizontal plane is not larger than the angle threshold value, sending second prompt information, wherein the second prompt information is used for prompting the adjustment of the placing state of the terminal.
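Claims 6 and 7 together amount to thresholding the gravity component on the axis perpendicular to the screen, and prompting the user when the check fails. The threshold value and return strings below are illustrative assumptions.

```python
def check_placement(a_third, target_threshold=5.0):
    """Per claim 6, a third-axis gravity component above the target
    threshold means the angle between the screen and the horizontal
    plane exceeds the angle threshold, so capture may proceed; per
    claim 7, otherwise prompt the user to adjust the terminal's
    placement. The threshold value is illustrative."""
    if a_third > target_threshold:
        return "capture image"
    return "second prompt: adjust the placement of the terminal"
```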
8. The method of any one of claims 1 to 3, wherein said detecting a sitting posture of said target subject comprises:
if the absolute value of the slope of a connecting line between the target position of a first target key point and the target position of a second target key point in the plurality of key points is greater than a slope threshold, determining that the sitting posture of the target object is a target sitting posture;
the target position is the initial position or the corrected position, and the first target key point and the second target key point are key points of two symmetrical parts of the target object.
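Claim 8's check can be sketched as a slope comparison between two symmetric keypoints; the ears are one plausible choice, and both the keypoint choice and the slope threshold below are illustrative assumptions.

```python
def is_head_tilted(p_left, p_right, slope_threshold=0.2):
    """Target posture when the absolute slope of the line joining two
    symmetric keypoints (e.g. the two ears) exceeds a threshold."""
    (x1, y1), (x2, y2) = p_left, p_right
    if x1 == x2:                   # vertical join line: maximal tilt
        return True
    return abs((y2 - y1) / (x2 - x1)) > slope_threshold

# Ears at the same height: slope 0, no tilt detected.
print(is_head_tilted((0.0, 1.0), (2.0, 1.0)))   # False
```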
9. The method of any one of claims 1 to 3, wherein said detecting a sitting posture of said target subject comprises:
if the distance between the target position of a third target key point and the target position of a fourth target key point in the plurality of key points in the height direction of the target object is not greater than a distance threshold, determining that the sitting posture of the target object is a target sitting posture;
the target position is the initial position or the corrected position, and the third target key point and the fourth target key point are key points of two positions in the height direction of the target object.
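Claim 9's check compares the height-direction distance between two keypoints at different heights against a threshold; the keypoint choice and the pixel threshold below are illustrative assumptions.

```python
def is_posture_slumped(p_upper, p_lower, distance_threshold=50.0):
    """Target posture when the height-direction distance between two
    keypoints (e.g. an eye and a shoulder, in pixels) is not greater
    than a threshold, suggesting e.g. a hunched, head-down posture."""
    height_gap = abs(p_upper[1] - p_lower[1])
    return height_gap <= distance_threshold

# Large eye-to-shoulder vertical gap: posture not flagged.
print(is_posture_slumped((120.0, 80.0), (130.0, 260.0)))   # False
```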
10. A terminal, characterized in that the terminal comprises: the system comprises a camera, a processor and a gravity sensor;
the camera is used for responding to the sitting posture detection instruction and shooting the target object to obtain a shot image;
the processor is configured to:
determining initial positions of a plurality of key points of the target object in the shot image;
determining an inclination angle of a reference surface of the terminal relative to a horizontal plane based on the gravitational acceleration of the terminal detected by the gravity sensor, wherein the reference surface of the terminal is perpendicular to a screen of the terminal and is parallel to a first edge of the screen;
if the terminal is determined not to be placed obliquely based on the inclination angle, detecting the sitting posture of the target object based on the initial positions of the key points;
if the terminal is determined to be placed obliquely based on the inclination angle, detecting the sitting posture of the target object based on the corrected positions of the key points, wherein the corrected position of each key point is obtained by adjusting the initial position of the key point based on the inclination angle;
and if the sitting posture of the target object is the target sitting posture, sending first prompt information, wherein the first prompt information is used for prompting the target object to adjust the sitting posture.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210209655.4A CN114596633A (en) | 2022-03-04 | 2022-03-04 | Sitting posture detection method and terminal |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114596633A true CN114596633A (en) | 2022-06-07 |
Family
ID=81808021
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114972824A (en) * | 2022-06-24 | 2022-08-30 | 小米汽车科技有限公司 | Rod detection method and device, vehicle and storage medium |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2008204384A (en) * | 2007-02-22 | 2008-09-04 | Canon Inc | Image pickup device, object detection method and posture parameter calculation method |
CN101841603A (en) * | 2010-05-12 | 2010-09-22 | 中兴通讯股份有限公司 | Method for implementing automatic adjustment of photo direction and mobile terminal |
CN106781327A (en) * | 2017-03-09 | 2017-05-31 | 广东小天才科技有限公司 | Sitting posture correction method and mobile terminal |
CN111432074A (en) * | 2020-04-21 | 2020-07-17 | 青岛联合创智科技有限公司 | Method for assisting mobile phone user in acquiring picture information |
CN112712053A (en) * | 2021-01-14 | 2021-04-27 | 深圳数联天下智能科技有限公司 | Sitting posture information generation method and device, terminal equipment and storage medium |
CN113887496A (en) * | 2021-10-21 | 2022-01-04 | 广州小鹏自动驾驶科技有限公司 | Human body posture expression method and device |
Non-Patent Citations (1)
Title |
---|
GUO Liqi; LIN Tao; CHEN Feng; YE Bingxu; CHEN Shuying; WANG Xinjia; CHEN Yaowen: "Design of a sitting posture monitoring information acquisition system" (坐姿监测信息采集系统的设计), Chinese Journal of Stereology and Image Analysis, no. 01, 25 March 2020 (2020-03-25) *
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||