CN112712053A - Sitting posture information generation method and device, terminal equipment and storage medium

Info

Publication number: CN112712053A
Authority: CN (China)
Prior art keywords: sitting posture, image, index, user, target
Legal status: Pending
Application number: CN202110047353.7A
Other languages: Chinese (zh)
Inventors: 徐强 (Xu Qiang), 胡晓华 (Hu Xiaohua)
Assignee (current and original): Shenzhen Shuliantianxia Intelligent Technology Co Ltd
Application filed by Shenzhen Shuliantianxia Intelligent Technology Co Ltd
Priority: CN202110047353.7A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques

Abstract

The application relates to the field of machine vision and provides a sitting posture information generation method and device. The method comprises the following steps: acquiring a first sitting posture image of a user through a first acquisition component and a second sitting posture image of the user through a second acquisition component; determining a target sitting posture type of the user based on the first sitting posture image and/or the second sitting posture image, the target sitting posture type being associated with at least one characteristic index; determining an index value corresponding to the characteristic index based on the first sitting posture image and the second sitting posture image; and generating sitting posture information of the user based on the target sitting posture type and the index value. By configuring a characteristic index for each sitting posture type and, after the sitting posture type is identified, determining the specific value of the characteristic index from the images obtained by the two cameras, the method describes the user's specific sitting posture more clearly and so makes it easier to guide the user in adjusting the sitting posture.

Description

Sitting posture information generation method and device, terminal equipment and storage medium
Technical Field
The application belongs to the field of machine vision, and particularly relates to a sitting posture information generation method and device, terminal equipment and a storage medium.
Background
As education becomes increasingly intelligent, a variety of smart education products have appeared. When people study, most of their attention is devoted to the study itself, so they easily overlook an incorrect sitting posture, which over time causes health problems and reduces learning efficiency. An intelligent sitting posture detection product is therefore needed, one that detects whether the user's sitting posture is abnormal so that the sitting posture can be corrected.
In the prior art, a sitting posture identification model is generally constructed by detecting key points of a human body to identify the sitting posture of a user.
Disclosure of Invention
The embodiments of the application provide a sitting posture information generation method and device, a terminal device, and a storage medium. A characteristic index can be configured for each sitting posture type; after the sitting posture type is identified, a specific value of the characteristic index is determined from the images obtained by two acquisition components. This specific value describes the user's concrete sitting posture more clearly and so makes it easier to guide the user in adjusting the sitting posture, solving the problem that prior-art sitting posture detection methods can only distinguish the sitting posture type and do not describe the concrete sitting posture clearly.
In a first aspect, an embodiment of the present application provides a method for generating sitting posture information, including: acquiring a first sitting posture image of a user through a first acquisition component, and acquiring a second sitting posture image of the user through a second acquisition component; determining a target sitting posture type of the user based on the first sitting posture image and/or the second sitting posture image; the target sitting posture type is associated with at least one characteristic index; determining an index value corresponding to the characteristic index based on the first sitting posture image and the second sitting posture image; generating sitting posture information of the user based on the target sitting posture type and the index value.
In a second aspect, an embodiment of the present application provides a sitting posture information generating apparatus, including: the acquisition module is used for acquiring a first sitting posture image of a user through the first acquisition component and acquiring a second sitting posture image of the user through the second acquisition component; a target sitting posture type determination module for determining a target sitting posture type of the user based on the first sitting posture image and/or the second sitting posture image; the target sitting posture type is associated with at least one characteristic index; the characteristic index determining module is used for determining an index value corresponding to the characteristic index based on the first sitting posture image and the second sitting posture image; and the sitting posture information generating module is used for generating the sitting posture information of the user based on the target sitting posture type and the index value.
In a third aspect, an embodiment of the present application provides a terminal device, including: a memory, a processor and a computer program stored in the memory and executable on the processor, the processor implementing the method of any of the above first aspects when executing the computer program.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium, including: the computer readable storage medium stores a computer program which, when executed by a processor, implements the method of any of the first aspects described above.
In a fifth aspect, the present application provides a computer program product, which when run on a terminal device, causes the terminal device to execute the method of any one of the above first aspects.
It is understood that the beneficial effects of the second aspect to the fifth aspect can be referred to the related description of the first aspect, and are not described herein again.
Compared with the prior art, the embodiments of the application have the following advantages:
the method configures a characteristic index for each sitting posture type and, after the sitting posture type is identified, determines the specific value of the characteristic index from the images acquired by the two acquisition components. The specific value of the characteristic index describes the user's concrete sitting posture more clearly, and the user sitting posture information generated from it makes it convenient to guide the user in adjusting the sitting posture, solving the problem that prior-art sitting posture detection methods can only judge the sitting posture type and do not describe the concrete sitting posture clearly enough.
Drawings
To illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed for the embodiments or the description of the prior art are briefly introduced below. The drawings described below are only some embodiments of the present application; those skilled in the art can obtain other drawings from them without inventive effort.
Fig. 1 is a flowchart of an implementation of a sitting posture information generating method provided in a first embodiment of the present application;
fig. 2 is a flowchart of an implementation of a sitting posture information generating method provided in the second embodiment of the present application;
FIG. 3 is a schematic diagram of a feature index provided in an embodiment of the present application;
FIG. 4 is a schematic diagram of an application scenario provided by an embodiment of the present application;
fig. 5 is a schematic diagram of an implementation of a sitting posture information generating method provided in the third embodiment of the present application;
fig. 6 is a schematic diagram of an implementation of a sitting posture information generating method provided in the fourth embodiment of the present application;
FIG. 7 is a schematic view of a rotation vector provided in a fourth embodiment of the present application;
fig. 8 is a logic flow diagram of a sitting posture information generating method provided in a fifth embodiment of the present application;
fig. 9 is a flowchart of an implementation of a sitting posture information generating method provided in the sixth embodiment of the present application;
fig. 10 is a schematic structural diagram of a sitting posture information generating apparatus provided in an embodiment of the present application;
fig. 11 is a schematic structural diagram of a terminal device according to an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon", "in response to determining", or "in response to detecting". Similarly, the phrase "if it is determined" or "if a [described condition or event] is detected" may be interpreted contextually to mean "upon determining", "in response to determining", "upon detecting [the described condition or event]", or "in response to detecting [the described condition or event]".
Furthermore, in the description of the present application and the appended claims, the terms "first," "second," "third," and the like are used for distinguishing between descriptions and not necessarily for describing or implying relative importance.
Reference throughout this specification to "one embodiment" or "some embodiments," or the like, means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," or the like, in various places throughout this specification are not necessarily all referring to the same embodiment, but rather "one or more but not all embodiments" unless specifically stated otherwise. The terms "comprising," "including," "having," and variations thereof mean "including, but not limited to," unless expressly specified otherwise.
In the embodiments of the present application, the main execution body of the flow is a terminal device. Terminal devices include, but are not limited to, servers, computers, smartphones, tablet computers, and other devices that can execute the sitting posture information generation method provided by the application. Optionally, the terminal device is arranged on the user's desk and can directly acquire the first sitting posture image and the second sitting posture image of the user. Fig. 1 shows a flowchart of an implementation of a sitting posture information generation method according to the first embodiment of the present application, detailed as follows:
in S101, a first sitting posture image of the user is acquired by the first acquisition part, and a second sitting posture image of the user is acquired by the second acquisition part.
In this embodiment, the first acquisition component and the second acquisition component are placed at different positions. Generally, the first sitting posture image and the second sitting posture image of the user are obtained respectively by a first camera and a second camera placed at different positions. Optionally, the first acquisition component and the second acquisition component have the same equipment specification, which ensures that the first sitting posture image and the second sitting posture image have the same size and avoids errors in the subsequent determination of the index value corresponding to the characteristic index; for example, the two acquisition components may be the two cameras of a single binocular camera. The acquisition components are set up before the generation method of the present embodiment is performed; for example, the first camera and the second camera may be placed on a desk. Optionally, the shooting angles of the first camera and the second camera are the same (i.e. the shooting center lines of the two cameras are parallel to each other in the spatial coordinate system; the shooting center line of a camera, see the horizontal dotted line AA' of camera A shown in fig. 4, is the straight line parallel to the camera's shooting direction that passes through the camera's optical center) and face the user's chair, so as to obtain the first sitting posture image and the second sitting posture image of the user's sitting posture. Optionally, referring to fig. 4, which shows an application scenario provided in an embodiment of the present application, A is the first camera and B is the second camera; the two cameras may be arranged on a desk lamp with an illumination function, to integrate functions and save desktop space. Optionally, the first camera and the second camera are disposed on the support column of the desk lamp, i.e. on the same vertical line, so that when the index value of the characteristic index is subsequently calculated, only the heights of the two cameras need to be considered and their horizontal positions can be ignored, simplifying the subsequent determination of the index value corresponding to the characteristic index.
In a possible implementation manner, sitting posture images (including the first sitting posture image and the second sitting posture image) corresponding to each acquisition cycle of the user are acquired according to a preset acquisition cycle, so as to realize real-time monitoring of whether the sitting posture of the user is standard. Illustratively, with one second as the acquisition period, a sitting posture image corresponding to the second of the user is acquired every second.
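As an illustration of this acquisition loop, the following minimal Python sketch captures one image pair per acquisition cycle. It uses OpenCV, which the patent does not name; the camera indices and the one-second period are assumptions taken from the example above.

```python
import time
import cv2  # OpenCV is an assumption; the patent does not name a capture library

# Camera indices 0 and 1 are hypothetical identifiers for the
# first and second acquisition components.
first_cam = cv2.VideoCapture(0)
second_cam = cv2.VideoCapture(1)

ACQUISITION_PERIOD_S = 1.0  # one-second acquisition period, as in the example

def acquire_sitting_posture_images():
    """Grab one first/second sitting posture image pair."""
    ok1, first_image = first_cam.read()
    ok2, second_image = second_cam.read()
    if not (ok1 and ok2):
        raise RuntimeError("failed to read from one of the acquisition components")
    return first_image, second_image

while True:
    first_img, second_img = acquire_sitting_posture_images()
    # ... hand the pair to posture-type recognition (S102) and
    # index-value computation (S103) ...
    time.sleep(ACQUISITION_PERIOD_S)
```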
It should be understood that the terminal device may be the desk lamp shown in fig. 4, which includes the first acquisition component and the second acquisition component; the terminal device then acquires the first and second sitting posture images through these components. The terminal device may also be a device other than the desk lamp; in that case, the terminal device establishes a communication connection with the first and second acquisition components and receives the first sitting posture image sent by the first acquisition component and the second sitting posture image sent by the second acquisition component.
In S102, a target sitting posture type of the user is determined based on the first sitting posture image and/or the second sitting posture image.
In this embodiment, the target sitting posture type is associated with at least one characteristic index. The target sitting posture type is the sitting posture type of the user at the acquisition time of the first sitting posture image and/or the second sitting posture image, such as head lowered, head raised, bending forward, head leaning left, head leaning right, body leaning left, body leaning right, left shoulder raised, or right shoulder raised. The characteristic index is a characteristic identifier of the target sitting posture type that quantifies it; that is, the characteristic index describes the degree of abnormality of the target sitting posture type. The characteristic index is typically one key point or a combination of at least two key points. For example, if the target sitting posture type is head lowered, the characteristic index may be the eyebrow center key point, identifying the user's head; the height of the eyebrow center key point quantifies the head-lowered sitting posture type and represents the degree (i.e. severity) to which the user lowers the head, for use in subsequently generating the sitting posture information. It should be understood that each target sitting posture type is preconfigured with corresponding characteristic indices.
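As an illustration only (the patent does not prescribe a data structure), this preconfigured association can be pictured as a simple lookup table; all type and key point names below are assumptions.

```python
# Hypothetical mapping from sitting posture type to the key points
# that serve as its characteristic index; names are illustrative.
FEATURE_INDEX_BY_POSTURE = {
    "head_down":        ["eyebrow_center"],                 # quantified by eyebrow height
    "head_up":          ["eyebrow_center"],
    "head_lean_left":   ["left_eye", "right_eye"],          # height difference
    "body_lean_left":   ["left_shoulder", "right_shoulder"],
    "left_shoulder_up": ["left_shoulder", "neck"],          # height difference
}

def feature_index_for(posture_type: str) -> list[str]:
    """Look up the characteristic index configured for a sitting posture type."""
    return FEATURE_INDEX_BY_POSTURE[posture_type]
```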
Taking the first sitting posture image as an example, in a possible implementation manner, determining the target sitting posture type of the user based on the first sitting posture image and/or the second sitting posture image may specifically be: identifying a key point set in the first sitting posture image, and identifying the target sitting posture type according to the distribution of the key point set in the first sitting posture image and a sitting posture detection model. The distribution refers to the association between the key points in the key point set and their position information in the first sitting posture image. It should be understood that identifying the key point set in the first sitting posture image may specifically be done with an OpenPose human key point recognition model. The sitting posture detection model is obtained by training on the key point set of each training image in a training image set. Illustratively, the key point set includes eight key points: a left eye key point, a right eye key point, a nose key point, a left ear key point, a right ear key point, a left shoulder key point, a right shoulder key point, and a neck key point. When the sitting posture detection model is trained, the key point set of each training image is used as input, the target sitting posture type corresponding to each training image (configured in advance) is used as output, and the parameters of the sitting posture detection model are adjusted continuously until its output accuracy reaches a preset percentage. It should be understood that, to determine the output accuracy of the trained sitting posture detection model, a part of the training images in the training image set can be set aside as a verification image set, and after each training period the output accuracy of the trained model is determined on the verification image set.
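For concreteness, a minimal keypoint-extraction sketch follows. It uses MediaPipe Pose as a stand-in for the OpenPose model named above; that substitution is an assumption, chosen only because MediaPipe's Python API is compact.

```python
import cv2
import mediapipe as mp  # stand-in for OpenPose; an assumption, not the patent's choice

def extract_keypoint_set(image_bgr):
    """Return normalized (x, y) body key points for one sitting posture image."""
    with mp.solutions.pose.Pose(static_image_mode=True) as pose:
        results = pose.process(cv2.cvtColor(image_bgr, cv2.COLOR_BGR2RGB))
    if results.pose_landmarks is None:
        return None  # no person detected in the image
    return [(lm.x, lm.y) for lm in results.pose_landmarks.landmark]
```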
Optionally, if the terminal device determines the target sitting posture type according to both the first sitting posture image and the second sitting posture image, it may determine a first sitting posture type corresponding to the first sitting posture image and a second sitting posture type corresponding to the second sitting posture image respectively, and judge whether the two are the same. If the first sitting posture type is the same as the second sitting posture type, the target sitting posture type is output; if they differ, sitting posture type recognition error information is generated, and/or sitting posture images of the user are acquired again through the first acquisition component and the second acquisition component.
In S103, an index value corresponding to the feature index is determined based on the first sitting posture image and the second sitting posture image.
In this embodiment, the index value is the parameter (including a specific numerical value) by which the feature index quantifies the target sitting posture type. For example, if the target sitting posture type is head lowered and the feature index is the eyebrow center key point, the target sitting posture type describes an abnormality of the user's sitting posture in the vertical direction, so the index value of the feature index may be a parameter that captures position change in the vertical direction, namely the height of the eyebrow center key point. Illustratively, if the target sitting posture type is body leaning left and the characteristic index is the left shoulder key point and the right shoulder key point, the target sitting posture type describes an abnormality of the user's sitting posture in the horizontal direction (the body inclines to the left), so the index value of the characteristic index should likewise be a horizontal parameter. Taking the spatial straight line on which the first and second acquisition components both lie as reference, that parameter is the horizontal distance of the left shoulder key point from this line and the horizontal distance of the right shoulder key point from this line, or the difference between the two horizontal distances.
Referring to fig. 3, fig. 3 is a schematic diagram illustrating a characteristic index provided by an embodiment of the present application, and by way of example, fig. 3 may be the first sitting posture image or the second sitting posture image, where O is the image center of fig. 3; the feature index is a feature key point in the first sitting posture image or the second sitting posture image, and referring to fig. 3, the feature key point includes: e1 is a key point for the left eye, e2 is a key point for the right eye, m is a key point for the eyebrow, n is a key point for the nose, s1 is a key point for the left shoulder, s2 is a key point for the right shoulder, and j is a key point for the neck. For example, if the target sitting posture type is head left deviation, the feature index includes the left-eye key point and the right-eye key point, and the index value of the feature index refers to a height difference between the left-eye key point and the right-eye key point. For example, if the target sitting posture type is a left shoulder rising, the feature index includes the left shoulder key point and the neck key point, and the index value of the feature index refers to a height difference between the left shoulder key point and the neck key point.
In a possible implementation manner, determining the index value corresponding to the feature index based on the first sitting posture image and the second sitting posture image may specifically be: identifying the characteristic index in the first sitting posture image and in the second sitting posture image respectively, and determining, from the image coordinates of the characteristic index in the two images, a first orientation of the characteristic index relative to the first acquisition component and a second orientation relative to the second acquisition component in a spatial coordinate system; determining a first acquisition position of the first acquisition component and a second acquisition position of the second acquisition component in the spatial coordinate system; determining a first straight line of the characteristic index in the spatial coordinate system based on the first acquisition position and the first orientation, the first straight line representing the possible positions of the characteristic index in the spatial coordinate system as determined from the first sitting posture image; similarly determining a second straight line of the characteristic index in the spatial coordinate system, and identifying the intersection point of the first straight line and the second straight line as the spatial position of the characteristic index; and determining the index value (including its specific numerical value) corresponding to the characteristic index based on that spatial position. It will be appreciated that the first acquisition component and the second acquisition component are at different positions, so the first line and the second line cannot be parallel; and since the characteristic index occupies a unique position in the spatial coordinate system, the first line and the second line necessarily intersect.
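In practice the two back-projected rays rarely meet exactly, so a common realization of this intersection (a sketch using NumPy, which the patent does not name) takes the midpoint of the shortest segment between the rays, which coincides with the intersection point whenever one exists.

```python
import numpy as np

def keypoint_position(p1, d1, p2, d2):
    """Spatial position of a feature key point from two camera rays.

    p1, p2: acquisition positions (3-vectors); d1, d2: direction vectors
    derived from each image's deviation angles.  Returns the midpoint of
    the shortest segment between the two rays.
    """
    p1, d1, p2, d2 = map(np.asarray, (p1, d1, p2, d2))
    # Solve [d1 -d2] [t s]^T = p2 - p1 in the least-squares sense.
    A = np.stack([d1, -d2], axis=1)
    t, s = np.linalg.lstsq(A, p2 - p1, rcond=None)[0]
    return ((p1 + t * d1) + (p2 + s * d2)) / 2.0

# Example with hypothetical values: cameras 0.10 m apart on a vertical
# support column, both rays aimed at a point in front of them.
C = keypoint_position([0, 0, 0.30], [0, 1, 0.2], [0, 0, 0.40], [0, 1, 0.0])
```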
It should be understood that, if the first and second acquisition positions are position information relative to a desktop, the index value of the feature index is also a value relative to the desktop, for example, the height of the eyebrow center key point is the relative height of the eyebrow center key point relative to the desktop.
In S104, based on the target sitting posture type and the index value, sitting posture information of the user is generated.
In the embodiment, the sitting posture information is used for describing the sitting posture of the user, and compared with the prior art, the sitting posture information generated in the embodiment can more clearly describe the specific sitting posture of the user, including the target sitting posture type of the user and the index value quantifying the target sitting posture type. The sitting posture information can be used for monitoring the current sitting posture of the user in real time and guiding the user to specifically adjust the sitting posture, and specifically, the sitting posture information can be broadcasted to remind the user to adjust the sitting posture and serve as a specific reference basis for the user to adjust the sitting posture so as to guide the user to accurately adjust the sitting posture.
In a possible implementation manner, generating the sitting posture information of the user based on the target sitting posture type and the index value may specifically be: packaging the target sitting posture type and the index value of the characteristic index to generate the sitting posture information. Illustratively, if the target sitting posture type is head lowered, the characteristic index is the eyebrow center key point, and the index value is the relative height of the eyebrow center key point with respect to the desktop (for example, a specific value N), then the generated sitting posture information includes the sitting posture type "head lowered" and the index value of its associated characteristic index, "the height of the eyebrow center above the desktop is N".
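A minimal sketch of this packaging step follows; the JSON layout and all field names are assumptions made for illustration, not the patent's format.

```python
import json

def generate_sitting_posture_info(posture_type: str, index_values: dict) -> str:
    """Package the target sitting posture type and its index values.

    The field names are hypothetical; the patent only requires that the
    type and the index values be combined into one piece of information.
    """
    return json.dumps({"sitting_posture_type": posture_type,
                       "feature_index_values": index_values},
                      ensure_ascii=False)

# e.g. head lowered, with eyebrow-center height N = 0.35 m above the desktop
info = generate_sitting_posture_info("head_down", {"eyebrow_height_m": 0.35})
```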
In this embodiment, at least one characteristic index may be configured for each different sitting posture type, after the target sitting posture type is identified, an index value (including a specific numerical value) of the characteristic index is determined according to the first sitting posture image and the second sitting posture image acquired by the two acquisition components with different acquisition positions, and the index value of the characteristic index may be used to describe a specific sitting posture condition of a user more clearly, and the user sitting posture is displayed in a digitized form, so that the user can know the user sitting posture more intuitively, and can conveniently guide the user to accurately adjust the sitting posture according to the sitting posture information.
Fig. 2 shows a flowchart of an implementation of a sitting posture information generation method provided in the second embodiment of the present application. Referring to fig. 2, in comparison with the embodiment shown in fig. 1, the method for generating sitting posture information S103 provided in this embodiment includes S201 to S204, which are detailed as follows:
in this embodiment, the feature index is a feature key point existing in both the first sitting posture image and the second sitting posture image. The description of the feature key point can refer to the description of S103, and is not repeated here. It should be noted that, in the present embodiment, the feature index is a feature key point existing in both the first sitting posture image and the second sitting posture image, but not a feature key point existing in the first sitting posture image or the second sitting posture image.
Further, the determining an index value corresponding to the feature index based on the first sitting posture image and the second sitting posture image includes:
in S201, a first acquisition position of the first acquisition component and a second acquisition position of the second acquisition component are acquired.
In this embodiment, the first acquisition component and the second acquisition component are preset, i.e. the first acquisition position and the second acquisition position are preset. In a possible implementation manner, obtaining the first acquisition position of the first acquisition component and the second acquisition position of the second acquisition component may specifically be querying the first acquisition position and the second acquisition position stored in advance. The two positions may also be determined based on a user operation; specifically, the terminal device may receive the first acquisition position and the second acquisition position input by a user. They may also be obtained from a distance measuring sensor; taking the first acquisition component as an example, a radio frequency component is installed in the first acquisition component, and the distance measuring sensor obtains an identification signal sent by the radio frequency component, thereby determining the position of the first acquisition component relative to the distance measuring sensor as the first acquisition position.
In S202, a first deviation angle corresponding to the characteristic indicator within the first sitting posture image is determined based on the first acquisition position.
In this embodiment, the first deviation angle refers to a deviation angle of the characteristic index with respect to the shooting center line of the first collection component, that is, an included angle between a connection line between the characteristic index and the optical center of the first collection component and the shooting center line of the first collection component on a spatial coordinate system.
Specifically, referring to fig. 4, in an application scenario provided by an embodiment of the present application, the characteristic index is an eyebrow center key point, where a is the first acquisition component, B is the second acquisition component, C is the eyebrow center key point, a straight line AA 'is a shooting center line of the first acquisition component, and a straight line BB' is a shooting center line of the second acquisition component. The shooting directions of the first acquisition component and the second acquisition component are parallel and horizontal, and the first acquisition component and the second acquisition component are arranged on a support column of the table lamp which is vertically arranged, namely the first acquisition component and the second acquisition component are positioned on the same vertical line; the first deviation angle is shown as angle Q1.
The determining of the first deviation angle corresponding to the characteristic indicator in the first sitting posture image based on the first collection position may specifically be: the feature indicator is marked in the first sitting posture image, and the first deviation angle is determined according to the relative position relation between the feature indicator and the image center of the first sitting posture image, in particular, according to the first image coordinate of the feature indicator in the first sitting posture image and the equipment parameter of the first acquisition component.
Referring specifically to fig. 3, which shows a characteristic index diagram provided by an embodiment of the present application, fig. 3 is taken as the first sitting posture image for illustration. Point O is the image center and point M is the eyebrow center key point; O and M lie on the same vertical line, a line perpendicular to the upper or lower edge of the first sitting posture image, i.e. point O, point M, the upper edge center, and the lower edge center of the first sitting posture image are all on one straight line. The distance ratio of the distance between M and O to the height of the first sitting posture image is then calculated, where the height of the first sitting posture image is the distance between its upper and lower edges (i.e. of fig. 3). The field angle in the vertical direction is determined from the device parameters of the first acquisition component, i.e. the angle, in the spatial coordinate system, between the uppermost and lowermost extents of the picture shot by the first acquisition component; in other words, the angle formed at the optical center of the first acquisition component by the upper edge center and the lower edge center of fig. 3. The first deviation angle is then determined from the distance ratio and the field angle. It should be understood that the first sitting posture image shown in fig. 3 can be obtained only by adjusting the shooting angle of the first acquisition component so that its shooting center line passes through the spatial vertical line on which the eyebrow center key point lies (i.e. the line pointing from the eyebrow center toward the center of the earth).
It should be understood that, for the first sitting posture image obtained by the first acquisition component, each point in the image corresponds to a deviation angle, and the device parameters of the first acquisition component determine the deviation angle of each point; i.e. the deviation angle of a point can be determined from its relative position in the sitting posture image. Illustratively, the deviation angle of point O shown in fig. 3 is zero; if the field angle of the first acquisition component in the vertical direction is 160°, the deviation angle corresponding to the center of the upper edge of fig. 3 is 80°, and the deviation angle corresponding to the center of the lower edge is also 80°. For example, if point O and point M shown in fig. 3 were not on the same vertical line, the straight line through O and M would be taken as the identification line; its intersection with the upper edge of the first sitting posture image is point t, its intersection with the lower edge is point d, and the field angle of segment dt in space is determined from the device parameters of the first acquisition component. The first deviation angle of point M is then determined from the proportion of segment dt occupied by segment OM and that field angle.
It should be understood that the device parameters of the first acquisition component determine a parametric formula describing the deviation angle corresponding to each point within the first sitting posture image, which may specifically be:

β = χ × α

where β is the first deviation angle, χ is the ratio of the length of segment OM to the length of segment dt, and α is the spatial field angle of segment dt.
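A small sketch of this computation follows. It assumes the linear pixel-to-angle mapping β = χ × α given above and treats the field angle as known from the camera's device parameters; the pixel values in the example are hypothetical.

```python
def deviation_angle(keypoint_px: float, center_px: float,
                    line_len_px: float, field_angle_deg: float) -> float:
    """First deviation angle beta = chi * alpha.

    keypoint_px / center_px: pixel coordinates of point M and image center O
    along the identification line; line_len_px: length of segment dt in
    pixels; field_angle_deg: spatial field angle of dt.  Assumes the
    linear mapping reconstructed above, an approximation.
    """
    chi = abs(keypoint_px - center_px) / line_len_px
    return chi * field_angle_deg

# Consistent with the text: a 160-degree vertical field angle puts the
# image's upper-edge center (chi = 0.5) at an 80-degree deviation.
assert deviation_angle(0, 540, 1080, 160.0) == 80.0
```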
In S203, a second deviation angle corresponding to the characteristic indicator in the second sitting posture image is determined based on the second acquisition position.
In this embodiment, the determining of the second deviation angle corresponding to the characteristic indicator in the second sitting posture image based on the second collecting position specifically refers to the related description of S202, and is not repeated herein; note that, referring to fig. 4, the second deviation angle is an angle Q2 shown in fig. 4.
In S204, an index value corresponding to the feature index is calculated based on the first collection position, the second collection position, the first deviation angle, and the second deviation angle.
Referring to fig. 4, the above calculating the index value corresponding to the characteristic index based on the first collecting position, the second collecting position, the first deviation angle, and the second deviation angle may specifically be: determining a first straight line (i.e. the straight line a shown in fig. 4) corresponding to the characteristic indicator according to the first acquisition position and the first deviation angle, where the first straight line is used to represent a position where the characteristic indicator that can be determined according to the first sitting posture image may appear on a spatial coordinate system; similarly, a second straight line (i.e. the straight line b shown in fig. 4) corresponding to the feature indicator on the spatial coordinate system is determined, and an intersection point of the first straight line and the second straight line on the spatial coordinate system is identified as the spatial position of the feature indicator; and determining an index value corresponding to the characteristic index based on the spatial position of the characteristic index.
Further, the method for generating sitting posture information S204 provided in this embodiment includes steps S2041 to S2043, which are detailed as follows:
the calculating an index value corresponding to the characteristic index based on the first collection position, the second collection position, the first deviation angle and the second deviation angle includes:
in S2041, the first collecting position and the second collecting position are marked and connected in a preset spatial coordinate system, so as to obtain a reference connecting line.
In this embodiment, referring to fig. 4, the reference connecting line is the line between the points corresponding to the first and second acquisition components in the spatial coordinate system, i.e. the connecting line AB shown in fig. 4. The spatial coordinate system takes the contact point between the illustrated desk lamp and the illustrated desktop as its origin and the support column of the desk lamp as a first coordinate axis, so the reference connecting line AB can be obtained from the first acquisition position A and the second acquisition position B.
In S2042, a spatial triangle is constructed in the spatial coordinate system according to the first deviation angle and the second deviation angle corresponding to the reference connecting line and the characteristic index.
In this embodiment, the spatial triangle takes the feature index as a vertex and the reference connecting line as a side, and the reference connecting line is opposite to the feature index in the spatial triangle. Referring to fig. 4, a ray a is generated based on the first deviation angle with one end of the reference connecting line as a starting point, i.e., point a, and a ray B is generated based on the second deviation angle with the other end of the reference connecting line as a starting point, i.e., point B, where the ray a, the ray B and the reference connecting line enclose a spatial triangle ABC, where a vertex C of the spatial triangle ABC (i.e., an intersection of the ray a and the ray B) is the feature index (i.e., the key point of the eyebrow).
In S2043, an index value corresponding to the feature index is determined according to the feature values of the space triangle in a plurality of preset dimensions.
Illustratively, referring to fig. 4, fig. 4 is a schematic diagram of an application scenario in which the target sitting posture type is head lowered or head raised, the characteristic index is the eyebrow center key point, and the index value corresponding to the characteristic index is the height of the eyebrow center key point relative to the desktop. Referring to fig. 4, the angle θ1 in the illustration is calculated from the first deviation angle Q1 and the angle θ2 is calculated from the second deviation angle Q2; specifically, Q1 and θ1 together form a right angle, as do Q2 and θ2, so θ1 = 90° − Q1 and θ2 = 90° − Q2. It should be understood that if the spatial triangle ABC is an obtuse triangle in which θ1 is the obtuse angle, θ1 is instead obtained by adding 90° to Q1. It should also be understood that whether θ1 is obtuse or acute has no influence on the subsequent calculation with cot(θ1), because |cot(90° − Q1)| = |cot(90° + Q1)|.
In a possible implementation manner, the preset dimension may be the spatial vertical dimension or the spatial horizontal dimension. Determining the index value corresponding to the feature index according to the feature values of the space triangle in a plurality of preset dimensions may specifically be: calculating the perpendicular distance Z between the vertex C and the base AB of the space triangle according to the spatial geometric relation; specifically, the equation AB = Z·cot(θ1) + Z·cot(θ2) is set up, where AB is the length of the base side AB, which can be obtained from the height difference between the first and second acquisition positions. With AB, θ1, and θ2 known, the equation is solved for Z (the spatial horizontal dimension). Then, using the base angle θ2 of the space triangle and the height H (a spatial vertical dimension) of point B relative to the desktop, where H can be obtained from the height difference between the second acquisition position and the desktop, the height h of the eyebrow center key point relative to the desktop is calculated as h = H + Z·cot(θ2).
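Worked out in code, the two steps above become the following sketch under the stated geometry; the numeric values at the end are hypothetical.

```python
import math

def eyebrow_height(AB: float, theta1_deg: float, theta2_deg: float,
                   H: float) -> float:
    """Height h of the eyebrow center key point above the desktop.

    AB: baseline length (height difference between the two acquisition
    positions); theta1, theta2: base angles of the spatial triangle at
    A and B; H: height of point B above the desktop.
    """
    cot1 = 1.0 / math.tan(math.radians(theta1_deg))
    cot2 = 1.0 / math.tan(math.radians(theta2_deg))
    Z = AB / (cot1 + cot2)   # from AB = Z*cot(theta1) + Z*cot(theta2)
    return H + Z * cot2      # h = H + Z*cot(theta2)

# Hypothetical values: cameras 0.10 m apart, point B 0.30 m above the desk.
h = eyebrow_height(AB=0.10, theta1_deg=75.0, theta2_deg=80.0, H=0.30)
```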
It will be appreciated that, with reference to fig. 4, the horizontal distance of the key points of the eyebrow center is Z; in another application scenario, the feature indicator is a combination of at least two feature key points existing in the first sitting posture image and the second sitting posture image at the same time, and illustratively, the target sitting posture type is head left turn, that is, the head of the user turns left by a certain angle; at this time, the characteristic index is a left-eye key point and a right-eye key point, and the index value of the characteristic index is a horizontal distance difference between the left-eye key point and the right-eye key point; because the horizontal distance of the left-eye key point is increased (i.e. away from the reference connection line) and the horizontal distance of the right-eye key point is decreased (i.e. close to the reference connection line) when the head of the user turns left (i.e. looks to the left), the horizontal distance difference can be used to describe the rotation degree of the head of the user; the specific implementation of calculating the horizontal distance between the key points of the left eye and the right eye can refer to the above description of calculating the horizontal distance between the key points of the eyebrow center, and is not described herein again.
It should be understood that the embodiments of the present application are applicable to various application scenarios, that is, the target sitting posture type may be various sitting posture types. For example, in addition to the sitting posture types such as head-lowering and head-left turning, the target sitting posture type may be head-left deviation (i.e. the head of the user tilts left while keeping the face direction unchanged), the vertical distance difference between the left-eye key point and the right-eye key point is increased (the vertical distance difference between the left-eye key point and the right-eye key point is zero in normal sitting posture), and the index value of the feature index refers to the vertical distance difference between the left-eye key point and the right-eye key point. For example, the target sitting posture type may be a left shoulder rising (i.e. the user performs a shoulder shrugging action on the left shoulder), and then the vertical distance difference between the left shoulder key point and the neck key point is increased (the vertical distance difference between the left shoulder key point and the neck key point is zero in a normal sitting posture), and the index value of the feature index indicates the vertical distance difference between the left shoulder key point and the neck key point. For example, the target sitting posture type may be a body left turn (i.e. the user body turns left at a certain angle), when the horizontal distance of the left shoulder key point is increased (i.e. away from the reference connection line), the horizontal distance of the right shoulder key point is decreased (i.e. close to the reference connection line), the horizontal distance difference may be used to describe the rotation degree of the user body, and the index value of the characteristic index refers to the horizontal distance difference between the left shoulder key point and the right shoulder key point.
In this embodiment, the index value corresponding to the characteristic index is determined through a spatial mathematical relationship, so as to generate the sitting posture information subsequently.
Fig. 5 is a schematic implementation diagram of a sitting posture information generating method provided in the third embodiment of the present application. Referring to fig. 5, in comparison with the embodiment shown in fig. 1, the method for generating sitting posture information S102 provided in this embodiment includes steps S1021 to S1022, which are detailed as follows:
further, the determining a target sitting posture type of the user based on the first sitting posture image and/or the second sitting posture image comprises:
in S1021, the first sitting posture image and/or the second sitting posture image is imported into a key point recognition model, and a key point image marked with a plurality of preset key points is output.
In this embodiment, the keypoint recognition model is used to identify preset keypoints in the first sitting posture image and/or the second sitting posture image. Illustratively, the preset keypoints may be body keypoints, such as a left eye key point, a right eye key point, an eyebrow center key point, a nose key point, a mouth key point, a left ear key point, a right ear key point, a chin key point, a left shoulder key point, a right shoulder key point, and a neck key point. Illustratively, the keypoint recognition model may be a trained OpenPose human keypoint recognition model.
Taking the first sitting posture image as an example, in a possible implementation manner, importing the first sitting posture image and/or the second sitting posture image into the keypoint recognition model and outputting a keypoint image marked with a plurality of preset keypoints may specifically be: marking each preset keypoint on the first sitting posture image, extracting the marked keypoints, and outputting the keypoint image marked with the preset keypoints according to the position of each preset keypoint in the first sitting posture image.
In S1022, the key point image is imported into the sitting posture type recognition model, and the target sitting posture type is output.
In this embodiment, the sitting posture type recognition model is obtained by training on training image sets corresponding to a plurality of preset candidate sitting posture types; the target sitting posture type is one of these preset candidate sitting posture types. The sitting posture type recognition model determines the target sitting posture type from the feature information of each preset keypoint in the keypoint image. Illustratively, the sitting posture type recognition model is trained with a keras-based deep learning algorithm, taking the training image sets corresponding to the preset candidate sitting posture types as input and the candidate sitting posture type corresponding to each training image as output; the model is a classification model pre-built in a keras-based model library. It is to be understood that each candidate sitting posture type is associated with at least one feature indicator.
In a possible implementation manner, the importing the key point image into a sitting posture type recognition model, and outputting the target sitting posture type may specifically be: and taking the characteristic information of each preset key point in the key point image as input, calculating based on the internal parameters of the sitting posture type recognition model, and outputting the target sitting posture type.
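A minimal Keras sketch of such a classifier follows. The layer sizes, the nine candidate types, and the input layout of eleven (x, y) keypoints are assumptions; the patent only states that a keras-based classification model is trained on keypoint features. The random arrays stand in for real training data.

```python
import numpy as np
import tensorflow as tf

NUM_KEYPOINTS = 11      # assumed set: eyes, eyebrow center, nose, mouth,
                        # ears, chin, shoulders, neck
NUM_POSTURE_TYPES = 9   # e.g. head lowered/raised, bending, leans, shrugs

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(NUM_KEYPOINTS * 2,)),  # flattened (x, y) pairs
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(NUM_POSTURE_TYPES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# x_train: keypoint coordinates per training image; y_train: the
# preconfigured candidate sitting posture type of each image.
x_train = np.random.rand(256, NUM_KEYPOINTS * 2).astype("float32")  # placeholder data
y_train = np.random.randint(0, NUM_POSTURE_TYPES, size=256)
model.fit(x_train, y_train, epochs=5, validation_split=0.2, verbose=0)

target_type = int(np.argmax(model.predict(x_train[:1], verbose=0), axis=-1)[0])
```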
In this embodiment, the key point images of the first sitting posture image and/or the second sitting posture image are identified, that is, feature extraction related to preset key points is performed on the first sitting posture image, so that the first sitting posture image is simplified, and then the target sitting posture type is determined by introducing the sitting posture type identification model according to the key point images, so that the calculation amount for determining the target sitting posture type can be reduced, and the efficiency for determining the target sitting posture type is improved.
Fig. 6 shows an implementation schematic diagram of a sitting posture information generating method provided by a fourth embodiment of the present application. Referring to fig. 6, in comparison with the embodiment shown in fig. 5, the method S1022 for generating sitting posture information provided in this embodiment includes steps S601 to S605, which are detailed as follows:
further, the importing the key point image into a sitting posture type recognition model and outputting a target sitting posture type comprises:
in this embodiment, fig. 7 shows a schematic view of a rotation vector provided in a fourth embodiment of the present application, and the head rotation vector is taken as an example for explanation. Referring to fig. 7, the head rotation vector refers to a rotation vector between a head three-dimensional coordinate system (a coordinate system formed by an x ' axis, a y ' axis, and a z ' axis in the drawing) established based on the head orientation of the target object and a standard three-dimensional coordinate system (a coordinate system formed by an x axis, a y axis, and a z axis in the drawing) established based on the ground, the head three-dimensional coordinate system being the same as the center (point O in the drawing) of the standard three-dimensional coordinate system. It should be understood that the positive direction of the x-axis in the figure is the direction from the right ear to the left ear, the positive direction of the y-axis is the direction from the chin to the top of the head, and the positive direction of the z-axis is the direction from the back of the brain to the front nose.
Exemplarily, the head rotation vector is (u, v, w), the u is an angle value rotated by a standard three-dimensional coordinate system with an x-axis of the standard three-dimensional coordinate system as a rotation axis, the v is an angle value rotated by the standard three-dimensional coordinate system with a y-axis of the standard three-dimensional coordinate system as a rotation axis, and the w is an angle value rotated by the standard three-dimensional coordinate system with a z-axis of the standard three-dimensional coordinate system as a rotation axis; it should be understood that the standard three-dimensional coordinate system coincides with the head three-dimensional coordinate system after the three rotations.
In S601, a head rotation vector is determined based on face feature information in the keypoint image.
In this embodiment, the sitting posture type recognition model includes a head rotation vector recognition model. The head rotation vector recognition model is obtained by training based on a plurality of head rotation vector training images, and specifically, the head rotation vector recognition model is trained based on a keras deep learning algorithm by taking each head rotation vector training image as input and taking a head rotation vector corresponding to the head rotation vector training image as output.
In this embodiment, when determining the head rotation vector, the only information in the key point image that is generally needed is the key point feature information located on the head of the user, that is, the face feature information. In a possible implementation, determining the head rotation vector based on the face feature information in the key point image may specifically be: extracting the face feature information from the key point image, that is, integrating the feature information of each face key point on the user's head, where the face key points may specifically be the left eye key point, right eye key point, eyebrow key point, nose key point, mouth key point, left ear key point, right ear key point and chin key point; and calculating the head rotation vector based on the face feature information and the internal parameters of the head rotation vector recognition model.
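A minimal sketch of such a head rotation vector recognition model is given below, assuming the face feature information is encoded as the flattened image coordinates of the eight face key points listed above (8 key points × 2 coordinates = 16 features); the encoding, network shape and hyperparameters are illustrative, not taken from this embodiment:

```python
import numpy as np
from tensorflow import keras

def build_head_rotation_model(n_features=16):
    """Regression network from face key point features to (u, v, w)."""
    model = keras.Sequential([
        keras.layers.Input(shape=(n_features,)),
        keras.layers.Dense(64, activation="relu"),
        keras.layers.Dense(64, activation="relu"),
        keras.layers.Dense(3),  # outputs u, v, w in degrees
    ])
    model.compile(optimizer="adam", loss="mse")
    return model

# Stand-in training data: in the embodiment, the inputs come from the
# head rotation vector training images and the targets are their
# annotated head rotation vectors.
x_train = np.random.rand(100, 16)
y_train = np.random.uniform(-90, 90, (100, 3))

model = build_head_rotation_model()
model.fit(x_train, y_train, epochs=5, batch_size=16, verbose=0)
u, v, w = model.predict(x_train[:1], verbose=0)[0]
```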
In S602, a head pose is determined based on the head rotation vector.
In the present embodiment, in order to determine the head posture based on the head rotation vector, thresholds are set in advance for the rotation angle values of the head rotation vector in each direction. In a possible implementation, determining the head posture based on the head rotation vector may specifically be: determining the sub-head posture corresponding to each direction based on the rotation angle value in that direction, and identifying the set of sub-head postures corresponding to the three directions as the head posture. Exemplarily, let the head rotation vector be (u, v, w), and take the rotation angle value u corresponding to the x-axis as an example: two thresholds u1 and u2 are set for the x-axis, with -90 < u1 < 0 < u2 < 90. When u lies in the interval [-90, u1], the sub-head posture U corresponding to the x-axis is determined to be head-up (which may be represented by U = -1); when u lies in the interval (u1, u2), the sub-head posture U is determined to be normal (U = 0); and when u lies in the interval [u2, 90], the sub-head posture U is determined to be head-down (U = 1). It should be understood that for the rotation angle value v corresponding to the y-axis, two thresholds v1 and v2 are set and the sub-head posture V is determined, and for the rotation angle value w corresponding to the z-axis, two thresholds w1 and w2 are set and the sub-head posture W is determined, following the same steps as above, which are not repeated here. At this point, the set of sub-head postures corresponding to the three directions is identified as the head posture, i.e. the head posture is (U, V, W). It should be understood that u1, v1 and w1 may share one value k1, and u2, v2 and w2 may share another value k2. The possible values of the sub-head postures are shown in the following table:
[Table: possible values of the sub-head postures U, V and W; each takes the value -1, 0 (normal) or 1 according to whether its rotation angle falls below the lower threshold, between the two thresholds, or above the upper threshold]
It should be understood that each of the 3 sub-head postures of the head posture has 3 possible values, i.e., the head posture has 27 possible values in total.
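The per-axis threshold comparison can be sketched as follows; the concrete threshold values are assumptions for illustration:

```python
def sub_posture(angle, lower, upper):
    """Map one rotation angle to a sub-posture value of -1, 0 or 1,
    given per-axis thresholds with -90 < lower < 0 < upper < 90."""
    if angle <= lower:
        return -1  # e.g. head-up for the x-axis
    if angle < upper:
        return 0   # normal
    return 1       # e.g. head-down for the x-axis

# Assumed thresholds u1 and u2 for the x-axis.
U1, U2 = -15.0, 20.0
print(sub_posture(-22.5, U1, U2))  # -1, i.e. head-up
```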
In S603, a human body rotation vector is determined based on the human body feature information in the keypoint image.
In this embodiment, the determining the human body rotation vector based on the human body feature information in the keypoint image may specifically refer to the related description of S601, which is not described herein again.
In S604, a human body posture is determined based on the human body rotation vector.
In this embodiment, for determining the human body posture based on the human body rotation vector, reference may be made to the related description of S602, which is not repeated here. It should be noted that, because the human body feature information in the key point image includes only the left shoulder key point, the right shoulder key point and the neck key point, these three points can determine only one plane in space; that is, the human body posture is distinguished according to the rotation angle values about two coordinate axes. In a possible implementation, determining the human body posture based on the human body rotation vector may specifically be: determining the sub-human-body postures corresponding to the two directions based on the rotation angle values in those two directions, and identifying the set of the two sub-human-body postures as the human body posture. Referring to the above description of S602, the possible sub-human-body postures are shown in the following table:
[Table: possible values of the two sub-human-body postures; each takes the value -1, 0 (normal) or 1 according to the same threshold comparison as in S602]
It should be understood that each of the 2 sub-human-body postures of the human body posture has 3 possible values, i.e., the human body posture has 9 possible values in total.
In S605, a target sitting posture type is determined and output according to the head posture and the body posture.
In this embodiment, the target sitting posture type is a set over 5 dimensions, comprising the 3 sub-head postures and the 2 sub-body postures, each dimension having 3 possible values; that is, the target sitting posture type has 3 to the power of 5, i.e. 243, possible values. In each dimension, if the value of that dimension is not normal, the dimension corresponds to a characteristic index, and each characteristic index is associated with an index value; reference may be made to the related description of the second embodiment, which is not repeated here.
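A sketch of this composition step, with assumed dimension names, might look like the following; the mapping from dimensions to characteristic indexes follows the rule just stated (every non-normal dimension yields one characteristic index):

```python
def target_sitting_posture(head_pose, body_pose):
    """Combine 3 sub-head postures and 2 sub-body postures into one
    5-dimensional sitting posture type (3**5 = 243 possible values)
    and list the dimensions that require a characteristic index."""
    posture = (*head_pose, *body_pose)
    dims = ("head_x", "head_y", "head_z", "body_1", "body_2")  # assumed names
    feature_indexes = [d for d, v in zip(dims, posture) if v != 0]
    return posture, feature_indexes

posture, indexes = target_sitting_posture((-1, 0, 0), (1, 0))
print(posture)  # (-1, 0, 0, 1, 0)
print(indexes)  # ['head_x', 'body_1'] -> two characteristic indexes
```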
In this embodiment, the head posture and the human body posture are determined from the head rotation vector and the human body rotation vector, and the target sitting posture type is further determined from them, so that different sitting posture types can be distinguished, and a corresponding characteristic index and the index value corresponding to that characteristic index are configured for each sitting posture type, thereby quantifying the different sitting posture types.
Fig. 8 is a logic flow diagram of a sitting posture information generating method provided in a fifth embodiment of the present application. Referring to fig. 8, compared with the embodiment shown in fig. 1, step S104 of the sitting posture information generating method provided in this embodiment includes S801, which is detailed as follows:
Further, the generating the sitting posture information of the user based on the target sitting posture type and the index value comprises:
in S801, the index value is compared with an index threshold corresponding to the target sitting posture type, and the sitting posture information of the user is generated based on the comparison result.
Referring to fig. 8, the index value is compared with the index threshold: if the index value is smaller than the index threshold, the characteristic index is identified as a normal index; if the index value is greater than or equal to the index threshold, the characteristic index is identified as an abnormal index. The normal index or abnormal index can then be used as part of the sitting posture information, that is, packaged into the sitting posture information, so as to describe the user's specific sitting posture in more detail.
In this embodiment, the index threshold may be preset based on individual sitting posture habits, because each person has a different definition of the correct sitting posture, and the index threshold may satisfy the user's personalized definition of each sitting posture type. The index threshold may also be set by a supervisor of the user, such as a student's supervisor-a parent, so that the supervisor defines a set of criteria to strictly require the user.
Fig. 9 shows a flowchart of an implementation of a sitting posture information generating method provided in the sixth embodiment of the present application. Referring to fig. 9, with respect to any of the above embodiments, the method for generating sitting posture information provided by this embodiment includes S901 and/or S902, which are detailed as follows:
further, after generating the sitting posture information of the user based on the target sitting posture type and the index value, the method further includes:
in S901, the sitting posture information is sent to a user terminal to instruct the user terminal to generate a sitting posture report based on the sitting posture information.
In this embodiment, the sitting posture information is sent to the user terminal so that the user terminal displays it; that is, the user is informed of the sitting posture information and can thus learn his or her specific sitting posture. Specifically, the user terminal is instructed to generate a sitting posture report based on the sitting posture information, for example by filling the data in the sitting posture information into the corresponding positions in the page of the sitting posture report.
It will be appreciated that a supervision terminal of the user's supervisor may be substituted for the user terminal described above, to enable the supervisor to monitor the user's specific sitting posture.
In S902, the sitting posture information is sent to the intelligent terminal to instruct the intelligent terminal to adjust based on the sitting posture information, so as to adjust the sitting posture of the user.
In this embodiment, the intelligent terminal may be an intelligent desk and/or an intelligent chair. In a possible implementation, if the sitting posture information indicates that the user's sitting posture type is body leaning left, sending the sitting posture information to the intelligent terminal to instruct it to adjust the user's sitting posture based on the sitting posture information may specifically be: sending the index value related to the leftward body lean to the intelligent chair, which determines an adjustment amplitude from that index value and automatically rotates rightward by that amplitude, so as to straighten the user's body.
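A sketch of the chair-side logic under this implementation; the clamping step is an assumed safety limit, not something specified in this embodiment:

```python
def chair_correction(lean_left_deg, max_step_deg=5.0):
    """Turn a body-left-lean index value into a rightward correction,
    clamped to a per-adjustment maximum for comfort and safety."""
    step = min(abs(lean_left_deg), max_step_deg)
    return {"direction": "right", "degrees": step}

print(chair_correction(18.0))  # {'direction': 'right', 'degrees': 5.0}
```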
In this embodiment, the obtained sitting posture information is used as a reference to instruct the user terminal to generate a sitting posture report based on the sitting posture information, and/or instruct the intelligent terminal to adjust the sitting posture of the user based on the sitting posture information, so as to be applied to various different scenes to meet the needs of the user.
Fig. 10 is a schematic structural diagram of a sitting posture information generating apparatus provided in an embodiment of the present application, corresponding to the method described in the foregoing embodiment, and only the parts related to the embodiment of the present application are shown for convenience of description.
Referring to fig. 10, the generating device includes: the acquisition module is used for acquiring a first sitting posture image of a user through the first acquisition component and acquiring a second sitting posture image of the user through the second acquisition component; a target sitting posture type determination module for determining a target sitting posture type of the user based on the first sitting posture image and/or the second sitting posture image; the target sitting posture type is associated with at least one characteristic index; the characteristic index determining module is used for determining an index value corresponding to the characteristic index based on the first sitting posture image and the second sitting posture image; and the sitting posture information generating module is used for generating the sitting posture information of the user based on the target sitting posture type and the index value.
Optionally, the characteristic index determining module includes: an acquisition position acquiring module, configured to acquire a first acquisition position of the first acquisition component and a second acquisition position of the second acquisition component; a first deviation angle determining module, configured to determine, based on the first acquisition position, a first deviation angle corresponding to the characteristic index in the first sitting posture image; a second deviation angle determining module, configured to determine, based on the second acquisition position, a second deviation angle corresponding to the characteristic index in the second sitting posture image; and an index value calculating module, configured to calculate the index value corresponding to the characteristic index based on the first acquisition position, the second acquisition position, the first deviation angle and the second deviation angle.
Optionally, the index value calculation module includes: the reference connecting line determining module is used for marking and connecting the first acquisition position and the second acquisition position in a preset space coordinate system to obtain a reference connecting line; the spatial triangle construction module is used for constructing a spatial triangle in the spatial coordinate system according to the first deviation angle and the second deviation angle corresponding to the reference connecting line and the characteristic index; the characteristic index corresponds to the vertex of the space triangle; and the index value determining module is used for determining the index values corresponding to the characteristic indexes according to the characteristic values of the space triangle in a plurality of preset dimensions.
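The calculation these sub-modules perform can be illustrated with a 2-D simplification of the spatial triangle, assuming both deviation angles are measured from the reference connecting line (the full embodiment works in a three-dimensional space coordinate system):

```python
import numpy as np

def triangulate_vertex(p1, p2, angle1_deg, angle2_deg):
    """Locate the characteristic index (the feature key point) as the
    apex of a triangle whose base is the reference connecting line
    between the two acquisition positions p1 and p2."""
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    baseline = np.linalg.norm(p2 - p1)
    a1, a2 = np.radians(angle1_deg), np.radians(angle2_deg)
    apex = np.pi - a1 - a2
    # law of sines: the side from p1 to the apex is opposite the angle at p2
    d1 = baseline * np.sin(a2) / np.sin(apex)
    u = (p2 - p1) / baseline
    rot = np.array([[np.cos(a1), -np.sin(a1)],
                    [np.sin(a1),  np.cos(a1)]])  # rotate away from the base
    return p1 + d1 * (rot @ u)

# Cameras 1 m apart; deviation angles of 40 and 55 degrees.
vertex = triangulate_vertex((0.0, 0.0), (1.0, 0.0), 40.0, 55.0)
# Index values can then be read off as characteristic values of the
# triangle, e.g. the apex height above the reference connecting line.
```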
Optionally, the target sitting posture type determining module includes: the key point identification module is used for importing the first sitting posture image and/or the second sitting posture image into a key point identification model and outputting a key point image marked with a plurality of preset key points; the sitting posture type recognition module is used for importing the key point image into a sitting posture type recognition model and outputting a target sitting posture type; the sitting posture type recognition model is obtained by training based on a training image set corresponding to a plurality of preset candidate sitting posture types; the target sitting posture type is one of the preset candidate sitting posture types.
Optionally, the sitting posture type identifying module includes: the head rotation vector determining module is used for determining a head rotation vector based on the face feature information in the key point image; a head pose determination module to determine a head pose based on the head rotation vector; the human body rotation vector determining module is used for determining a human body rotation vector based on the human body characteristic information in the key point image; the human body posture determining module is used for determining the human body posture based on the human body rotation vector; and the target sitting posture type determining module is used for determining and outputting a target sitting posture type according to the head posture and the human body posture.
Optionally, the sitting posture information generating module includes: and the index threshold comparison module is used for comparing the index value with an index threshold corresponding to the target sitting posture type and generating the sitting posture information of the user based on a comparison result.
Optionally, the generating device further includes: the sitting posture information sending module is used for sending the sitting posture information to a user terminal so as to instruct the user terminal to generate a sitting posture report based on the sitting posture information; and/or sending the sitting posture information to an intelligent terminal to instruct the intelligent terminal to adjust based on the sitting posture information so as to adjust the sitting posture of the user.
It should be noted that the information interaction between the above-mentioned apparatuses, their execution processes and other details are based on the same concept as the method embodiments of the present application; for their specific functions and technical effects, reference may be made to the method embodiment section, which is not repeated here.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
Fig. 11 shows a schematic structural diagram of a terminal device according to an embodiment of the present application. As shown in fig. 11, the terminal device 11 of this embodiment includes: at least one processor 110 (only one processor is shown in fig. 11), a memory 111, and a computer program 112 stored in the memory 111 and executable on the at least one processor 110, the steps of any of the various method embodiments described above being implemented when the computer program 112 is executed by the processor 110.
The terminal device 11 may be a desktop computer, a notebook computer, a palmtop computer, a cloud server or another computing device. The terminal device may include, but is not limited to, the processor 110 and the memory 111. Those skilled in the art will appreciate that fig. 11 is only an example of the terminal device 11 and does not constitute a limitation on it; the terminal device may include more or fewer components than those shown, may combine some components, or may have different components; for example, it may further include an input/output device, a network access device, and the like.
The processor 110 may be a Central Processing Unit (CPU), or another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The memory 111 may, in some embodiments, be an internal storage unit of the terminal device 11, such as a hard disk or memory of the terminal device 11. In other embodiments, the memory 111 may also be an external storage device of the terminal device 11, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card or a Flash Card provided on the terminal device 11. Further, the memory 111 may include both an internal storage unit and an external storage device of the terminal device 11. The memory 111 is used for storing an operating system, application programs, a boot loader (BootLoader), data and other programs, such as the program code of the computer program. The memory 111 may also be used to temporarily store data that has been output or is to be output.
The embodiments of the present application further provide a computer-readable storage medium, where a computer program is stored, and when the computer program is executed by a processor, the computer program implements the steps in the above-mentioned method embodiments.
The embodiments of the present application further provide a computer program product which, when run on a mobile terminal, enables the mobile terminal to implement the steps in the above method embodiments.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, all or part of the processes in the methods of the above embodiments may be implemented by a computer program, which may be stored in a computer-readable storage medium and, when executed by a processor, implements the steps of the above method embodiments. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer-readable medium may include at least: any entity or device capable of carrying the computer program code to a photographing apparatus/terminal apparatus, a recording medium, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, and a software distribution medium, such as a USB flash disk, a removable hard disk, a magnetic disk or an optical disk. In certain jurisdictions, in accordance with legislation and patent practice, computer-readable media may not include electrical carrier signals or telecommunications signals.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other ways. For example, the above-described embodiments of the apparatus/terminal device are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (10)

1. A method for generating sitting posture information, comprising:
acquiring a first sitting posture image of a user through a first acquisition component, and acquiring a second sitting posture image of the user through a second acquisition component;
determining a target sitting posture type of the user based on the first sitting posture image and/or the second sitting posture image; the target sitting posture type is associated with at least one characteristic index; the characteristic index is used for describing the degree of abnormality of the target sitting posture type;
determining an index value corresponding to the characteristic index based on the first sitting posture image and the second sitting posture image;
generating sitting posture information of the user based on the target sitting posture type and the index value.
2. The method for generating sitting posture information according to claim 1, wherein the characteristic index is a feature key point existing in both the first sitting posture image and the second sitting posture image;
the determining an index value corresponding to the characteristic index based on the first sitting posture image and the second sitting posture image includes:
acquiring a first acquisition position of the first acquisition component and acquiring a second acquisition position of the second acquisition component;
determining a first deviation angle corresponding to the characteristic index in the first sitting posture image based on the first acquisition position;
determining a second deviation angle corresponding to the characteristic index in the second sitting posture image based on the second acquisition position;
and calculating an index value corresponding to the characteristic index based on the first acquisition position, the second acquisition position, the first deviation angle and the second deviation angle.
3. The method for generating sitting posture information according to claim 2, wherein the calculating an index value corresponding to the characteristic index based on the first collecting position, the second collecting position, the first deviation angle and the second deviation angle comprises:
marking and connecting the first acquisition position and the second acquisition position in a preset space coordinate system to obtain a reference connecting line;
constructing a space triangle in the space coordinate system according to the first deviation angle and the second deviation angle corresponding to the reference connecting line and the characteristic index; the spatial triangle takes the characteristic index as a vertex and the reference connecting line as a side, and the reference connecting line is opposite to the characteristic index in the spatial triangle;
and determining index values corresponding to the characteristic indexes according to the characteristic values of the space triangle in a plurality of preset dimensions.
4. The method of generating sitting posture information of claim 1, wherein the determining a target sitting posture type of the user based on the first sitting posture image and/or the second sitting posture image comprises:
importing the first sitting posture image and/or the second sitting posture image into a key point recognition model, and outputting a key point image marked with a plurality of preset key points;
importing the key point image into a sitting posture type recognition model, and outputting a target sitting posture type; the sitting posture type recognition model is obtained by training based on a training image set corresponding to a plurality of preset candidate sitting posture types; the target sitting posture type is one of the preset candidate sitting posture types.
5. The sitting posture information generating method as claimed in claim 4, wherein the importing the key point image into a sitting posture type recognition model and outputting a target sitting posture type comprises:
determining a head rotation vector based on the face feature information in the key point image;
determining a head pose based on the head rotation vector;
determining a human body rotation vector based on human body feature information in the key point image;
determining a human body posture based on the human body rotation vector;
and determining and outputting a target sitting posture type according to the head posture and the human body posture.
6. The method for generating sitting posture information according to claim 1, wherein the generating sitting posture information of the user based on the target sitting posture type and the index value comprises:
and comparing the index value with an index threshold corresponding to the target sitting posture type, and generating the sitting posture information of the user based on a comparison result.
7. The method for generating sitting posture information according to any one of claims 1 to 6, wherein after the generating of the sitting posture information of the user based on the target sitting posture type and the index value, further comprises:
sending the sitting posture information to a user terminal to instruct the user terminal to generate a sitting posture report based on the sitting posture information; and/or
sending the sitting posture information to an intelligent terminal to instruct the intelligent terminal to adjust based on the sitting posture information, so as to adjust the sitting posture of the user.
8. A sitting posture information generating apparatus, comprising:
the acquisition module is used for acquiring a first sitting posture image of a user through the first acquisition component and acquiring a second sitting posture image of the user through the second acquisition component;
a target sitting posture type determination module for determining a target sitting posture type of the user based on the first sitting posture image and/or the second sitting posture image; the target sitting posture type is associated with at least one characteristic index;
the characteristic index determining module is used for determining an index value corresponding to the characteristic index based on the first sitting posture image and the second sitting posture image;
and the sitting posture information generating module is used for generating the sitting posture information of the user based on the target sitting posture type and the index value.
9. A terminal device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the method according to any of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1 to 7.
CN202110047353.7A 2021-01-14 2021-01-14 Sitting posture information generation method and device, terminal equipment and storage medium Pending CN112712053A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110047353.7A CN112712053A (en) 2021-01-14 2021-01-14 Sitting posture information generation method and device, terminal equipment and storage medium

Publications (1)

Publication Number Publication Date
CN112712053A true CN112712053A (en) 2021-04-27

Family

ID=75548992

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110047353.7A Pending CN112712053A (en) 2021-01-14 2021-01-14 Sitting posture information generation method and device, terminal equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112712053A (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111414780A (en) * 2019-01-04 2020-07-14 卓望数码技术(深圳)有限公司 Sitting posture real-time intelligent distinguishing method, system, equipment and storage medium
WO2020199693A1 (en) * 2019-03-29 2020-10-08 中国科学院深圳先进技术研究院 Large-pose face recognition method and apparatus, and device
CN111931640A (en) * 2020-08-07 2020-11-13 上海商汤临港智能科技有限公司 Abnormal sitting posture identification method and device, electronic equipment and storage medium
CN112101124A (en) * 2020-08-20 2020-12-18 深圳数联天下智能科技有限公司 Sitting posture detection method and device

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
ZHANG Hongyu; LIU Wei; XU Wei; WANG Hui: "Multi-learner posture recognition based on depth images", Computer Science, no. 09 *
ZENG Xing; LUO Wusheng; SUN Bei; LU Qin; LIU Taocheng: "Implementation of an embedded human sitting posture detection system based on depth images", Computer Measurement & Control, no. 09 *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113297938A (en) * 2021-05-17 2021-08-24 深圳市优必选科技股份有限公司 Sitting posture monitoring method and device, electronic equipment and storage medium
CN113657271A (en) * 2021-08-17 2021-11-16 上海科技大学 Sitting posture detection method and system combining quantifiable factors and non-quantifiable factors for judgment
CN113657271B (en) * 2021-08-17 2023-10-03 上海科技大学 Sitting posture detection method and system combining quantifiable factors and unquantifiable factor judgment
CN114596633A (en) * 2022-03-04 2022-06-07 海信集团控股股份有限公司 Sitting posture detection method and terminal
CN115100833A (en) * 2022-06-23 2022-09-23 安徽省宜尚智能家居有限公司 Intelligent conference table control system
CN116884083A (en) * 2023-06-21 2023-10-13 圣奥科技股份有限公司 Sitting posture detection method, medium and equipment based on key points of human body

Similar Documents

Publication Publication Date Title
CN112712053A (en) Sitting posture information generation method and device, terminal equipment and storage medium
CN111414780B (en) Real-time intelligent sitting posture distinguishing method, system, equipment and storage medium
US10095030B2 (en) Shape recognition device, shape recognition program, and shape recognition method
CN107194361B (en) Two-dimensional posture detection method and device
US10319104B2 (en) Method and system for determining datum plane
CN112101124B (en) Sitting posture detection method and device
CN109343700B (en) Eye movement control calibration data acquisition method and device
CN112101123A (en) Attention detection method and device
KR20140086463A (en) Image transformation apparatus and the method
WO2020020022A1 (en) Method for visual recognition and system thereof
CN110472460A (en) Face image processing process and device
CN104036169A (en) Biometric authentication method and biometric authentication device
WO2021164678A1 (en) Automatic iris capturing method and apparatus, computer-readable storage medium, and computer device
CN103809741A (en) Electronic device and method for determining depth of 3D object image in 3D environment image
CN111163303A (en) Image display method, device, terminal and storage medium
CN110780742A (en) Eyeball tracking processing method and related device
CN110503068A (en) Gaze estimation method, terminal and storage medium
EP3699808B1 (en) Facial image detection method and terminal device
CN108628442A (en) A kind of information cuing method, device and electronic equipment
CN110222651A (en) A kind of human face posture detection method, device, terminal device and readable storage medium storing program for executing
CN106295288B (en) A kind of information calibration method and device
KR20190079503A (en) Apparatus and method for registering face posture for face recognition
US20230020578A1 (en) Systems and methods for vision test and uses thereof
CN104688177A (en) Terminal-based pupil distance measuring method, pupil distance measuring device, server and pupil distance measuring system
CN112733740A (en) Attention information generation method and device, terminal equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination