CN114612939A - Sitting posture identification method and device based on TOF camera and intelligent desk lamp - Google Patents

Sitting posture identification method and device based on TOF camera and intelligent desk lamp

Info

Publication number
CN114612939A
Authority
CN
China
Prior art keywords
image
sitting posture
target
target image
depth
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210299958.XA
Other languages
Chinese (zh)
Other versions
CN114612939B (en)
Inventor
潘颢文
张勇
肖澎臻
赵荣杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhuhai Shixi Technology Co Ltd
Original Assignee
Zhuhai Shixi Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhuhai Shixi Technology Co Ltd filed Critical Zhuhai Shixi Technology Co Ltd
Priority to CN202210299958.XA priority Critical patent/CN114612939B/en
Publication of CN114612939A publication Critical patent/CN114612939A/en
Application granted granted Critical
Publication of CN114612939B publication Critical patent/CN114612939B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses a sitting posture identification method and device based on a TOF camera, and an intelligent desk lamp, which realize sitting posture recognition through a depth image, reduce the computing power required by the recognition algorithm, and improve the real-time performance of the recognition result. The method comprises the following steps: acquiring a depth image of a human body sitting posture; performing region growing with the centroid of the depth image as the seed point to obtain a target image; fitting the effective points in the target image into a target plane; determining a forward-backward tilt parameter from the normal vector direction of the target plane; determining a left-right tilt parameter from the horizontal coordinate difference between the vertex and the centroid in the target image; and carrying out sitting posture recognition according to the forward-backward and left-right tilt parameters.

Description

Sitting posture identification method and device based on TOF camera and intelligent desk lamp
Technical Field
The application relates to the technical field of image processing, in particular to a sitting posture identification method and device based on a TOF camera and an intelligent desk lamp.
Background
Different sitting postures express different states of the user. Sitting posture recognition technology can identify the type and state of a person's sitting posture and feed the result back to the user, thereby enabling sitting posture intervention.
In the prior art there are many sitting posture identification methods. Earlier methods were mainly sensor-based; they have the advantage of high accuracy, but sensor installation is troublesome, the cost is high, and users easily feel uncomfortable. With the development of technology, machine learning methods have become dominant in recent years, but they consume large amounts of manpower and material resources, their detection accuracy depends on the quality of the training set, irrelevant samples can cause misjudgment, and achieving high precision requires more convolutional layers, which means higher computing power consumption, longer running time, and poor real-time performance.
In summary, the sitting posture recognition methods in the prior art suffer from high computing power consumption and inconvenient use.
Disclosure of Invention
The application provides a sitting posture identification method and device based on a TOF camera, and an intelligent desk lamp, which are used for realizing sitting posture recognition through a depth image, reducing the computing power required by the recognition algorithm, and improving the real-time performance of the recognition result.
The application provides a sitting posture identification method based on a TOF camera in a first aspect, and the method comprises the following steps:
acquiring a depth image of a human body sitting posture;
performing region growing by taking the centroid of the depth image as a seed point to obtain a target image;
fitting the effective points in the target image into a target plane;
determining a forward-backward tilt parameter through the normal vector direction of the target plane;
determining a left-right inclination parameter according to the horizontal coordinate difference between the vertex and the centroid in the target image;
and carrying out sitting posture identification according to the forward and backward inclination parameters and the leftward and rightward inclination parameters.
Optionally, after performing region growing by using the centroid of the depth image as a seed point to obtain a target image, and before fitting the effective points in the target image to a target plane, the method further includes:
and carrying out erosion processing on the target image until the vertex ordinate in the target image changes suddenly.
Optionally, after the erosion processing is performed on the target image until the vertex ordinate in the target image changes abruptly, the method further includes:
judging whether the human body in the target image is in an optimal identification area or not according to the abscissa of the vertex;
the fitting of the effective points in the target image to a target plane comprises:
and when the human body in the target image is in the optimal recognition area, fitting the effective points in the target image into a target plane.
Optionally, after the determining whether the human body in the target image is in the optimal recognition area according to the abscissa of the vertex, the method further includes:
if not, prompting the user to adjust according to the abscissa of the vertex.
Optionally, the fitting the effective points in the target image to the target plane includes:
fitting the effective points in the target image by a least square method to obtain a fitting plane;
calculating the variance and standard deviation of the fitting plane and the depth value of the target image;
removing outliers in the target image, wherein the outliers are points exceeding a preset multiple of the standard deviation;
and fitting the effective points in the target image again until a preset condition is met to obtain a target plane.
Optionally, the preset condition is:
no outlier is found in the target image;
or,
the number of fits reaches the maximum number of fits.
Optionally, before the sitting posture identification is performed according to the forward-backward inclination parameter and the leftward-rightward inclination parameter, the method further includes:
determining an occlusion parameter according to the features of the depth image;
the sitting posture identification according to the forward-backward inclination parameter and the leftward-rightward inclination parameter comprises the following steps:
and carrying out sitting posture identification according to the forward and backward inclination parameters, the leftward and rightward inclination parameters and the shielding parameters.
Optionally, the recognizing the sitting posture according to the forward-backward inclination parameter, the leftward-rightward inclination parameter and the shielding parameter includes:
converting the forward-backward tilt parameter, the leftward-rightward tilt parameter and the shielding parameter into confidence parameters;
and determining the sitting posture type of the user according to the confidence parameter.
Optionally, before the sitting posture identification is performed according to the forward-backward inclination parameter and the leftward-rightward inclination parameter, the method further includes:
judging whether occlusion exists in the depth image;
the sitting posture identification according to the forward-backward inclination parameter and the leftward-rightward inclination parameter comprises the following steps:
if not, carrying out sitting posture identification according to the forward and backward leaning parameters and the leftward and rightward leaning parameters.
Optionally, the determining whether an occlusion exists in the depth image includes:
extracting a target depth interval image from the depth image, wherein the target depth interval image is a depth interval image in which a human body is located;
determining a subject region in the target depth interval image;
calculating the dispersion degree of the effective points of the main body area through a target formula to obtain a first calculation result;
and judging whether occlusion exists in the depth image according to the first calculation result.
A second aspect of the present application provides an intelligent desk lamp, which performs the TOF camera-based sitting posture identification method of the first aspect or any optional implementation of the first aspect.
The third aspect of the present application provides a sitting posture recognition apparatus based on a TOF camera, including:
the acquisition unit is used for acquiring a depth image of the human body sitting posture;
the processing unit is used for carrying out region growth by taking the mass center of the depth image as a seed point to obtain a target image;
the fitting unit is used for fitting the effective points in the target image into a target plane;
the first determining unit is used for determining a forward-backward inclination parameter through the normal vector direction of the target plane;
the second determining unit is used for determining a left-right inclination parameter through the horizontal coordinate difference of the vertex and the centroid in the target image;
and the recognition unit is used for recognizing the sitting posture according to the forward-backward inclination parameter and the leftward-rightward inclination parameter.
Optionally, the apparatus further comprises:
and the erosion unit is used for carrying out erosion processing on the target image until the vertex ordinate in the target image changes suddenly.
Optionally, the apparatus further comprises:
the first judgment unit is used for judging whether the human body in the target image is in the optimal identification area or not according to the abscissa of the vertex;
the fitting unit is specifically configured to:
and when the judgment result of the first judgment unit is yes, fitting the effective points in the target image into a target plane.
Optionally, the apparatus further comprises:
and the prompting unit is used for prompting a user to adjust according to the abscissa of the vertex when the judgment result of the first judging unit is negative.
Optionally, the fitting unit is further specifically configured to:
fitting the effective points in the target image by a least square method to obtain a fitting plane;
calculating the variance and standard deviation of the fitting plane and the depth value of the target image;
removing outliers in the target image, wherein the outliers are points exceeding a preset multiple of the standard deviation;
and fitting the effective points in the target image again until a preset condition is met to obtain a target plane.
Optionally, the preset condition is:
no outlier is found in the target image;
or,
the number of fits reaches the maximum number of fits.
Optionally, the apparatus further comprises:
a third determining unit, configured to determine an occlusion parameter according to a feature of the depth image;
the identification unit is specifically configured to:
and carrying out sitting posture identification according to the forward and backward inclination parameters, the leftward and rightward inclination parameters and the shielding parameters.
Optionally, the identification unit is further specifically configured to:
converting the forward-backward tilt parameter, the leftward-rightward tilt parameter and the shielding parameter into confidence parameters;
and determining the sitting posture type of the user according to the confidence parameter.
Optionally, the apparatus further comprises:
a second judging unit, configured to judge whether there is occlusion in the depth image;
the identification unit is specifically configured to:
and when the judgment result of the second judgment unit is negative, carrying out sitting posture identification according to the forward and backward inclination parameter and the leftward and rightward inclination parameter.
Optionally, the second judging unit is specifically configured to:
extracting a target depth interval image from the depth image, wherein the target depth interval image is a depth interval image in which a human body is located;
determining a subject region in the target depth interval image;
calculating the dispersion degree of the effective points of the main body area through a target formula to obtain a first calculation result;
and judging whether occlusion exists in the depth image according to the first calculation result.
The present application in a fourth aspect provides a sitting posture identifying apparatus based on a TOF camera, the apparatus comprising:
the device comprises a processor, a memory, an input and output unit and a bus;
the processor is connected with the memory, the input and output unit and the bus;
the memory holds a program that the processor calls to perform the TOF camera-based sitting posture identification method of the first aspect or any optional implementation of the first aspect.
A fifth aspect of the present application provides a computer-readable storage medium having a program stored thereon, where the program, when executed on a computer, performs the TOF camera-based sitting posture identification method of the first aspect or any optional implementation of the first aspect.
According to the technical scheme, the method has the following advantages:
according to the sitting posture identification method based on the TOF camera, the human body fitting in the depth image is simplified into a plane model (target plane), the forward and backward inclination degree of the human body is quantified through the normal vector pointing of the target plane, and the left and right inclination degree of the human body is quantified through the horizontal coordinate difference of the top point and the mass center in the depth image, so that the human body sitting posture identification can be realized. According to the sitting posture identification method based on the TOF camera, a huge training set is not needed for establishing a model, an excessively deep convolution network is not needed, and a user is not needed to preset a standard sitting posture in advance, so that the calculation power consumption is low, the user is convenient to use, and the real-time performance is high.
Drawings
In order to more clearly illustrate the technical solutions in the present application, the drawings needed to be used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings without creative efforts.
FIG. 1 is a schematic flow chart illustrating an embodiment of a sitting posture recognition method based on a TOF camera according to the present application;
fig. 2 is a schematic diagram of a terminal placement in the sitting posture recognition method based on a TOF camera provided by the present application;
FIG. 3 is a schematic diagram illustrating normal vector pointing of a target plane in the sitting posture recognition method based on a TOF camera according to the present application;
FIG. 4 is a schematic flow chart illustrating another embodiment of a sitting posture recognition method based on a TOF camera according to the present application;
FIG. 5 is a schematic diagram illustrating a process of erosion processing in the sitting posture identification method based on TOF camera provided by the present application;
FIG. 6 is a schematic flow chart illustrating a sitting posture recognition method based on a TOF camera according to another embodiment of the present disclosure;
FIG. 7 is a schematic structural diagram of an embodiment of a sitting posture recognition apparatus based on a TOF camera provided by the present application;
FIG. 8 is a schematic structural diagram of another embodiment of a sitting posture identifying apparatus based on a TOF camera provided by the present application;
fig. 9 is a schematic structural diagram of a physical structure of a sitting posture recognition apparatus based on a TOF camera provided in the present application.
Detailed Description
The application provides a sitting posture identification method and device based on a TOF camera and an intelligent desk lamp, which are used for realizing sitting posture identification through a depth image, reducing the calculation force required by an identification algorithm and improving the real-time property of an identification result.
It should be noted that the sitting posture identification method based on the TOF camera provided by the present application may be applied to a terminal or to a server. The terminal may be, for example, a smart desk lamp, a depth camera, a smart phone, a computer, a tablet computer, a smart television, a smart watch, a portable computer terminal, or a fixed terminal such as a desktop computer. For convenience of explanation, the terminal is taken as the execution subject in the following description.
Referring to fig. 1, fig. 1 is a diagram illustrating an embodiment of a sitting posture recognition method based on a TOF camera according to the present invention, the method includes:
101. acquiring a depth image of a human body sitting posture;
in the embodiment, the sitting posture recognition is performed by using the depth image, and when the terminal performs the sitting posture recognition, the terminal shoots the sitting posture of the user by using the depth camera to obtain the depth image of the sitting posture of the user. Specifically, the terminal acquires a stream from the depth camera and performs depth gating to obtain a depth image of the human body sitting posture.
In some specific embodiments, the terminal is, for example, an intelligent desk lamp placed on a desktop during use. The depth lens carried on the desk lamp is aimed mainly at the upper half of the user's body, with the specific placement shown in fig. 2; the distance between the depth lens and the human body is about 700-1000 mm, and the obtained depth image therefore mainly contains the upper half of the human body.
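For illustration, a minimal sketch of the depth gating step is given below; the `depth_gate` helper name is hypothetical, and the sketch assumes the TOF stream delivers a per-pixel depth map in millimetres, with the 700-1000 mm working range taken from the placement above.

```python
import numpy as np

def depth_gate(depth, near=700, far=1000):
    """Keep only pixels inside the working depth range; zero out the rest.

    `depth` is assumed to be a uint16 array of distances in millimetres,
    as a typical TOF stream delivers; the 700-1000 mm default follows
    the desk-lamp placement described above.
    """
    gated = depth.copy()
    gated[(depth < near) | (depth > far)] = 0  # 0 marks an invalid point
    return gated
```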
102. Performing region growth by taking the centroid of the depth image as a seed point to obtain a target image;
region growing is a method for gathering pixel points according to similar properties of pixels in the same object region, and from an initial region (centroid), adjacent pixels with the same properties are merged into the current region so as to gradually increase the region until there is no point that can be merged. It should be noted that the centroid of the depth image refers to an average value of coordinates of all valid points in the depth image, and a valid point in this application refers to a point whose pixel value is not 0.
In most sitting posture recognition scenes, for example when sitting postures are recognized by a desk lamp, a person sitting and facing the lamp occupies most of the field of view of the depth camera. The centroid of the image is therefore selected as the seed point for region growing: the region where the person is located is grown and most background points are removed, yielding a saliency map of the human sitting posture, namely the target image, as the sketch below illustrates.
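The following is a minimal sketch of this step. The similarity criterion between neighbouring pixels is not specified in the text, so a depth tolerance in millimetres (`depth_tol`) is assumed here; the sketch also assumes the centroid itself falls on a valid point.

```python
from collections import deque
import numpy as np

def grow_from_centroid(depth, depth_tol=50):
    """Region-grow the human-body region from the centroid of all valid points."""
    ys, xs = np.nonzero(depth)                # valid points: pixel value != 0
    cy, cx = int(ys.mean()), int(xs.mean())   # centroid = mean of coordinates
    h, w = depth.shape
    mask = np.zeros(depth.shape, dtype=bool)
    queue = deque([(cy, cx)])
    mask[cy, cx] = True
    while queue:                              # merge similar 4-neighbours
        y, x = queue.popleft()
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ny, nx = y + dy, x + dx
            if (0 <= ny < h and 0 <= nx < w and not mask[ny, nx]
                    and depth[ny, nx] != 0
                    and abs(int(depth[ny, nx]) - int(depth[y, x])) <= depth_tol):
                mask[ny, nx] = True
                queue.append((ny, nx))
    return np.where(mask, depth, 0)           # saliency map = target image
```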
103. Fitting the effective points in the target image into a target plane;
after the terminal obtains the target image, the human body in the target image is simplified into a plane model, namely, the effective points in the target image are fitted into a target plane, so that the sitting posture of the human body is reflected through the characteristics of the target plane.
Specifically, the terminal fits the effective points in the target image to a target plane by the least squares method; the target plane equation is Ax + By + C = z, where x and y are the horizontal and vertical coordinates, z is the corresponding depth value, and A, B, C are the equation coefficients to be fitted.
104. Determining a forward-backward tilt parameter through the normal vector direction of the target plane;
As shown in fig. 3, the normal vector of the target plane is perpendicular to the target plane. When the human body leans forward, the target plane fitted in step 103 also leans forward; its normal vector then points downward, and its dot product with the vertical upward unit vector is less than 0. Conversely, when the human body leans backward, the target plane leans backward; its normal vector points upward, and the dot product with the vertical upward unit vector is greater than 0. Whether the sitting posture leans forward or backward can therefore be determined from the normal vector of the target plane, and the degree of lean can further be determined from the cosine of the angle between the normal vector and the vertical unit vector.
Specifically, after the fitting is finished, the terminal calculates the cosine cos θ of the angle between the normal vector n = (A, −B, −1) of the target plane and the vertical unit vector n0 = (0, 1, 0), given by formula one:
cos θ = (n · n0)/(|n| · |n0|) = −B/√(A² + B² + 1)
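A sketch of steps 103-104, assuming the target image is a NumPy depth map whose non-zero pixels are the valid points; the function name is illustrative.

```python
import numpy as np

def fit_plane_and_pitch(target):
    """Least-squares fit of Ax + By + C = z over the valid points, then
    the cosine between the plane normal n = (A, -B, -1) and the vertical
    unit vector n0 = (0, 1, 0), as in formula one."""
    ys, xs = np.nonzero(target)
    zs = target[ys, xs].astype(np.float64)
    M = np.column_stack([xs, ys, np.ones(len(xs))])
    (A, B, C), *_ = np.linalg.lstsq(M, zs, rcond=None)
    cos_theta = -B / np.sqrt(A * A + B * B + 1.0)
    return (A, B, C), cos_theta
```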
105. determining a left-right tilt parameter through the horizontal coordinate difference between the vertex and the centroid in the target image;
The vertex of the target image reflects the position of the highest point of the human body. The degree of left or right lean can be reflected by calculating the horizontal coordinate difference dis_x between the vertex of the target image and its centroid: the larger dis_x is, the larger the degree of left-right lean.
106. And carrying out sitting posture identification according to the forward and backward inclination parameters and the leftward and rightward inclination parameters.
Through the forward-backward tilt parameter and the left-right tilt parameter determined in steps 104 and 105, whether the sitting posture leans forward, backward, left or right, and to what degree, can be judged, thereby realizing sitting posture recognition. Concretely, the specific sitting posture can be determined by combining the two tilt parameters; further, the terminal can compare the two parameters with those of standard sitting posture samples to determine whether the current sitting posture is correct, and accordingly send a sitting posture adjustment prompt to the user.
In some embodiments, standard values of the tilt parameters may be measured while the user sits upright, and the sitting posture category determined by comparing the current parameters against them. For example, if cos θ < −0.3 the posture is considered a forward lean, and if cos θ > 0.7 a backward lean; if dis_x > 20 it is considered a left lean, and if dis_x < −20 a right lean.
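As a sketch, the example thresholds above translate directly into a decision rule; the cut-offs are the sample values quoted in the text, not universal constants.

```python
def classify_posture(cos_theta, dis_x):
    """Threshold rule using the example cut-offs quoted above."""
    if cos_theta < -0.3:
        return "forward lean"
    if cos_theta > 0.7:
        return "backward lean"
    if dis_x > 20:
        return "left lean"
    if dis_x < -20:
        return "right lean"
    return "normal sitting posture"
```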
In this embodiment, the human body in the depth image is fitted and simplified into a plane model (the target plane); the degree of forward or backward lean is quantified by the direction of the target plane's normal vector, and the degree of left or right lean by the horizontal coordinate difference between the vertex and the centroid in the depth image, so that human sitting posture recognition is realized. This TOF camera-based sitting posture identification method needs no huge training set to build a model, no overly deep convolutional network, and no standard sitting posture preset by the user in advance; it therefore consumes little computing power, is convenient to use, and offers strong real-time performance.
In the application, because a depth image carries no texture detail, interference such as lens occlusion caused by the user's clothing or other factors easily leads to misrecognition or even recognition failure, so the interference caused by occlusion must be handled when the depth image is used for sitting posture recognition. The application provides two different ways to remove the interference that occlusion brings to the recognition result: the first treats occlusion as one of the sitting postures and recognizes it together with the other postures; the second judges whether occlusion exists before recognition, and performs sitting posture recognition only when no occlusion is confirmed. The two are described separately below:
First, treating occlusion as one of the sitting postures:
referring to fig. 4, fig. 4 is a diagram illustrating another embodiment of a sitting posture recognition method based on a TOF camera according to the present invention, the method includes:
401. acquiring a depth image of a human body sitting posture;
402. performing region growth by taking the mass center of the depth image as a seed point to obtain a target image;
in this embodiment, steps 401 to 402 are similar to steps 101 to 102 of the previous embodiment, and are not described again here.
403. Carrying out erosion processing on the target image until the vertex ordinate in the target image changes suddenly;
the terminal firstly carries out median filtering on the target image to remove noise in the target image and then carries out erosion processing on the target image.
The image erosion uses a gradually growing erosion kernel: the kernel width is fixed at 10 and its length grows from 10, and the erosion is repeated until the vertex ordinate changes abruptly. The reason is that the initial kernel is small, so the head is not eroded away at first; as the kernel grows, the neck, which is thinner than the head, is eroded through, the disconnected head region is then removed, and the vertex ordinate changes abruptly. Here an abrupt change means that the new vertex ordinate differs from the previous value by more than a preset value; the preset value must be set according to the resolution of the depth image and the height of the human head, and is not limited here. After erosion the head is removed from the target image and the junction of the neck and shoulders is left as the image vertex; since sitting posture recognition mainly concerns the upper half of the body, eroding away the head eliminates the interference that offset or rotation of the user's head would otherwise cause to the recognition result.
Referring to fig. 5, during the erosion process the neck, which has the smallest width, is eroded through first and the head region is then removed. Before the head is removed the vertex is a point on the head; once it is removed, the vertex ordinate changes abruptly and falls to the 'bottom of the neck', at which point the erosion ends.
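A minimal sketch of this erosion loop using OpenCV follows; the jump value (here `jump=15` pixels) and the kernel orientation (length taken as the vertical dimension) are assumptions, since the text leaves both to be tuned against image resolution and head size.

```python
import cv2
import numpy as np

def erode_until_vertex_jump(target, jump=15, max_len=60):
    """Erode with a growing kernel (width fixed at 10, length growing
    from 10) until the vertex ordinate changes abruptly."""
    binary = (target > 0).astype(np.uint8)
    prev_y = int(np.nonzero(binary.any(axis=1))[0][0])   # topmost valid row
    eroded = binary
    for length in range(10, max_len + 1):
        kernel = np.ones((length, 10), np.uint8)         # length x width
        eroded = cv2.erode(binary, kernel)
        rows = np.nonzero(eroded.any(axis=1))[0]
        if rows.size == 0:
            break                                        # everything eroded away
        y = int(rows[0])
        if y - prev_y > jump:    # head removed: vertex falls to the neck base
            break
        prev_y = y
    return eroded
```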
404. Judging whether the human body in the target image is in the optimal recognition area or not according to the abscissa of the vertex; if not, executing step 405, and if so, executing step 406;
A more accurate sitting posture recognition result is obtained if the human body is located in the middle of the image. After the terminal erodes the target image in step 403, the neck position is left as the image vertex, so the terminal can judge from the abscissa of the vertex whether the human body is in the optimal recognition area. If it is, the terminal proceeds to step 406 and continues the sitting posture recognition; if not, it performs step 405 to prompt the user to make an adjustment.
It should be noted that the optimal recognition area is the central area of the image, allowing a deviation of 5% of the image width to either side; for a 160 × 120 image, for example, the central area covers abscissas 72-88.
405. Prompting a user to adjust according to the abscissa of the vertex;
If the human body in the target image is not in the optimal recognition area, i.e. the abscissa of the vertex is not at the center of the image, the terminal prompts the user to adjust according to the offset of the abscissa.
Specifically, taking a 160 × 120 target image as an example, if the vertex abscissa is greater than 88 the user is prompted to move the seat to the right or move the lens to the left, and if it is less than 72, to move the seat to the left or move the lens to the right.
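For illustration, the central-band check and the prompts above can be sketched as follows; the helper name and the ±5% band computation are assumptions generalising the 160 × 120 example.

```python
def placement_hint(vertex_x, width=160):
    """Return an adjustment prompt, or None when the vertex abscissa lies
    in the central band (columns 72-88 for a 160-pixel-wide image)."""
    band = width // 20                        # 5% of the image width
    lo, hi = width // 2 - band, width // 2 + band
    if vertex_x > hi:
        return "move the seat right or the lens left"
    if vertex_x < lo:
        return "move the seat left or the lens right"
    return None                               # inside the optimal area
```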
406. Fitting the effective points in the target image into a target plane;
If the human body in the target image is in the optimal recognition area, sitting posture recognition formally begins. The terminal simplifies the human body in the target image into a plane model, i.e. fits the effective points in the target image into a target plane.
Specifically, the terminal fits the effective points of the target image into a plane by the least squares method. The plane equation is Ax + By + C = z (x and y are the horizontal and vertical coordinates, z is the corresponding depth value, A, B, C are the equation coefficients to be fitted), but in practice the points on the image do not lie exactly on a plane, so the variance expression is introduced: var = Σ(Ax + By + C − z)².
A, B, C then need to be calculated such that the variance is as small as possible, i.e. the fitted plane lies as close as possible to the original coordinates; setting the partial derivatives to 0 gives:
∂var/∂A = 2Σ(Ax + By + C − z)x = 0
∂var/∂B = 2Σ(Ax + By + C − z)y = 0
∂var/∂C = 2Σ(Ax + By + C − z) = 0
Substituting the coordinates and solving this system of equations yields the values of A, B, C, i.e. the fitted plane.
The terminal then calculates the variance and standard deviation of the depth difference between the fitted plane and the target image; specifically, the fitted depth is subtracted from the target image pixel by pixel and the variance of the resulting depth difference is obtained. Outliers in the target image exceeding a preset multiple of the standard deviation are then removed and the fitting is repeated, until no outliers remain or the number of fits reaches the maximum, and the final fitted plane is determined as the target plane.
In some specific embodiments, the preset multiple may be set to 3 or 5, preferably 5. The maximum number of fits is set as required: each fit takes time and too many fits would delay the result, so the maximum is preferably set to 10.
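The refit loop can be sketched as below, with the preferred values k = 5 (the preset multiple of the standard deviation) and 10 maximum fits; the function name is illustrative.

```python
import numpy as np

def refine_plane(target, k=5.0, max_fits=10):
    """Refit the plane, dropping outliers beyond k standard deviations of
    the depth difference, until none remain or max_fits is reached."""
    work = target.astype(np.float64)
    coef = np.zeros(3)
    for _ in range(max_fits):
        ys, xs = np.nonzero(work)
        zs = work[ys, xs]
        M = np.column_stack([xs, ys, np.ones(len(xs))])
        coef, *_ = np.linalg.lstsq(M, zs, rcond=None)
        diff = M @ coef - zs                  # fitted depth minus measured
        outliers = np.abs(diff) > k * diff.std()
        if not outliers.any():
            break                             # preset condition met
        work[ys[outliers], xs[outliers]] = 0  # remove outliers, fit again
    return coef                               # (A, B, C) of the target plane
```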
407. Determining a forward-backward tilt parameter through the normal vector direction of the target plane;
As shown in fig. 3, the normal vector of the target plane is perpendicular to the target plane. When the human body leans forward, the target plane fitted in step 406 also leans forward; its normal vector then points downward, and its dot product with the vertical upward unit vector is less than 0. Conversely, when the human body leans backward, the target plane leans backward; its normal vector points upward, and the dot product with the vertical upward unit vector is greater than 0. Whether the sitting posture leans forward or backward can therefore be determined from the normal vector of the target plane, and the degree of lean can further be determined from the cosine of the angle between the normal vector and the vertical unit vector.
After the target plane fitting is finished, the terminal calculates the cosine of the angle between the normal vector n = (A, −B, −1) of the target plane and the vertical unit vector n0 = (0, 1, 0):
cos θ = (n · n0)/(|n| · |n0|) = −B/√(A² + B² + 1)
408. determining a left-right tilt parameter according to the horizontal coordinate difference between the vertex and the centroid in the target image;
After the target image is eroded, its vertex represents the highest point of the human trunk, i.e. of the chest-to-abdomen region. The terminal calculates the horizontal coordinate difference dis_x between the vertex and the centroid of the eroded target image, and judges left-right lean from dis_x.
Since the head does not necessarily shift when the user leans left or right, eroding the head away and computing the left-right tilt parameter from the eroded body vertex eliminates the interference of head offset on left-right lean recognition.
409. Determining an occlusion parameter according to the characteristics of the depth image;
In this embodiment occlusion is also treated as one of the sitting postures, so the terminal also needs to determine an occlusion parameter from the features of the depth image.
The terminal extracts target depth interval images from the depth image and then reflects the occlusion condition through the degree of dispersion of the main body area in them. The principle is as follows: the target depth interval image is obtained by depth-dividing the depth image and corresponds to the depth interval in which the human body is located. The depth lens faces the user; without occlusion it captures a relatively complete human body, and the effective points forming the main body area in the target depth interval image are connected into one piece, i.e. the degree of dispersion of the main body area is small. With occlusion, however, the occluded part of the body cannot be captured; the occlusion causes depth separation, which appears as blank areas in the target depth interval image, so the effective points of the main body area are no longer connected and the degree of dispersion is large. The degree of dispersion of the main body area therefore reflects whether depth separation exists in the target depth interval image and to what degree, i.e. whether occlusion exists and how severe it is.
Specifically, the terminal executes the following steps to extract the features reflecting the occlusion parameters in the depth image:
a. extracting a target depth interval image from the depth image;
The target depth interval images are the depth interval images in which the human body is located. The terminal removes the background part of the depth image, depth-divides the image into several depth interval images, calculates the valid-point ratio of each, and determines the two depth interval images with the highest valid-point ratios as the target depth interval images.
It should be noted that in an actual sitting posture recognition scene the person is close to the terminal, so the human body occupies a large part of the depth image, and the terminal can distinguish the depth interval in which the body lies by this feature. In practical application, repeated tests show that most pixels of the human body fall within the two depth intervals with the highest valid-point ratios, so these two intervals are determined as the target depth intervals and the two corresponding target depth interval images are extracted, denoted image1 and image2.
b. Determining a main body area and an edge area in the target depth interval image;
The terminal extracts a main body area and an edge area from each of the two target depth interval images; the main body area is the chest-to-abdomen region of the human body, and the edge area is the region above the chest and below the shoulders, and below the abdomen.
c. Calculating the degree of dispersion var1, var2 of the subject region;
the terminal calculates the degrees of dispersion var1 and var2 of the subject regions in the two images by the target formula.
Specifically, the target formula is:
var = ∑(x − average(x))²/n;
average(x)=∑x/n;
where var represents the degree of dispersion, i.e., the variance of the effective points in the body region, x represents the abscissa of the effective points in the body region, and n represents the number of effective points in the body region.
d. Calculating the numbers of effective points η1 and η2 of the edge areas, and the numbers of effective points η3 and η4 of the main body areas;
Clothing wrinkles affect the main body area in a way similar to occlusion, but the two cases can be distinguished by the edge area; the numbers of effective points of the edge area and the main body area in the two target depth interval images therefore also need to be calculated.
Combining the degree of dispersion of the main body area with the effective-point counts of the main body and edge areas reflects the occlusion condition in the depth image, from which the occlusion parameter is determined, as sketched below.
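The feature extraction of steps a-d can be sketched as follows. The chest-to-abdomen segmentation that yields the main body and edge areas is not specified procedurally in the text, so the sketch assumes those regions are already available as masked depth images; the interval length (100 mm) and region definitions follow the text.

```python
import numpy as np

def top_two_interval_images(depth, step=100):
    """Depth-divide the image into 100 mm intervals and return the two
    interval images with the highest valid-point ratios (image1, image2)."""
    valid = depth[depth > 0]
    scored = []
    for near in range(int(valid.min()), int(valid.max()) + 1, step):
        img = np.where((depth >= near) & (depth < near + step), depth, 0)
        scored.append((np.count_nonzero(img) / img.size, img))
    scored.sort(key=lambda s: s[0], reverse=True)
    return scored[0][1], scored[1][1]

def dispersion(region):
    """Target formula: var = sum((x - average(x))^2) / n over the
    abscissas x of the region's valid points."""
    xs = np.nonzero(region)[1].astype(np.float64)
    return ((xs - xs.mean()) ** 2).sum() / xs.size

def valid_points(region):
    """eta: the number of valid (non-zero) points in a region."""
    return int(np.count_nonzero(region))
```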
410. Converting the forward-backward tilt parameter, the left-right tilt parameter and the occlusion parameter into confidence parameters;
The forward-backward and left-right tilt parameters reflect the degree to which the user leans forward, backward, left or right, and preliminarily realize sitting posture recognition; in one specific case, for example, if cos θ < −0.3 the posture is considered a forward lean and if cos θ > 0.7 a backward lean, while if dis_x > 20 it is considered a left lean and if dis_x < −20 a right lean. However, this approach requires the user to place the camera correctly, otherwise the recognition result is affected to some extent.
Therefore, in this embodiment, the requirement on depth camera placement is further relaxed by introducing confidence parameters that balance the various measurements. The confidence of the normal sitting posture is fixed at 0.5; each other posture's confidence parameter exceeds 0.5 when its corresponding posture occurs, and does not exceed the corresponding posture's confidence otherwise. The confidence parameters incorporate the coefficient A of the target plane normal vector (A, −B, −1), which further reduces the placement requirement and allows accurate sitting posture recognition under shooting at various angles.
The terminal converts the forward-backward tilt parameter (the cosine cos θ of the angle between the plane normal vector and the vertical direction), the left-right tilt parameter (the difference dis_x between the image vertex and the centroid) and the occlusion parameter shelter calculated in the preceding steps, together with the plane equation coefficients A, B, C of the target plane, into confidence parameters for sitting posture estimation; specifically, the posture corresponding to the largest confidence parameter is taken as the estimated sitting posture.
In some embodiments, the confidence parameters are set as follows (see the sketch after this list):
(u(t) is the unit step function: u(t) = 1 when t > 0 and u(t) = 0 when t < 0)
(1) forward-lean parameter: (−0.1 − cos θ)/0.1 + 5A;
(2) backward-lean parameter: (cos θ − 0.5)/0.1;
(3) left-lean parameter: dis_x/17 − u(A) × 1.5A + u(−A) × max(2A, −0.4);
(4) right-lean parameter: −dis_x/17 + u(−A) × 1.5A × u(A) × min(2A, 0.4);
(5) occlusion parameter: exp(max(max(var1, var2)/2.3, (var1 + var2)/4.2)/2 × 1.33 − (|η1 − η2|/450 + |η3 − η4|/450)/2 × 0.33)/2;
(η1, η2 are the numbers of effective points of the edge areas; η3, η4 are the numbers of effective points of the main body areas)
(6) normal sitting posture parameter: 0.5.
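A sketch of the confidence computation is given below. It transcribes parameters (1)-(6) literally; note that the bracketing of the occlusion parameter and the middle '×' in the right-lean parameter are reconstructed from a garbled printing, so both should be treated as assumptions.

```python
import numpy as np

def u(t):
    """Unit step: 1 for t > 0, 0 for t < 0."""
    return 1.0 if t > 0 else 0.0

def confidence_parameters(cos_theta, dis_x, A, var1, var2, e1, e2, e3, e4):
    """Confidence parameters (1)-(6); e1..e4 stand for eta1..eta4."""
    shelter = np.exp(
        max(max(var1, var2) / 2.3, (var1 + var2) / 4.2) / 2 * 1.33
        - (abs(e1 - e2) / 450 + abs(e3 - e4) / 450) / 2 * 0.33
    ) / 2
    return {
        "forward lean": (-0.1 - cos_theta) / 0.1 + 5 * A,
        "backward lean": (cos_theta - 0.5) / 0.1,
        "left lean": dis_x / 17 - u(A) * 1.5 * A + u(-A) * max(2 * A, -0.4),
        # literal transcription of (4); the middle 'x' may be a garbled '+'
        "right lean": -dis_x / 17 + u(-A) * 1.5 * A * u(A) * min(2 * A, 0.4),
        "occlusion": shelter,
        "normal sitting posture": 0.5,
    }
```

The estimated posture is then `max(conf, key=conf.get)` over the returned dictionary, matching step 411.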
411. and determining the sitting posture type of the user according to the confidence parameters.
The terminal calculates the value of each confidence parameter and determines the posture corresponding to the largest value as the user's sitting posture type, realizing sitting posture estimation.
In this embodiment, the human body in the depth image is fitted and simplified into a plane model (the target plane); the degree of forward or backward lean is quantified by the direction of the target plane's normal vector, and the degree of left or right lean by the horizontal coordinate difference between the vertex and the centroid in the depth image, so that human sitting posture recognition is realized. This TOF camera-based sitting posture identification method needs no huge training set to build a model, no overly deep convolutional network, and no standard sitting posture preset by the user in advance; it therefore consumes little computing power, is convenient to use, and offers strong real-time performance.
Furthermore, this embodiment also determines an occlusion parameter, treats occlusion as one of the sitting postures, and takes the interference caused by occlusion into account during recognition, so a more accurate sitting posture recognition result can be obtained.
Furthermore, the terminal converts the calculated forward-backward tilt, left-right tilt and occlusion parameters into confidence parameters that balance all the measurements, and realizes the final sitting posture recognition through them; this lowers the requirement on camera placement, further improves recognition accuracy, and improves user experience.
Second, judging occlusion first and then performing sitting posture recognition:
referring to fig. 6, fig. 6 is a diagram illustrating another embodiment of a sitting posture recognition method based on a TOF camera according to the present application, the method includes:
601. acquiring a depth image of a human body sitting posture;
602. performing region growth by taking the centroid of the depth image as a seed point to obtain a target image;
603. fitting the effective points in the target image into a target plane;
604. determining a forward-backward inclination parameter through the normal vector direction of the target plane;
605. determining a left-right inclination parameter according to the horizontal coordinate difference of the top point and the mass center in the target image;
in this embodiment, step 601 is similar to steps 101 to 105 of the previous embodiments, and is not described herein again.
606. Judging whether occlusion exists in the depth image; if yes, returning to step 601, and if not, executing step 607;
To eliminate the interference of occlusion on the recognition result, whether occlusion exists in the depth image must be judged before sitting posture recognition is performed.
Specifically, the terminal executes the following steps:
a. extracting a target depth interval image from the depth image, wherein the target depth interval image is the depth interval image in which the human body is located;
The terminal performs depth gating on the depth image: specifically, it depth-divides the image to extract image features, distinguishes from these features the depth interval in which the human body lies, namely the target depth interval, and then extracts from the depth image the corresponding target depth interval image.
In an actual sitting posture recognition scene the person is close to the terminal, so the human body occupies a large part of the depth image, and the terminal can distinguish the depth interval in which the body lies by this feature. Specifically, the terminal divides the depth image into several depth intervals to obtain several depth interval images; the length of each interval is not limited, preferably 100 mm. The target depth interval in which the human body lies can then be distinguished from the image features of the different depth interval images, and the corresponding target depth interval image extracted.
b. Determining a main body area in the target depth interval image;
For the target depth interval image, which contains most features of the human body, the terminal performs region segmentation to determine its main body area; the main body area is the chest-to-abdomen region of the human body and is also the key region for sitting posture recognition.
c. Calculating the degree of dispersion of the effective points of the main body area through a target formula to obtain a first calculation result;
After determining the main body area of the target depth interval image, the terminal calculates the degree of dispersion of its effective points; an effective point in this application is a point whose pixel value is not 0.
The target formula is:
var = ∑(x − average(x))²/n;
average(x)=∑x/n;
where var represents the first calculation result, that is, the variance of the effective points in the body region, x represents the abscissa of the effective points in the body region, and n represents the number of effective points in the body region.
The terminal reflects the degree of dispersion of the effective points by calculating their variance: the average abscissa is subtracted from the abscissa of each effective point in the main body area, the squares of these differences are summed, and the sum is divided by the number of effective points. The smaller the calculated variance, the smaller the dispersion of the effective points, and vice versa.
d. And judging whether occlusion exists in the depth image according to the first calculation result.
The target depth interval image is obtained by depth-dividing the depth image and corresponds to the depth interval in which the human body is located. The depth lens faces the user; without occlusion it captures a relatively complete human body, and the effective points forming the main body area in the target depth interval image are connected into one piece, i.e. the dispersion of the main body area is small. With occlusion, however, the occluded part of the body cannot be captured; the occlusion causes depth separation, which appears as blank areas in the target depth interval image, so the effective points of the main body area are no longer connected and the dispersion is large. The dispersion of the main body area therefore reflects whether depth separation exists in the target depth interval image, i.e. whether occlusion exists.
The terminal judges the degree of dispersion of the main body area, i.e. whether the first calculation result is greater than a first threshold. If so, occlusion is determined to exist; the user is then reminded to remove the occlusion, and the process returns to step 601 to reacquire the depth image. If the first calculation result is less than or equal to the first threshold, no occlusion currently exists, and step 607 can be executed to perform sitting posture recognition.
It should be noted that the specific value of the first threshold is not limited here; it differs between application scenarios and must be determined by testing. The first threshold is the critical point that separates the dispersion of the human body's effective points with occlusion from that without occlusion.
It should be noted that in some special cases, for example when the user wears thick clothes, clothing wrinkles may make the dispersion of the main body area too large. Besides judging occlusion from the dispersion of the main body area, the terminal may therefore also combine it with the effective-point ratio of the edge area, where the edge area is the region above the chest and below the shoulders, and below the abdomen. Occlusion is determined to exist only when the dispersion of the main body area is greater than the first threshold and the effective-point ratio of the edge area is less than a second threshold; in all other cases no occlusion is determined, and step 607 can be executed to perform sitting posture recognition, as the sketch below shows.
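The combined decision rule of this paragraph reduces to the following sketch; both thresholds are scene-specific and must be found by testing, as stated above.

```python
def occlusion_exists(body_dispersion, edge_valid_ratio,
                     first_threshold, second_threshold):
    """Occlusion only when the main body area is too dispersed AND the
    edge area is too sparse; all other cases proceed to recognition."""
    return (body_dispersion > first_threshold
            and edge_valid_ratio < second_threshold)
```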
607. And carrying out sitting posture recognition according to the forward-backward tilt parameter and the left-right tilt parameter.
The terminal performs sitting posture recognition only after determining that no occlusion exists in the depth image, which improves the accuracy of the recognition.
In this embodiment, the human body in the depth image is fitted and simplified into a plane model (the target plane); the degree of forward or backward lean is quantified by the direction of the target plane's normal vector, and the degree of left or right lean by the horizontal coordinate difference between the vertex and the centroid in the depth image, so that human sitting posture recognition is realized. This TOF camera-based sitting posture identification method needs no huge training set to build a model, no overly deep convolutional network, and no standard sitting posture preset by the user in advance; it therefore consumes little computing power, is convenient to use, and offers strong real-time performance.
Furthermore, the interval image in which the human body lies (the target depth interval image) is extracted from the depth image of the sitting posture, the degree of dispersion of its main body area is calculated, and whether occlusion exists in the depth image is judged from the calculation result; the interference caused by occlusion is thus eliminated from depth-image-based sitting posture recognition, improving its anti-interference capability and recognition accuracy.
Referring to fig. 7, fig. 7 is a schematic diagram illustrating an embodiment of a sitting posture recognition apparatus based on a TOF camera according to the present application, the apparatus including:
an acquiring unit 701, configured to acquire a depth image of a human body sitting posture;
the processing unit 702 is configured to perform region growing by using the centroid of the depth image as a seed point to obtain a target image;
a fitting unit 703, configured to fit the effective points in the target image to a target plane;
a first determining unit 704, configured to determine a pitch parameter through a normal vector orientation of the target plane;
a second determining unit 705, configured to determine a left-right tilt parameter through the horizontal coordinate difference between the vertex and the centroid in the target image;
and the recognition unit 706 is used for carrying out sitting posture recognition according to the forward-backward inclination parameter and the leftward-rightward inclination parameter.
In this embodiment, the human body in the depth image is fitted and simplified into a plane model (the target plane) by the processing unit 702 and the fitting unit 703; the degree of forward or backward lean is quantified by the normal vector of the target plane through the first determining unit 704, the degree of left or right lean is quantified by the horizontal coordinate difference between the vertex and the centroid in the depth image through the second determining unit 705, and human sitting posture recognition is realized by the recognition unit 706. This TOF camera-based sitting posture recognition apparatus needs no huge training set to build a model, no overly deep convolutional network, and no standard sitting posture preset by the user in advance; it therefore consumes little computing power, is convenient to use, and offers strong real-time performance.
Referring to fig. 8, the sitting posture recognition apparatus based on a TOF camera provided in the present application is described in detail below; fig. 8 shows another embodiment of the apparatus, which includes:
an acquiring unit 801, configured to acquire a depth image of a human body sitting posture;
the processing unit 802 is configured to perform region growing by using the centroid of the depth image as a seed point to obtain a target image;
a fitting unit 803, configured to fit the effective points in the target image to a target plane;
a first determining unit 804, configured to determine a forward-backward tilt parameter through the normal vector direction of the target plane;
a second determining unit 805, configured to determine a left-right tilt parameter through the horizontal coordinate difference between the vertex and the centroid in the target image;
and a recognition unit 806, configured to perform sitting posture recognition according to the forward-backward tilt parameter and the left-right tilt parameter.
Optionally, the apparatus further comprises:
an erosion unit 807 for performing erosion processing on the target image until the vertex ordinate in the target image changes abruptly.
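One possible reading of this stopping rule, sketched in Python with OpenCV morphology; the kernel size, the iteration cap, and the pixel jump that counts as an "abrupt" change of the vertex ordinate are all assumptions for illustration.

```python
import cv2
import numpy as np

def erode_until_vertex_jump(mask: np.ndarray, jump: int = 10,
                            max_iters: int = 50) -> np.ndarray:
    kernel = np.ones((3, 3), np.uint8)
    m = (mask > 0).astype(np.uint8)
    prev_top = int(np.argmax(m.any(axis=1)))  # row index of the topmost pixel
    for _ in range(max_iters):
        eroded = cv2.erode(m, kernel)
        rows = eroded.any(axis=1)
        if not rows.any():
            break  # mask fully eroded: keep the last non-empty result
        top = int(np.argmax(rows))
        if top - prev_top > jump:
            break  # vertex ordinate changed abruptly: stop eroding
        m, prev_top = eroded, top
    return m
```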
Optionally, the apparatus further comprises:
a first judging unit 808, configured to judge whether the human body in the target image is in the optimal recognition area according to the abscissa of the vertex;
the fitting unit 803 is specifically configured to:
when the judgment result of the first judging unit 808 is yes, the effective points in the target image are fitted into a target plane.
Optionally, the apparatus further comprises:
and a prompting unit 809 for prompting the user to adjust according to the abscissa of the vertex when the judgment result of the first judging unit is negative.
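A toy sketch of this gating and prompting logic; the image-width bounds that define the "optimal recognition area" are invented for the example and would depend on the camera's field of view.

```python
def in_optimal_area(vertex_x: float, x_min: float = 200.0,
                    x_max: float = 440.0) -> bool:
    # The body is considered well placed when the vertex abscissa falls
    # inside a central band of the image; the bounds are illustrative.
    return x_min <= vertex_x <= x_max

def adjustment_hint(vertex_x: float, x_min: float = 200.0,
                    x_max: float = 440.0) -> str:
    if vertex_x < x_min:
        return "please move right"
    if vertex_x > x_max:
        return "please move left"
    return "position ok"
```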
Optionally, the fitting unit 803 is further specifically configured to:
fitting effective points in the target image by a least square method to obtain a fitting plane;
calculating the variance and standard deviation between the fitting plane and the depth values of the target image;
removing outliers from the target image, wherein the outliers are points whose deviation from the fitting plane exceeds a preset multiple of the standard deviation;
and fitting the effective points in the target image again until the preset conditions are met to obtain a target plane.
Optionally, the preset condition is:
no outlier exists in the target image;
or
the number of fits reaches the maximum number of fits.
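The iterative fit can be sketched as follows, assuming a plane model z = ax + by + c and outliers defined as points whose residual exceeds k standard deviations; the values of k and the fit-count cap are illustrative, not taken from the application.

```python
import numpy as np

def fit_target_plane(points: np.ndarray, k: float = 2.0,
                     max_fits: int = 10) -> np.ndarray:
    """points: (N, 3) array of (x, y, depth); returns coefficients (a, b, c)."""
    pts = points
    for _ in range(max_fits):
        A = np.column_stack([pts[:, 0], pts[:, 1], np.ones(len(pts))])
        coeffs, *_ = np.linalg.lstsq(A, pts[:, 2], rcond=None)  # least squares
        residuals = pts[:, 2] - A @ coeffs
        std = residuals.std()
        inliers = np.abs(residuals) <= k * std
        if inliers.all():      # preset condition: no outlier remains
            break
        pts = pts[inliers]     # remove points beyond k standard deviations
    return coeffs              # plane normal is (a, b, -1) up to sign
```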
Optionally, the apparatus further comprises:
a third determining unit 810, configured to determine an occlusion parameter according to a feature of the depth image;
the identifying unit 806 is specifically configured to:
and carrying out sitting posture identification according to the forward-backward tilt parameter, the left-right tilt parameter and the occlusion parameter.
Optionally, the identifying unit 806 is further specifically configured to:
converting the forward-backward tilt parameter, the left-right tilt parameter and the occlusion parameter into confidence parameters;
and determining the sitting posture type of the user according to the confidence parameter.
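The application does not disclose the conversion formula, so the following is purely an illustrative assumption: each tilt parameter is squashed into a [0, 1] confidence with a logistic function, the occlusion parameter acts as a veto, and the highest-confidence class above a cut-off is reported. All thresholds are invented for the example.

```python
import math

def to_confidence(value: float, threshold: float, scale: float) -> float:
    """Logistic squash: near 0 below the threshold, near 1 well above it."""
    return 1.0 / (1.0 + math.exp(-(value - threshold) / scale))

def classify(pitch_deg: float, roll_px: float, occluded: bool) -> str:
    if occluded:
        return "uncertain"  # the occlusion parameter vetoes the tilt cues
    scores = {
        "leaning forward/backward": to_confidence(abs(pitch_deg), 20.0, 5.0),
        "leaning left/right": to_confidence(abs(roll_px), 40.0, 10.0),
    }
    label, conf = max(scores.items(), key=lambda kv: kv[1])
    return label if conf > 0.5 else "upright"
```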
Optionally, the apparatus further comprises:
a second judging unit 811 for judging whether occlusion exists in the depth image;
the identifying unit 806 is specifically configured to:
when the judgment result of the second judging unit 811 is no, sitting posture recognition is performed according to the forward-backward tilt parameter and the left-right tilt parameter.
Optionally, the second determining unit 811 is specifically configured to:
extracting a target depth interval image from the depth image, wherein the target depth interval image is a depth interval image in which a human body is located;
determining a main body area in the target depth interval image;
calculating the dispersion degree of the effective points of the main body area through a target formula to obtain a first calculation result;
and judging whether occlusion exists in the depth image according to the first calculation result.
In the device of this embodiment, the functions of each unit correspond to the steps in the method embodiments shown in fig. 4 or fig. 5, and are not described herein again.
Referring to fig. 9, fig. 9 shows a further embodiment of the sitting posture recognition apparatus based on a TOF camera provided by the present application, the apparatus including:
a processor 901, a memory 902, an input-output unit 903, a bus 904;
the processor 901 is connected to the memory 902, the input/output unit 903 and the bus 904;
the memory 902 holds a program that the processor 901 calls to perform any of the TOF camera-based sitting posture recognition methods described above.
The application also relates to an intelligent desk lamp which executes any sitting posture identification method based on the TOF camera.
The present application also relates to a computer-readable storage medium having a program stored thereon, wherein the program, when executed on a computer, causes the computer to perform any of the TOF camera based sitting posture recognition methods described above.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, a division of a unit is merely a logical division, and an actual implementation may have another division, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
Units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit may be implemented in the form of hardware, or may also be implemented in the form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application essentially, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods of the embodiments of the present application. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or another medium capable of storing program code.

Claims (14)

1. A sitting posture identification method based on a TOF camera, which is characterized by comprising the following steps:
acquiring a depth image of a human body sitting posture;
performing region growth by taking the mass center of the depth image as a seed point to obtain a target image;
fitting the effective points in the target image into a target plane;
determining a forward-backward tilt parameter through the normal vector direction of the target plane;
determining a left-right tilt parameter according to the horizontal coordinate difference between the vertex and the centroid in the target image;
and carrying out sitting posture identification according to the forward-backward tilt parameter and the left-right tilt parameter.
2. The method of claim 1, wherein after the performing region growing with the centroid of the depth image as a seed point to obtain a target image, and before the fitting the valid points in the target image to a target plane, the method further comprises:
and carrying out erosion processing on the target image until the vertex ordinate in the target image changes suddenly.
3. The method of claim 2, wherein after the erosion processing of the target image until the vertex ordinate in the target image changes abruptly, the method further comprises:
judging whether the human body in the target image is in an optimal identification area or not according to the abscissa of the vertex;
the fitting of the effective points in the target image into a target plane comprises:
and when the human body in the target image is in the optimal recognition area, fitting the effective points in the target image into a target plane.
4. The method according to claim 3, wherein after the determining whether the human body is in the optimal recognition area in the target image according to the abscissa of the vertex, the method further comprises:
if not, prompting the user to adjust according to the abscissa of the vertex.
5. The method of claim 1, wherein fitting the effective points in the target image into a target plane comprises:
fitting the effective points in the target image by a least square method to obtain a fitting plane;
calculating the variance and standard deviation between the fitting plane and the depth values of the target image;
removing outliers from the target image, wherein the outliers are points whose deviation from the fitting plane exceeds a preset multiple of the standard deviation;
and fitting the effective points in the target image again until a preset condition is met to obtain a target plane.
6. The method according to claim 5, wherein the preset condition is:
no outlier is found in the target image;
or
the number of fits reaches the maximum number of fits.
7. The method of any one of claims 1 to 6, wherein prior to the sitting posture identification according to the forward-backward tilt parameter and the left-right tilt parameter, the method further comprises:
determining an occlusion parameter according to the features of the depth image;
the sitting posture identification according to the forward-backward tilt parameter and the left-right tilt parameter comprises the following steps:
and carrying out sitting posture identification according to the forward-backward tilt parameter, the left-right tilt parameter and the occlusion parameter.
8. The method of claim 7, wherein the sitting posture identification from the forward-backward tilt parameter, the left-right tilt parameter, and the occlusion parameter comprises:
converting the forward-backward tilt parameter, the left-right tilt parameter and the occlusion parameter into confidence parameters;
and determining the sitting posture type of the user according to the confidence parameter.
9. The method of any one of claims 1 to 6, wherein prior to the sitting posture identification according to the forward-backward tilt parameter and the left-right tilt parameter, the method further comprises:
judging whether occlusion exists in the depth image;
the sitting posture identification according to the forward-backward tilt parameter and the left-right tilt parameter comprises the following steps:
if not, carrying out sitting posture identification according to the forward-backward tilt parameter and the left-right tilt parameter.
10. The method of claim 9, wherein the determining whether an occlusion exists in the depth image comprises:
extracting a target depth interval image from the depth image, wherein the target depth interval image is a depth interval image in which a human body is located;
determining a subject region in the target depth interval image;
calculating the dispersion degree of the effective points of the main body area through a target formula to obtain a first calculation result;
and judging whether occlusion exists in the depth image according to the first calculation result.
11. An intelligent desk lamp, characterized in that the intelligent desk lamp performs the method according to any one of claims 1 to 10.
12. A sitting posture identifying apparatus based on a TOF camera, the apparatus comprising:
the acquisition unit is used for acquiring a depth image of the human body sitting posture;
the processing unit is used for carrying out region growth by taking the mass center of the depth image as a seed point to obtain a target image;
the fitting unit is used for fitting the effective points in the target image into a target plane;
the first determining unit is used for determining a forward-backward tilt parameter through the normal vector direction of the target plane;
the second determining unit is used for determining a left-right tilt parameter through the horizontal coordinate difference between the vertex and the centroid in the target image;
and the recognition unit is used for carrying out sitting posture recognition according to the forward-backward tilt parameter and the left-right tilt parameter.
13. A sitting posture identifying apparatus based on a TOF camera, the apparatus comprising:
the device comprises a processor, a memory, an input and output unit and a bus;
the processor is connected with the memory, the input and output unit and the bus;
the memory holds a program that the processor calls to perform the method of any one of claims 1 to 10.
14. A computer-readable storage medium having a program stored thereon, which when executed on a computer performs the method of any one of claims 1 to 10.
CN202210299958.XA 2022-03-25 2022-03-25 Sitting posture identification method and device based on TOF camera and intelligent desk lamp Active CN114612939B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210299958.XA CN114612939B (en) 2022-03-25 2022-03-25 Sitting posture identification method and device based on TOF camera and intelligent desk lamp

Publications (2)

Publication Number Publication Date
CN114612939A true CN114612939A (en) 2022-06-10
CN114612939B CN114612939B (en) 2023-01-10

Family

ID=81866536

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210299958.XA Active CN114612939B (en) 2022-03-25 2022-03-25 Sitting posture identification method and device based on TOF camera and intelligent desk lamp

Country Status (1)

Country Link
CN (1) CN114612939B (en)

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2012120647A (en) * 2010-12-07 2012-06-28 Alpha Co Posture detection system
JP2013049102A (en) * 2011-08-30 2013-03-14 Denso Wave Inc Robot control device and method of determining robot attitude
CN103793680A (en) * 2012-10-29 2014-05-14 北京三星通信技术研究有限公司 Apparatus and method for estimating head poses
CN104217554A (en) * 2014-09-19 2014-12-17 武汉理工大学 Reminding system and method for health study posture for student
KR101722131B1 (en) * 2015-11-25 2017-03-31 국민대학교 산학협력단 Posture and Space Recognition System of a Human Body Using Multimodal Sensors
CN107071339A (en) * 2016-12-01 2017-08-18 合肥大多数信息科技有限公司 A kind of intelligent seat system monitored based on depth camera and its implementation
CN107122754A (en) * 2017-05-09 2017-09-01 苏州迪凯尔医疗科技有限公司 Posture identification method and device
CN108614990A (en) * 2018-03-06 2018-10-02 清华大学 A kind of child sitting gesture detection intelligent interaction device system and method
CN109214995A (en) * 2018-08-20 2019-01-15 阿里巴巴集团控股有限公司 The determination method, apparatus and server of picture quality
CN109903332A (en) * 2019-01-08 2019-06-18 杭州电子科技大学 A kind of object's pose estimation method based on deep learning
CN111178190A (en) * 2019-12-17 2020-05-19 中国科学院深圳先进技术研究院 Target detection method and device based on depth image and storage medium
CN111292261A (en) * 2020-01-17 2020-06-16 杭州电子科技大学 Container detection and locking method based on multi-sensor fusion

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
SJAN-MARI VAN NIEKERK et al.: "Photographic measurement of upper-body sitting posture of high school students: A reliability and validity study", http://www.biomedcentral.com/1471-2474/9/113 *
YONG ZHANG et al.: "Human Activity Recognition Based on Motion Sensor Using U-Net", IEEE *
ZENG Xing et al.: "Sitting posture detection system based on a depth sensor", Computer Science (《计算机科学》) *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115909394A (en) * 2022-10-25 2023-04-04 珠海视熙科技有限公司 Sitting posture identification method and device, intelligent desk lamp and computer storage medium
CN115909394B (en) * 2022-10-25 2024-04-05 珠海视熙科技有限公司 Sitting posture identification method and device, intelligent table lamp and computer storage medium
CN116152853A (en) * 2022-11-30 2023-05-23 珠海视熙科技有限公司 Sitting posture detection method and device, intelligent table lamp and storage medium

Also Published As

Publication number Publication date
CN114612939B (en) 2023-01-10

Similar Documents

Publication Publication Date Title
CN114612939B (en) Sitting posture identification method and device based on TOF camera and intelligent desk lamp
US6526161B1 (en) System and method for biometrics-based facial feature extraction
CN108107571B (en) Image processing apparatus and method, and non-transitory computer-readable recording medium
CN106846403B (en) Method and device for positioning hand in three-dimensional space and intelligent equipment
US10304164B2 (en) Image processing apparatus, image processing method, and storage medium for performing lighting processing for image data
JP4728432B2 (en) Face posture estimation device, face posture estimation method, and face posture estimation program
US10469829B2 (en) Information processor and information processing method
US9818226B2 (en) Method for optimizing occlusion in augmented reality based on depth camera
CN108830892B (en) Face image processing method and device, electronic equipment and computer readable storage medium
CN111652086B (en) Face living body detection method and device, electronic equipment and storage medium
CN110214340A (en) Use the refinement of the structure light depth map of rgb color data
CN111105366B (en) Image processing method and device, terminal equipment and storage medium
WO2015188666A1 (en) Three-dimensional video filtering method and device
TW201928875A (en) Light spot filtering method and apparatus
KR101783999B1 (en) Gesture recognition using chroma-keying
CN109936697A (en) A kind of video capture method for tracking target and device
CN103049911A (en) Contour detection stability judging method and image searching method
JP5111321B2 (en) Eyelid likelihood calculation device and program
CN113744307A (en) Image feature point tracking method and system based on threshold dynamic adjustment
CN111274851A (en) Living body detection method and device
JP3062181B1 (en) Real-time facial expression detection device
CN106446859B (en) Method for automatically identifying stains and blood streaks in the human eye using a mobile phone front camera
CN112883940A (en) Silent in-vivo detection method, silent in-vivo detection device, computer equipment and storage medium
US11475629B2 (en) Method for 3D reconstruction of an object
CN110678905A (en) Apparatus and method for processing depth map

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant