CN111814700B - Behavior action recognition method based on child behavior characteristics - Google Patents

Behavior action recognition method based on child behavior characteristics

Info

Publication number
CN111814700B
CN111814700B
Authority
CN
China
Prior art keywords
point
line
difference
child
marking
Prior art date
Legal status
Active
Application number
CN202010670122.7A
Other languages
Chinese (zh)
Other versions
CN111814700A (en)
Inventor
张云龙 (Zhang Yunlong)
张云凤 (Zhang Yunfeng)
Current Assignee
Suzhou Yuelin Information Technology Co ltd
Original Assignee
Suzhou Yuelin Information Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Suzhou Yuelin Information Technology Co ltd filed Critical Suzhou Yuelin Information Technology Co ltd
Priority to CN202010670122.7A priority Critical patent/CN111814700B/en
Publication of CN111814700A publication Critical patent/CN111814700A/en
Application granted granted Critical
Publication of CN111814700B publication Critical patent/CN111814700B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 Movements or behaviour, e.g. gesture recognition
    • G06V40/23 Recognition of whole body movements, e.g. for sport training
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 Movements or behaviour, e.g. gesture recognition
    • G06V40/28 Recognition of hand or arm movements, e.g. recognition of deaf sign language

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a behavior action recognition method based on child behavior characteristics, which comprises the following steps. Step one: an image acquisition terminal acquires images of a child's activities and performs modeling processing on the child activity image; specifically, a clear whole-body photograph of the child is extracted from the child activity image, matted out of the background, and marked as the reference model. Step two: feature point determination processing is performed on the processed child image model to determine a plurality of feature points. Step three: an identification reference plane is determined and the corresponding feature points are connected to obtain the judgment feature lines, the reference plane being the ground in the child activity image. Step four: the reference plane and the judgment lines are analyzed to judge the child's behavior action. The invention recognizes actions more accurately and rapidly, is better suited to children, and meets users' needs.

Description

Behavior action recognition method based on child behavior characteristics
Technical Field
The invention relates to the field of action recognition, in particular to a behavior action recognition method based on child behavior characteristics.
Background
Action recognition, i.e., determining a person's motion from images, has a wide range of applications, including motion capture; performing action recognition requires an action recognition algorithm.
Most existing action recognition algorithms are designed for adults and are not suitable for children, and their recognition speed and accuracy are not high enough to meet users' needs. A behavior action recognition method based on child behavior characteristics is therefore proposed.
Disclosure of Invention
The technical problem to be solved by the invention is that existing action recognition algorithms are mostly designed for adults, are not suitable for children, and lack sufficient recognition speed and accuracy to meet users' needs. The invention provides a behavior action recognition method based on child behavior characteristics to solve this problem.
The invention solves the technical problem through the following technical solution, which comprises the following steps:
step one: an image acquisition terminal acquires images of the child's activities and performs modeling processing on the child activity image; the specific process of the modeling processing is as follows: extracting a child activity image, extracting a clear whole-body photograph of the child from it, matting the whole-body photograph out of the background, and marking it as the reference model;
step two: performing feature point determination processing on the processed child image model to determine a plurality of feature points, the specific process of feature point determination being as follows: marking the highest point of the reference model as the head vertex point P; marking the vertices of the two shoulders in the reference model as points A1 and B1; marking the two arm bending points as points A2 and B2; marking the two middle-finger end points as points A3 and B3; marking the points where the two legs join the trunk as points A4 and B4; marking the two knees as points A5 and B5; and marking the two lowest foot points as points A6 and B6, wherein points A1, A2, A3, A4, A5 and A6 are on one side of the body and points B1, B2, B3, B4, B5 and B6 are on the other side; and marking the two mouth corner points on the face of the reference model as points W1 and W2, respectively;
step three: determining an identification reference plane and connecting the corresponding feature points to obtain the judgment feature lines, the reference plane being the ground in the child activity image; the connection processing of the corresponding feature points is as follows:
s1: connecting the point A1 with the point A2 to obtain a K1 line, connecting the point A2 with the point A3 to obtain a K2 line, and connecting the point A4 with the point A5 to obtain a K3 line;
s2: connecting the point B1 with the point B2 to obtain an M1 line, connecting the point B2 with the point B3 to obtain an M2 line, and connecting the point B4 with the point B5 to obtain an M3 line;
s3: connecting the point W1 with the point W2 to obtain a mouth line U; and, taking the point P as an endpoint, drawing a line segment perpendicular to the reference plane and marking it as the Z line;
step four: analyzing and processing the reference plane and the judgment lines to judge the child's behavior action;
step five: analyzing the reference plane and the judgment lines to judge whether the child is climbing, running, falling, crawling or biting a hand;
step six: after the child's behavior is judged, sending a child behavior message to a prompt terminal;
the climbing judgment process in the fifth step is specifically as follows:
step (1): marking the height of the reference plane as 0, and measuring the height difference between the bottom end of the K3 line and the reference plane to obtain a height difference Q1_diff;
step (2): measuring the height difference between the bottom end of the M3 line and the reference plane to obtain a height difference Q2_diff;
step (3): continuously collecting multiple height differences Q1_diff and Q2_diff;
step (4): when the height differences Q1_diff and Q2_diff both exceed a preset value, and the number of times they exceed the preset value exceeds a preset count, judging that the child's behavior is climbing;
the running action judgment process in the fifth step is as follows:
step 1: connecting the K2 line with the K3 line to obtain a combined line K_comb, and connecting the M2 line with the M3 line to obtain a combined line M_comb;
step 2: collecting the number of times the line K_comb and the line M_comb cross within a preset time period, marked as Ct;
step 3: marking the preset time length, in seconds, as T, and obtaining the average number of crossings per second, CT_avg, from the formula Ct/T = CT_avg;
step 4: when CT_avg is larger than a preset value, judging that the child is running;
the specific process of the fall judgment in the fifth step is as follows:
SS1: measuring the angle between the Z line and the reference plane; when this angle is 90 degrees, the child is in an upright state;
SS2: when the increase or decrease of the angle between the Z line and the reference plane within a preset time length exceeds a preset value, judging that the child has fallen;
the crawling state judgment process in the fifth step is specifically as follows:
measuring the angle between the Z line and the reference plane, and judging that the child is crawling when the included angle between the Z line and the reference plane is within a preset angle range and the Z line moves horizontally along the reference plane.
Preferably, the specific judgment process of hand biting in the fifth step is as follows:
a: extracting the K1 line and the M1 line, and measuring in real time the distances from the A2 end of the K1 line and the B2 end of the M1 line to the mouth line U;
b: marking this hand-to-mouth distance as Rt;
c: setting a judgment threshold Xt, and calculating the difference between the distance Rt and the judgment threshold Xt to obtain Rx_diff;
d: when the absolute value of Rx_diff is smaller than a preset value, judging that the child's behavior is hand biting.
Preferably, the prompt terminal comprises a smart phone and a smart tablet computer.
Compared with the prior art, the invention has the following advantages: when this behavior action recognition method based on child behavior characteristics is used, the child image undergoes modeling and line-ization processing, and the child's actions are analyzed by processing the established model and lines. This speeds up action recognition and analysis while improving recognition accuracy, so the child's actions can be better analyzed and recognized. Parents can be reminded in time once climbing, running, falling, crawling or hand-biting actions of the child are detected, effectively reducing accidents, which makes the method well worth popularizing.
Drawings
Fig. 1 is a schematic diagram of the overall structure of the present invention.
Detailed Description
The following describes embodiments of the present invention in detail. The embodiments are implemented on the premise of the technical solution of the present invention, and detailed implementations and specific operation procedures are given, but the scope of protection of the present invention is not limited to the following embodiments.
As shown in Fig. 1, this embodiment provides a technical solution, a behavior action recognition method based on child behavior characteristics, which comprises the following steps:
step one: an image acquisition terminal acquires images of the child's activities and performs modeling processing on the child activity image; the specific process of the modeling processing is as follows: extracting a child activity image, extracting a clear whole-body photograph of the child from it, matting the whole-body photograph out of the background, and marking it as the reference model;
step two: performing feature point determination processing on the processed child image model to determine a plurality of feature points, the specific process of feature point determination being as follows: marking the highest point of the reference model as the head vertex point P; marking the vertices of the two shoulders in the reference model as points A1 and B1; marking the two arm bending points as points A2 and B2; marking the two middle-finger end points as points A3 and B3; marking the points where the two legs join the trunk as points A4 and B4; marking the two knees as points A5 and B5; and marking the two lowest foot points as points A6 and B6, wherein points A1, A2, A3, A4, A5 and A6 are on one side of the body and points B1, B2, B3, B4, B5 and B6 are on the other side; and marking the two mouth corner points on the face of the reference model as points W1 and W2, respectively;
step three: determining an identification reference plane and connecting the corresponding feature points to obtain the judgment feature lines, the reference plane being the ground in the child activity image; the connection processing of the corresponding feature points is as follows:
s1: connecting the point A1 with the point A2 to obtain a K1 line, connecting the point A2 with the point A3 to obtain a K2 line, and connecting the point A4 with the point A5 to obtain a K3 line;
s2: connecting the point B1 with the point B2 to obtain an M1 line, connecting the point B2 with the point B3 to obtain an M2 line, and connecting the point B4 with the point B5 to obtain an M3 line;
s3: connecting the point W1 with the point W2 to obtain a mouth line U; and, taking the point P as an endpoint, drawing a line segment perpendicular to the reference plane and marking it as the Z line;
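By way of illustration only, the line construction of steps S1-S3 can be expressed in a few lines of code. The following Python sketch is not part of the patent: the point names follow step two, while the function names, the coordinate convention (image y grows downward) and the assumption that the feature points have already been located (for example by a pose estimator) are our own.

    from typing import Dict, Tuple

    Point = Tuple[float, float]  # (x, y) image coordinates; y grows downward

    def build_lines(pts: Dict[str, Point]) -> Dict[str, Tuple[Point, Point]]:
        """Connect the marked feature points into the judgment lines K1-K3, M1-M3 and U."""
        pairs = {
            "K1": ("A1", "A2"), "K2": ("A2", "A3"), "K3": ("A4", "A5"),
            "M1": ("B1", "B2"), "M2": ("B2", "B3"), "M3": ("B4", "B5"),
            "U":  ("W1", "W2"),  # mouth line
        }
        return {name: (pts[a], pts[b]) for name, (a, b) in pairs.items()}

    def z_line(pts: Dict[str, Point], ground_y: float) -> Tuple[Point, Point]:
        """Z line: vertical segment from the head vertex P down to the reference plane."""
        px, py = pts["P"]
        return ((px, py), (px, ground_y))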
step four: analyzing and processing the reference plane and the judgment lines to judge the child's behavior action;
step five: analyzing the reference plane and the judgment lines to judge whether the child is climbing, running, falling, crawling or biting a hand;
step six: after the child's behavior is judged, sending a child behavior message to the prompt terminal.
The climbing judgment process in the fifth step is specifically as follows:
step (1): marking the height of the reference plane as 0, and measuring the height difference between the bottom end of the K3 line and the reference plane to obtain a height difference Q1_diff;
step (2): measuring the height difference between the bottom end of the M3 line and the reference plane to obtain a height difference Q2_diff;
step (3): continuously collecting multiple height differences Q1_diff and Q2_diff;
step (4): when the height differences Q1_diff and Q2_diff both exceed a preset value, and the number of times they exceed the preset value exceeds a preset count, judging that the child's behavior is climbing.
This method judges climbing actions well; since climbing actions are mostly performed by children, the algorithm can carry out action recognition oriented to children.
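A minimal sketch of this climbing test follows, assuming the per-frame heights of the K3 and M3 bottom ends above the reference plane (Q1_diff, Q2_diff) have already been measured; the threshold values are illustrative placeholders, since the patent only speaks of preset values.

    from typing import Sequence

    def is_climbing(q1_diffs: Sequence[float], q2_diffs: Sequence[float],
                    height_thresh: float = 0.3, count_thresh: int = 5) -> bool:
        """Climbing: both leg-line bottom ends leave the ground by more than
        height_thresh in more than count_thresh of the sampled frames."""
        exceed = sum(1 for q1, q2 in zip(q1_diffs, q2_diffs)
                     if q1 > height_thresh and q2 > height_thresh)
        return exceed > count_thresh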
The running action judgment process in the fifth step is as follows:
step 1: connecting the K2 line with the K3 line to obtain a combined line K_comb, and connecting the M2 line with the M3 line to obtain a combined line M_comb;
step 2: collecting the number of times the line K_comb and the line M_comb cross within a preset time period, marked as Ct;
step 3: marking the preset time length, in seconds, as T, and obtaining the average number of crossings per second, CT_avg, from the formula Ct/T = CT_avg;
step 4: when CT_avg is larger than a preset value, judging that the child is running.
The running action judgment lets parents know the real-time state of the child at home and prompts them in time, preventing the child from being injured by running and falling.
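The crossing count can be sketched as follows, under the assumption that each combined line is tracked by one representative x-coordinate per frame, so that a crossing shows up as a sign change of their difference; this reading of "crossing", and all names and thresholds, are ours rather than the patent's.

    from typing import Sequence

    def is_running(k_x: Sequence[float], m_x: Sequence[float],
                   duration_s: float, avg_thresh: float = 2.0) -> bool:
        """Running: the per-second crossing rate CT_avg = Ct / T exceeds a preset value."""
        diffs = [k - m for k, m in zip(k_x, m_x)]
        ct = sum(1 for a, b in zip(diffs, diffs[1:]) if a * b < 0)  # crossings Ct
        ct_avg = ct / duration_s
        return ct_avg > avg_thresh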
The specific process of the fall judgment in the fifth step is as follows:
SS1: measuring the angle between the Z line and the reference plane; when this angle is 90 degrees, the child is in an upright state;
SS2: when the increase or decrease of the angle between the Z line and the reference plane within a preset time length exceeds a preset value, judging that the child has fallen.
Judging the falling action better protects the safety of the child.
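A minimal sketch of the fall test, assuming the Z-line angle against the reference plane is sampled once per frame; the window length and angle threshold stand in for the patent's unspecified preset values.

    from typing import Sequence

    def is_falling(angles_deg: Sequence[float], fps: float,
                   window_s: float = 1.0, angle_thresh: float = 45.0) -> bool:
        """Fall: the Z-line angle changes by more than angle_thresh
        within any window of window_s seconds."""
        window = max(1, int(window_s * fps))
        return any(abs(angles_deg[i + window] - angles_deg[i]) > angle_thresh
                   for i in range(len(angles_deg) - window))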
The crawling state judgment process in the fifth step is specifically as follows:
measuring the angle between the Z line and the reference plane, and judging that the child is crawling when the included angle between the Z line and the reference plane is within a preset angle range and the Z line moves horizontally along the reference plane.
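A sketch of the crawling test under the same per-frame sampling assumption: the Z-line angle and the x-coordinate of its foot point are tracked over a short window; the angle band and minimum horizontal displacement are illustrative placeholders.

    from typing import Sequence

    def is_crawling(angles_deg: Sequence[float], base_xs: Sequence[float],
                    angle_lo: float = 0.0, angle_hi: float = 30.0,
                    min_move: float = 0.05) -> bool:
        """Crawling: Z line stays within a preset angle band while its base
        point translates horizontally along the reference plane."""
        in_band = all(angle_lo <= a <= angle_hi for a in angles_deg)
        moved = abs(base_xs[-1] - base_xs[0]) > min_move
        return in_band and moved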
The specific judgment process of hand biting in the fifth step is as follows:
a: extracting the K1 line and the M1 line, and measuring in real time the distances from the A2 end of the K1 line and the B2 end of the M1 line to the mouth line U;
b: marking this hand-to-mouth distance as Rt;
c: setting a judgment threshold Xt, and calculating the difference between the distance Rt and the judgment threshold Xt to obtain Rx_diff;
d: when the absolute value of Rx_diff is smaller than a preset value, judging that the child's behavior is hand biting.
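The hand-biting rule of steps a-d reduces to one comparison. In this sketch Rt is the measured hand-to-mouth distance and Xt the judgment threshold; the tolerance is a placeholder for the patent's preset value.

    def is_hand_biting(rt: float, xt: float, tolerance: float = 0.02) -> bool:
        """Hand biting: |Rx_diff| = |Rt - Xt| falls below a preset tolerance."""
        rx_diff = rt - xt
        return abs(rx_diff) < tolerance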
The prompt terminal comprises a smart phone and a smart tablet computer.
In summary, when the invention is used, the image acquisition terminal acquires images of the child's activities and the child activity image undergoes modeling processing: a clear whole-body photograph of the child is extracted from the activity image, matted out of the background, and marked as the reference model. Feature point determination is then performed on the processed child image model: the highest point of the reference model is marked as the head vertex point P; the vertices of the two shoulders are marked as points A1 and B1; the two arm bending points as points A2 and B2; the two middle-finger end points as points A3 and B3; the points where the two legs join the trunk as points A4 and B4; the two knees as points A5 and B5; and the two lowest foot points as points A6 and B6, points A1-A6 being on one side of the body and points B1-B6 on the other; the two mouth corner points on the face are marked as points W1 and W2. The identification reference plane, which is the ground in the child activity image, is then determined and the corresponding feature points are connected: point A1 with point A2 to obtain the K1 line, point A2 with point A3 to obtain the K2 line, point A4 with point A5 to obtain the K3 line, point B1 with point B2 to obtain the M1 line, point B2 with point B3 to obtain the M2 line, point B4 with point B5 to obtain the M3 line, and point W1 with point W2 to obtain the mouth line U; finally, a line segment perpendicular to the reference plane is drawn with point P as an endpoint and marked as the Z line. The reference plane and the judgment lines are analyzed to judge whether the child is climbing, running, falling, crawling or biting a hand, and after the judgment a child behavior message is sent to the prompt terminal.
Furthermore, the terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include at least one such feature. In the description of the present invention, the meaning of "plurality" means at least two, for example, two, three, etc., unless specifically defined otherwise.
In the description of the present specification, a description referring to terms "one embodiment," "some embodiments," "examples," "specific examples," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic representations of the above terms are not necessarily directed to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, the different embodiments or examples described in this specification and the features of the different embodiments or examples may be combined and combined by those skilled in the art without contradiction.
While embodiments of the present invention have been shown and described above, it will be understood that the above embodiments are illustrative and not to be construed as limiting the invention, and that variations, modifications, alternatives and variations may be made to the above embodiments by one of ordinary skill in the art within the scope of the invention.

Claims (2)

1. A behavior action recognition method based on child behavior characteristics, characterized by comprising the following steps:
step one: an image acquisition terminal acquires images of the child's activities and performs modeling processing on the child activity image; the specific process of the modeling processing is as follows: extracting a child activity image, extracting a clear whole-body photograph of the child from it, matting the whole-body photograph out of the background, and marking it as the reference model;
step two: performing feature point determination processing on the processed child image model to determine a plurality of feature points, the specific process of feature point determination being as follows: marking the highest point of the reference model as the head vertex point P; marking the vertices of the two shoulders in the reference model as points A1 and B1; marking the two arm bending points as points A2 and B2; marking the two middle-finger end points as points A3 and B3; marking the points where the two legs join the trunk as points A4 and B4; marking the two knees as points A5 and B5; and marking the two lowest foot points as points A6 and B6, wherein points A1, A2, A3, A4, A5 and A6 are on one side of the body and points B1, B2, B3, B4, B5 and B6 are on the other side; and marking the two mouth corner points on the face of the reference model as points W1 and W2, respectively;
step three: determining an identification reference plane and connecting the corresponding feature points to obtain the judgment feature lines, the reference plane being the ground in the child activity image; the connection processing of the corresponding feature points is as follows:
s1: connecting the point A1 with the point A2 to obtain a K1 line, connecting the point A2 with the point A3 to obtain a K2 line, and connecting the point A4 with the point A5 to obtain a K3 line;
s2: connecting the point B1 with the point B2 to obtain an M1 line, connecting the point B2 with the point B3 to obtain an M2 line, and connecting the point B4 with the point B5 to obtain an M3 line;
s3: connecting the point W1 with the point W2 to obtain a mouth line U; and, taking the point P as an endpoint, drawing a line segment perpendicular to the reference plane and marking it as the Z line;
step four: analyzing and processing the reference plane and the judgment lines to judge the child's behavior action;
step five: analyzing the reference plane and the judgment lines to judge whether the child is climbing, running, falling, crawling or biting a hand;
step six: after the child's behavior is judged, sending a child behavior message to a prompt terminal;
the climbing judgment process in the fifth step is specifically as follows:
step (1): marking the height of the reference plane as 0, and measuring the height difference between the bottom end of the K3 line and the reference plane to obtain a height difference Q1_diff;
step (2): measuring the height difference between the bottom end of the M3 line and the reference plane to obtain a height difference Q2_diff;
step (3): continuously collecting multiple height differences Q1_diff and Q2_diff;
step (4): when the height differences Q1_diff and Q2_diff both exceed a preset value, and the number of times they exceed the preset value exceeds a preset count, judging that the child's behavior is climbing;
the running action judgment process in the fifth step is as follows:
step 1: connecting the K2 line with the K3 line to obtain a combined line K_comb, and connecting the M2 line with the M3 line to obtain a combined line M_comb;
step 2: collecting the number of times the line K_comb and the line M_comb cross within a preset time period, marked as Ct;
step 3: marking the preset time length, in seconds, as T, and obtaining the average number of crossings per second, CT_avg, from the formula Ct/T = CT_avg;
step 4: when CT_avg is larger than a preset value, judging that the child is running;
the specific process of the fall judgment in the fifth step is as follows:
SS1: measuring the angle between the Z line and the reference plane; when this angle is 90 degrees, the child is in an upright state;
SS2: when the increase or decrease of the angle between the Z line and the reference plane within a preset time length exceeds a preset value, judging that the child has fallen;
the crawling state judgment process in the fifth step is specifically as follows:
measuring the angle between the Z line and the reference plane, and judging that the child is crawling when the included angle between the Z line and the reference plane is within a preset angle range and the Z line moves horizontally along the reference plane;
the specific judgment process of hand biting in the fifth step is as follows:
a: extracting the K1 line and the M1 line, and measuring in real time the distances from the A2 end of the K1 line and the B2 end of the M1 line to the mouth line U;
b: marking this hand-to-mouth distance as Rt;
c: setting a judgment threshold Xt, and calculating the difference between the distance Rt and the judgment threshold Xt to obtain Rx_diff;
d: when the absolute value of Rx_diff is smaller than a preset value, judging that the child's behavior is hand biting.
2. The behavior action recognition method based on child behavior characteristics according to claim 1, characterized in that the prompt terminal comprises a smart phone and a smart tablet computer.
CN202010670122.7A 2020-07-13 2020-07-13 Behavior action recognition method based on child behavior characteristics Active CN111814700B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010670122.7A CN111814700B (en) 2020-07-13 2020-07-13 Behavior action recognition method based on child behavior characteristics


Publications (2)

Publication Number Publication Date
CN111814700A CN111814700A (en) 2020-10-23
CN111814700B true CN111814700B (en) 2023-09-26

Family

ID=72843159

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010670122.7A Active CN111814700B (en) 2020-07-13 2020-07-13 Behavior action recognition method based on child behavior characteristics

Country Status (1)

Country Link
CN (1) CN111814700B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112580514B (en) * 2020-12-21 2023-07-25 西南交通大学 Baby crawling state identification method and system
CN113887388B (en) * 2021-09-29 2022-09-02 云南特可科技有限公司 Dynamic target recognition and human body behavior analysis system


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3547211B1 (en) * 2018-03-30 2021-11-17 Naver Corporation Methods for training a cnn and classifying an action performed by a subject in an inputted video using said cnn

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20110104195A (en) * 2010-03-16 2011-09-22 주식회사 유니온커뮤니티 Motion monitoring apparatus for elevator security and method thereof
TW202010556A (en) * 2018-09-06 2020-03-16 宏碁股份有限公司 Smart strap and method for defining human posture
CN109993063A (en) * 2019-03-05 2019-07-09 福建天晴数码有限公司 A kind of method and terminal identified to rescue personnel
CN110084196A (en) * 2019-04-26 2019-08-02 湖南科技学院 A kind of monitor video identifying system for cloud computing
CN110170159A (en) * 2019-06-27 2019-08-27 郭庆龙 A kind of human health's action movement monitoring system
CN111145533A (en) * 2019-12-24 2020-05-12 安徽虹湾信息技术有限公司 Pedestrian abnormal traffic behavior pattern recognition management and control system based on urban area
CN111368810A (en) * 2020-05-26 2020-07-03 西南交通大学 Sit-up detection system and method based on human body and skeleton key point identification

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Cognitive vision, its disorders and differential diagnosis in adults and children: knowing where and what things are; Dutton, GN; EYE; Vol. 17, No. 3; full text *
Research on remote somatosensory control technology for robots; Sun Xin; Wu Sijin; Electronics World, No. 12; full text *
Simulation of a visual monitoring system for high-risk personnel behavior in special industries; Lai Pinghua; Bulletin of Science and Technology; Vol. 30, No. 12; full text *

Also Published As

Publication number Publication date
CN111814700A (en) 2020-10-23

Similar Documents

Publication Publication Date Title
CN109919132B (en) Pedestrian falling identification method based on skeleton detection
CN103839040B (en) Gesture identification method and device based on depth image
CN111814700B (en) Behavior action recognition method based on child behavior characteristics
Han et al. Comparative study of motion features for similarity-based modeling and classification of unsafe actions in construction
CN109815907B (en) Sit-up posture detection and guidance method based on computer vision technology
CN110045823B (en) Motion guidance method and device based on motion capture
CN110490080B (en) Human body falling judgment method based on image
Wang et al. Human posture recognition based on images captured by the kinect sensor
CN110170159A (en) A kind of human health's action movement monitoring system
US9117138B2 (en) Method and apparatus for object positioning by using depth images
Ahmed et al. Gait recognition based on Kinect sensor
Xu et al. Elders’ fall detection based on biomechanical features using depth camera
JP2000251078A (en) Method and device for estimating three-dimensional posture of person, and method and device for estimating position of elbow of person
CN109886137A (en) Infant sleeping posture detection method, device and computer readable storage medium
CN112115827A (en) Falling behavior identification method based on human body posture dynamic characteristics
CN111460978A (en) Infant behavior monitoring system based on motion judgment sensor and deep learning technology and judgment method thereof
CN111709365A (en) Automatic human motion posture detection method based on convolutional neural network
TWI652039B (en) Simple detection method and system for sarcopenia
CN111144174A (en) System for identifying falling behavior of old people in video by using neural network and traditional algorithm
Yang et al. Human exercise posture analysis based on pose estimation
CN102156994B (en) Joint positioning method for single-view unmarked human motion tracking
TWI664550B (en) Golf player swing posture detection system
CN106737544A (en) Searching machine people based on various biosensors and 3D cameras
Bansal et al. Elderly people fall detection system using skeleton tracking and recognition
Maldonado et al. Feature selection to detect fallen pose using depth images

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20230825

Address after: Workstation 10-401-047, Creative Industry Park, 328 Xinghu Street, Suzhou Industrial Park, Suzhou Area, China (Jiangsu) Pilot Free Trade Zone, Suzhou City, Jiangsu Province, 215101 (Cluster Registration)

Applicant after: Suzhou Yuelin Information Technology Co.,Ltd.

Address before: Room 804, 8 / F, block J2C, innovation industrial park, 2800 innovation Avenue, high tech Zone, Hefei City, Anhui Province 230000

Applicant before: Anhui Lanchen Information Technology Co.,Ltd.

GR01 Patent grant