CN116229502A - Image-based fall behavior recognition method and device


Info

Publication number
CN116229502A
Authority
CN
China
Prior art keywords
human body
image
value
identified
point
Prior art date
Legal status
Pending
Application number
CN202211666081.XA
Other languages
Chinese (zh)
Inventor
钟浩
熊超
牛昕宇
Current Assignee
Shenzhen Corerain Technologies Co Ltd
Original Assignee
Shenzhen Corerain Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Corerain Technologies Co Ltd
Priority to CN202211666081.XA
Publication of CN116229502A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/20 Movements or behaviour, e.g. gesture recognition
    • G06V40/23 Recognition of whole body movements, e.g. for sport training

Landscapes

  • Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Image Analysis (AREA)

Abstract

The application relates to an image-based fall behavior recognition method and device. The method comprises the following steps: acquiring the human skeleton points and human contour in an image to be recognized and/or the sharpness of the image to be recognized; detecting, according to the skeleton points, the contour and/or the sharpness, whether the image contains a complete human body image whose sharpness is greater than a preset value; and, if so, identifying whether the human body in the image is in a fallen state according to the width of the human body image, the height of the human body image and/or the position coordinates of the skeleton points. By recognizing only images that pass this check, the method avoids the misidentification that arises when the human body in the image is occluded or the image sharpness is low, improving the accuracy of fall behavior recognition.

Description

Image-based fall behavior recognition method and device
Technical Field
The application relates to the technical field of image recognition, in particular to an image-based fall behavior recognition method and device.
Background
Recognition of personnel falls is an important technology in the field of safety supervision. In a monitored scene it analyzes, in real time, whether abnormal fall behavior is present in the video feed, and any detected fall can be reported promptly to the relevant management personnel for handling.
In a real monitoring scene, however, the direction of a fall is uncertain: a person may fall at any angle. Prior-art methods that judge the fall state by computing positional relationships between body parts therefore struggle to maintain recognition accuracy. Moreover, monitored scenes are complex and changeable; occlusion between people, occlusion of people by objects, self-occlusion, and lighting that degrades the imaging quality of the video all reduce the final recognition accuracy.
Disclosure of Invention
In view of the above, the present application provides an image-based fall behavior recognition method and device that aim to improve the accuracy of fall behavior recognition.
In a first aspect, the present application provides an image-based fall behavior recognition method, the method comprising:
acquiring the human skeleton points and the human contour in an image to be recognized, and/or the sharpness of the image to be recognized;
judging, according to the human skeleton points, the human contour and/or the sharpness of the image to be recognized, whether the image to be recognized contains a complete human body image whose sharpness is greater than a preset value;
and if so, identifying whether the human body in the image to be recognized is in a fallen state according to the width of the human body image, the height of the human body image and/or the position coordinates of the human skeleton points.
In a second aspect, the present application provides an electronic device comprising a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory communicate with one another via the communication bus;
the memory is configured to store a computer program;
and the processor is configured, when executing the program stored in the memory, to implement the image-based fall behavior recognition method of any embodiment of the first aspect.
Compared with the prior art, the technical solution provided by the embodiments of the present application has the following advantages:
according to the human skeleton point and the human contour in the image to be identified and/or the definition of the image to be identified, whether the human image with the complete definition larger than the preset value exists in the image to be identified or not is detected according to the human skeleton point and the human contour and/or the definition of the image to be identified, and when the human image with the complete definition larger than the preset value exists in the image to be identified, whether the human body in the image to be identified is in a falling state or not is identified, so that the situation that the image is mistakenly identified when the human body in the image is shielded or the image definition is lower or the like can be avoided. Because the actual scene generally has factors such as shielding between people, shielding of objects to people, shielding of people themselves, lower image definition and the like, and because the width of the human body image is larger than the height and the condition of human body skeleton points when the height is larger than the width is different, when the application detects that the human body image which is to be identified and has complete definition larger than a preset value exists in the image to be identified, the identification accuracy of the falling behavior can be improved according to the width of the human body image, the height of the human body image and/or the position coordinates of the human body skeleton points, whether the human body in the image to be identified is in a falling state is identified, and the width of the human body image and the height of the human body image are taken as consideration factors of whether the human body is in the falling state or not.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and together with the description, serve to explain the principles of the application.
To describe the technical solutions in the embodiments of the present application or in the prior art more clearly, the drawings required for describing the embodiments or the prior art are briefly introduced below. It is obvious that a person skilled in the art may derive other drawings from these drawings without inventive effort.
FIG. 1 is a flowchart of a preferred embodiment of the image-based fall behavior recognition method of the present application;
FIG. 2 is a schematic diagram of a human skeleton point in an embodiment of the present application;
FIG. 3 is a schematic illustration of a human body contour in an embodiment of the present application;
FIG. 4 is a schematic view of a human body image with a width greater than a height according to an embodiment of the present application;
FIG. 5 is a schematic view of a human body image of an embodiment of the present application having a width less than or equal to a height;
FIG. 6 is a schematic block diagram of a preferred embodiment of the image-based fall behavior recognition device of the present application;
FIG. 7 is a schematic diagram of a preferred embodiment of the electronic device of the present application;
The implementation, functional characteristics and advantages of the present application are further described below with reference to the embodiments and the accompanying drawings.
Detailed Description
To make the objects, technical solutions and advantages of the present application clearer, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here serve only to illustrate the application and are not intended to limit it. All other embodiments obtained by a person of ordinary skill in the art from the present disclosure without inventive effort fall within the scope of protection of the present application.
It should be noted that the terms "first", "second" and the like are used herein for descriptive purposes only and are not to be construed as indicating or implying relative importance or an implicit indication of the number of technical features. A feature qualified by "first" or "second" may thus explicitly or implicitly include at least one such feature. The technical solutions of the embodiments may be combined with one another, provided that a person skilled in the art can realize the combination; where a combination is contradictory or cannot be realized, it should be regarded as nonexistent and outside the scope of protection of the present application.
The application provides an image-based fall behavior recognition method. Referring to fig. 1, a flowchart of an embodiment of the method is shown. The method may be performed by an electronic device, which may be implemented in software and/or hardware. The image-based fall behavior recognition method comprises the following steps:
step S10: acquiring the human skeleton points and the human contour in an image to be recognized, and/or the sharpness of the image to be recognized;
step S20: judging, according to the human skeleton points, the human contour and/or the sharpness of the image to be recognized, whether the image to be recognized contains a complete human body image whose sharpness is greater than a preset value;
step S30: if so, identifying whether the human body in the image to be recognized is in a fallen state according to the width of the human body image, the height of the human body image and/or the position coordinates of the human skeleton points.
In this embodiment, the image to be recognized may be an image captured in real time by a monitoring camera in the monitored scene; for example, a video stream is obtained from the camera and decoded into video frame images, each of which serves as an image to be recognized. The scheme is described here using a real-time camera image as an example, but the application scenario is not limited to this: the image to be recognized may also be a still image captured offline, an image stored in advance in a database, and so on. It should be noted that if no human body image is present in the current image, that is, if no human skeleton points or human contour can be obtained from it, the next frame is taken as the image to be recognized and it is judged whether a human body image is present in that frame; if so, the human skeleton points, the human contour and/or the sharpness of that image are acquired.
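As an illustration, the frame acquisition and decoding described above can be sketched as follows (a minimal sketch assuming an OpenCV-readable stream; the stream URL is hypothetical):

    import cv2

    # Hypothetical stream address; any source cv2.VideoCapture accepts works here.
    cap = cv2.VideoCapture("rtsp://camera.example/stream")
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # "frame" is the image to be recognized; when no usable human body is
        # found in it, the loop simply advances to the next frame.
    cap.release()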
The human skeleton points in the image to be recognized may be obtained with a human pose estimation algorithm (for example, the OpenPose algorithm); the human contour may be obtained after converting the image to grayscale; and the sharpness of the image to be recognized may be obtained with the Laplacian gradient method.
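One common reading of the Laplacian gradient method is the variance of the Laplacian response; the sketch below assumes OpenCV and is illustrative only:

    import cv2

    def laplacian_sharpness(image_bgr) -> float:
        # Higher variance of the Laplacian indicates sharper edges, i.e. a sharper image.
        gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
        return cv2.Laplacian(gray, cv2.CV_64F).var()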
Because monitored scenes are complex, they commonly contain occlusion between people, occlusion of people by objects, self-occlusion, changing illumination and poor imaging quality, all of which affect the final recognition accuracy. It is therefore necessary to detect, from the human skeleton points, the human contour and/or the sharpness of the image to be recognized, whether the image contains a complete human body image whose sharpness is greater than a preset value. If the skeleton points are incomplete, or the contour is incomplete, or the sharpness is less than or equal to the preset value (e.g. 0.5), no such image is present; if the skeleton points are complete, the contour is complete and the sharpness is greater than the preset value, such an image is present. The preset value can be modified in a JSON configuration file rather than in code, which facilitates real-time adjustment in practical deployments.
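The text does not specify the layout of the configuration file; a hypothetical example with an assumed file name and key:

    import json

    # config.json might contain, e.g., {"sharpness_threshold": 0.5};
    # both the file name and the key are assumptions for illustration.
    with open("config.json") as f:
        SHARPNESS_THRESHOLD = json.load(f)["sharpness_threshold"]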
Referring to fig. 2, a schematic diagram of the human skeleton points in this embodiment of the present application is shown. The skeleton points comprise position information for 17 skeletal keypoints: nose (point 0), left eye (point 1), right eye (point 2), left ear (point 3), right ear (point 4), left shoulder (point 5), right shoulder (point 6), left elbow (point 7), right elbow (point 8), left wrist (point 9), right wrist (point 10), left hip (point 11), right hip (point 12), left knee (point 13), right knee (point 14), left ankle (point 15) and right ankle (point 16). Referring to fig. 3, a schematic diagram of the human contour in an embodiment of the present application is shown: the contour consists of semantic segmentation information for three independent parts, the head, the upper body and the lower body. The sharpness of the image to be recognized reflects the imaging quality of the human body in the image (e.g. whether it is blurred, too dark, or clearly discernible to the naked eye). When no complete human body image with sharpness greater than the preset value is detected, the current image is filtered out and the next frame is taken as the image to be recognized. Filtering out incomplete images and images whose sharpness is less than or equal to the preset value, and retaining only clear images in which the complete human body is visible, effectively improves the recognition rate of fall behavior.
For example, when all 17 keypoints shown in fig. 2 are present, the contour contains the segmentation information of all three parts (head, upper body and lower body), and the sharpness of the image is greater than the preset value, the image to be recognized is considered to contain a complete human body image whose sharpness is greater than the preset value. If the image contains only the upper body of a person, the human body image in it is incomplete.
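For reference, the 17-point layout of fig. 2 (which matches the common COCO ordering) can be written as an index map; the names below are assumptions chosen for readability:

    KEYPOINTS = {
        "nose": 0, "left_eye": 1, "right_eye": 2, "left_ear": 3, "right_ear": 4,
        "left_shoulder": 5, "right_shoulder": 6, "left_elbow": 7, "right_elbow": 8,
        "left_wrist": 9, "right_wrist": 10, "left_hip": 11, "right_hip": 12,
        "left_knee": 13, "right_knee": 14, "left_ankle": 15, "right_ankle": 16,
    }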
In a real monitoring scene a person may fall at any of several angles. Fig. 4 shows a human body image whose width is greater than its height, and fig. 5 shows one whose width is less than or equal to its height. When a complete human body image with sharpness greater than the preset value is detected, whether the human body is in a fallen state is identified according to the width of the human body image, the height of the human body image and/or the position coordinates of the skeleton points. For example, when the width of the human body image is greater than its height (the case of fig. 4), the fall state is identified from the skeleton-point position coordinates; when the width is less than or equal to the height (the case of fig. 5), a deep learning model is used instead. Selecting a different recognition method according to the relation between width and height improves recognition accuracy and reduces the misidentification rate.
Specifically, identifying whether the human body in the image to be recognized is in a fallen state according to the width of the human body image, the height of the human body image and/or the position coordinates of the human skeleton points comprises:
judging whether the width of the human body image is greater than its height;
if the width is greater than the height, identifying whether the human body is in a fallen state according to the ratio of the width of the human body image to its height and the position coordinates of the human skeleton points;
if the width is less than or equal to the height, identifying whether the human body is in a fallen state according to a pre-trained state detection model.
When the width of the human body image is greater than its height, the skeleton-point position coordinates of a standing body and of a fallen body differ markedly, so the fall state can be identified from the width-to-height ratio together with those coordinates. When the width is less than or equal to the height, however, the skeleton-point coordinates of a standing body and a fallen body are very close, and the fall state cannot be judged from them; instead, a pre-trained state detection model is used. The state detection model is a binary classification model and may be obtained by training a resnet18 network. During training, images of people in a fallen state serve as labeled positive samples covering falls at various angles and directions, and images of people not in a fallen state serve as labeled negative samples covering squatting, sitting, normal standing and similar postures. At inference time the model outputs a score probability for each of the two state classes, and the fall-state score decides the result; for example, a fall score greater than 0.6 means the human body in the image is judged to be in a fallen state.
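A hedged inference sketch for this binary state detection model, assuming torchvision's resnet18, a hypothetical checkpoint path, and class index 1 as the fall class:

    import torch
    from torchvision.models import resnet18

    model = resnet18(num_classes=2)            # two classes: fallen / not fallen
    model.load_state_dict(torch.load("fall_state_resnet18.pth"))  # assumed path
    model.eval()

    def is_fallen_by_model(person_tensor: torch.Tensor, threshold: float = 0.6) -> bool:
        # person_tensor: a normalized (3, H, W) crop of the human body image.
        with torch.no_grad():
            probs = torch.softmax(model(person_tensor.unsqueeze(0)), dim=1)[0]
        return probs[1].item() > threshold     # 0.6 per the example above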
Since in a real scene a fallen state usually lasts for some time, after the fall state of the current image has been identified, each frame within a period of time corresponding to that image can also be identified, and the final judgment is made from the recognition results of these multiple frames. For example, the 12 frames in the second following the image to be recognized are taken in turn as images to be recognized and identified in sequence; when at least 10 of the 12 frames indicate a fallen state, the human body is judged to be in a fallen state. This improves the accuracy of fall-state recognition.
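The multi-frame confirmation in the example above (12 frames, at least 10 positive) amounts to a simple vote; a minimal sketch:

    def confirm_fall(per_frame_results: list[bool], window: int = 12,
                     min_positive: int = 10) -> bool:
        # per_frame_results holds the single-frame fall decisions in time order.
        recent = per_frame_results[-window:]
        return len(recent) == window and sum(recent) >= min_positive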
Whether a complete human body image with sharpness greater than a preset value is present in the image to be recognized is detected from the human skeleton points and human contour in that image and/or its sharpness, and the fall state is identified only when such an image is found, which avoids misidentification when the human body is occluded or the image sharpness is low. Real scenes commonly contain occlusion between people, occlusion of people by objects, self-occlusion and low image sharpness, and the configuration of the skeleton points differs between a human body image that is wider than it is tall and one that is taller than it is wide. Therefore, once a complete, sufficiently sharp human body image has been detected, the fall state is identified according to the width of the human body image, the height of the human body image and/or the position coordinates of the skeleton points; taking the width and height of the human body image into account in this way improves the recognition accuracy.
In one embodiment, acquiring the human skeleton points, the human contour and/or the sharpness of the image to be recognized comprises:
detecting, with a pre-constructed human detection model, whether human body information is present in the image to be recognized;
and, if human body information is present, acquiring the human skeleton points and the human contour in the image and/or the sharpness of the image according to a pre-trained fusion model.
The human detection model may be obtained by training the lightweight yolov5-s model. Training with a lightweight model reduces the computational load of the system and preserves the model's running speed when deployed on edge devices, thereby meeting practical recognition requirements. The human detection model detects whether a human body is present in the image to be recognized; if no human body information is present, the next frame is taken as the image to be recognized and detection continues. If human body information is present, the human skeleton points, the human contour and/or the sharpness of the image are obtained with the pre-trained fusion model.
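A hedged person-detection sketch; the text states only that the detector is trained from yolov5-s, so the public yolov5s weights and a 0.5 confidence cutoff are stand-ins:

    import torch

    detector = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)

    def has_person(frame, conf: float = 0.5) -> bool:
        det = detector(frame).xyxy[0]  # columns: x1, y1, x2, y2, confidence, class
        return bool(((det[:, 5] == 0) & (det[:, 4] > conf)).any())  # COCO class 0 = person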
Further, obtaining the human skeleton points, the human contour and/or the sharpness of the image to be recognized according to the pre-trained fusion model comprises:
obtaining the human skeleton points in the image with the skeleton-point detection network in the fusion model;
obtaining the human contour in the image with the contour detection network in the fusion model;
and obtaining the sharpness of the image with the sharpness detection network in the fusion model.
The pre-trained fusion model fuses three branch networks: a skeleton-point detection network that outputs the skeleton-point information, a contour detection network that outputs the contour information, and a sharpness detection network that outputs the sharpness of the image to be recognized. The branches for skeleton points and contour are built on a High-Resolution Net (HRNet), which is widely used in image segmentation and skeleton-point detection tasks, while a 10-layer convolutional neural network serves as the branch that outputs the image-quality information. The model parameters of the three branches are fused to obtain a single model that outputs all three kinds of information. Because the original HRNet has too many layers and would slow down the overall recognition speed of the fusion model, model compression (network channel pruning) may be applied to trim the original HRNet structure, reducing the parameter count, model size and computational load so that the model runs fast enough on edge devices to meet practical recognition requirements. Since the three branches cannot be trained simultaneously, they are trained sequentially: the skeleton-point detection network is trained alone first and its parameters are then frozen; the contour detection network is trained next, after which the parameters of both trained branches are frozen; finally the sharpness detection network is trained alone. On completion, the fusion model with its three complete branch networks is obtained. After training, the image to be recognized is fed into the fusion model to obtain the skeleton points, the contour and/or the sharpness.
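The freezing step of this staged training can be sketched as follows (a minimal sketch; the training order comes from the text, everything else is an assumption):

    import torch.nn as nn

    def freeze(branch: nn.Module) -> None:
        # Freeze a trained branch so later stages leave its weights untouched.
        for param in branch.parameters():
            param.requires_grad = False
        branch.eval()

    # Assumed sequence per the text: train the skeleton-point branch, freeze it;
    # train the contour branch, freeze both; finally train the sharpness branch.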
Further, judging, according to the human skeleton points, the human contour and/or the sharpness of the image to be recognized, whether the image contains a complete human body image whose sharpness is greater than a preset value comprises:
judging whether the human skeleton points in the image are complete;
if so, judging whether the human contour in the image is complete;
if so, judging whether the sharpness of the image is greater than the preset value;
and if so, concluding that the image contains a complete human body image whose sharpness is greater than the preset value.
Whether the human skeleton points in the image are complete, that is, whether the human body is occluded, is judged from the confidence scores of several keypoints. Six points serve as the basis for the judgment: left shoulder, right shoulder, left hip, right hip, left knee and right knee. When the confidence score of any one of these keypoints is below a preset value (e.g. 0.5), the body is probably occluded, i.e. the skeleton points in the image are incomplete.
Whether the human contour in the image is complete can be judged from whether the contour information of all three parts (head, upper body and lower body) can be fully segmented from the image, that is, from the area covered by the contour.
The purpose of judging the sharpness of the image is to detect whether the human body in it is blurred or indiscernible to the naked eye. During training, the sharpness branch classifies the human body in the image into three categories: normal, blurred and too dark. At judgment time, the score probability of the normal category is compared against a preset value (e.g. 0.5): if it exceeds the preset value, the body in the image is clear; otherwise the body is blurred or too dark. When the skeleton points are complete, the contour is complete and the sharpness of the image is greater than the preset value, the image contains a complete human body image whose sharpness is greater than the preset value.
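Taken together, the three checks form a quality gate; a hedged reading, with all container and key names assumed for illustration:

    CHECK_POINTS = [5, 6, 11, 12, 13, 14]  # shoulders, hips, knees, per the index map above

    def passes_quality_gate(kp_scores, contour_masks, sharpness_probs,
                            conf_th: float = 0.5, normal_th: float = 0.5) -> bool:
        skeleton_ok = all(kp_scores[i] > conf_th for i in CHECK_POINTS)
        contour_ok = all(contour_masks[part].any()
                         for part in ("head", "upper_body", "lower_body"))
        sharp_ok = sharpness_probs["normal"] > normal_th  # vs. "blurred" / "too_dark"
        return skeleton_ok and contour_ok and sharp_ok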
Further, identifying whether the human body in the image to be recognized is in a fallen state according to the ratio of the width of the human body image to its height and the position coordinates of the human skeleton points comprises:
calculating, from the position coordinates of the skeleton points, a first ordinate value of the center point between the left and right hip points and a second ordinate value of the center point between the left and right knee points;
calculating the upper-body height and upper-body width from the coordinate information of the left and right shoulder points and of the left and right hip points;
calculating, from the position coordinates of the skeleton points, a first distance in the abscissa direction and a second distance in the ordinate direction from the head to the ankles;
and identifying whether the human body is in a fallen state from the relation between the upper-body height and the upper-body width, the relation between the width-to-height ratio of the human body image and a preset threshold, the relation between the first and second ordinate values, and the relation between the first and second distances.
Assume the skeleton points are denoted as follows: left shoulder point p[lshoulder], right shoulder point p[rshoulder], left hip point p[lhip], right hip point p[rhip], left ankle point p[lankle], right ankle point p[rankle], left knee point p[lknee], right knee point p[rknee], nose point p[nose]. p[·].x and p[·].y denote the abscissa and ordinate of a point in a coordinate system whose origin is the top-left corner of the image to be recognized; for example, p[lshoulder].x is the abscissa of the left shoulder point, p[lshoulder].y its ordinate, p[rshoulder].x the abscissa of the right shoulder point, and p[rshoulder].y its ordinate.
The first ordinate value of the center point between the left and right hip points, computed from the skeleton-point position coordinates, is denoted hip_c_y, where hip_c_y = (p[lhip].y + p[rhip].y) / 2;
the second ordinate value of the center point between the left and right knee points is denoted knee_c_y, where knee_c_y = (p[lknee].y + p[rknee].y) / 2.
From the coordinate information of the left and right shoulder points and of the left and right hip points, the upper-body height and upper-body width can be calculated; for example, the distance between the left and right shoulder points gives the upper-body width, and the distance between the shoulders and the hips gives the upper-body height.
By calculating the coordinates of the center point between the left and right ankle points and obtaining the coordinates of the nose point, the first distance in the abscissa direction and the second distance in the ordinate direction from the head to the ankles can be calculated.
Whether the human body in the image is in a fallen state is then identified from the relation between the upper-body height and width, the relation between the width-to-height ratio of the human body image and the preset threshold, the relation between the first and second ordinate values, and the relation between the first and second distances.
Identifying whether the human body is in a fallen state from these relations comprises:
if the upper-body height is greater than the upper-body width, the first ordinate value is greater than or equal to the second ordinate value, the first distance is less than the second distance, and the ratio is greater than the preset threshold, the human body in the human body image is in a fallen state;
otherwise, the human body in the human body image is in a non-fallen state.
The preset threshold is a value greater than 1; a ratio above it indicates that the width of the human body image exceeds its height. If the upper-body height is less than or equal to the upper-body width, or the first ordinate value is less than the second, or the first distance is greater than or equal to the second, the human body is in a non-fallen state. By jointly considering the upper-body height and width, the head-to-ankle distances along the abscissa and the ordinate, the ordinates of the hip and knee center points, and the width-to-height ratio of the human body image, whether the human body is in a fallen state can be judged more accurately.
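The rule can be transcribed directly; in the sketch below box_w and box_h are assumed to denote the width and height of the human body image, and the remaining arguments are the quantities defined in this section:

    def is_fallen_by_rule(u_h: float, u_w: float, hip_c_y: float, knee_c_y: float,
                          head_ankle_dis_x: float, head_ankle_dis_y: float,
                          box_w: float, box_h: float, ratio_th: float) -> bool:
        # All four conditions from the text must hold simultaneously.
        return (u_h > u_w
                and hip_c_y >= knee_c_y
                and head_ankle_dis_x < head_ankle_dis_y
                and box_w / box_h > ratio_th)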
Further, calculating the upper-body height and upper-body width of the human body from the coordinate information of the left and right shoulder points and of the left and right hip points comprises:
calculating a first difference between the abscissa of the left shoulder point and that of the right shoulder point, and a second difference between the abscissa of the left hip point and that of the right hip point;
calculating a third difference between the ordinate of the left hip point and that of the left shoulder point, and a fourth difference between the ordinate of the right hip point and that of the right shoulder point;
taking the mean of the absolute values of the first and second differences as the upper-body width;
and taking the mean of the absolute values of the third and fourth differences as the upper-body height.
p[lshoulder].x denotes the abscissa of the left shoulder point and p[lshoulder].y its ordinate; p[rshoulder].x and p[rshoulder].y those of the right shoulder point; p[lhip].x and p[lhip].y those of the left hip point; and p[rhip].x and p[rhip].y those of the right hip point.
first difference = p[lshoulder].x - p[rshoulder].x;
second difference = p[lhip].x - p[rhip].x;
third difference = p[lhip].y - p[lshoulder].y;
fourth difference = p[rhip].y - p[rshoulder].y;
upper-body width:
u_w = (abs(p[lshoulder].x - p[rshoulder].x) + abs(p[lhip].x - p[rhip].x)) / 2;
upper-body height:
u_h = (abs(p[lhip].y - p[lshoulder].y) + abs(p[rhip].y - p[rshoulder].y)) / 2, where abs denotes the absolute value.
Further, calculating, from the position coordinates of the skeleton points, the first distance in the abscissa direction and the second distance in the ordinate direction from the head to the ankles comprises:
calculating, from the position coordinates of the skeleton points, a third abscissa value and a third ordinate value of the center point between the left and right ankle points;
obtaining, from the position coordinates of the skeleton points, a fourth abscissa value and a fourth ordinate value of the nose point;
taking the absolute value of the fourth abscissa value minus the third abscissa value as the first distance;
and taking the absolute value of the fourth ordinate value minus the third ordinate value as the second distance.
The abscissa of the center point between the left and right ankle points is the third abscissa value ankle_c_x = (p[lankle].x + p[rankle].x) / 2, and its ordinate is the third ordinate value ankle_c_y = (p[lankle].y + p[rankle].y) / 2. The abscissa of the nose point, taken directly from the skeleton-point coordinates, is the fourth abscissa value p[nose].x, and its ordinate is the fourth ordinate value p[nose].y. The absolute value of the fourth abscissa value minus the third is the head-to-ankle distance in the abscissa direction, denoted head_ankle_dis_x, i.e. head_ankle_dis_x = abs(p[nose].x - ankle_c_x);
the absolute value of the fourth ordinate value minus the third is the head-to-ankle distance in the ordinate direction, denoted head_ankle_dis_y, i.e. head_ankle_dis_y = abs(p[nose].y - ankle_c_y), where abs denotes the absolute value.
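The quantities of this section can be collected in one helper; a minimal sketch in which p maps keypoint names (per the index map earlier) to (x, y) pairs:

    def body_geometry(p: dict) -> dict:
        hip_c_y = (p["left_hip"][1] + p["right_hip"][1]) / 2
        knee_c_y = (p["left_knee"][1] + p["right_knee"][1]) / 2
        u_w = (abs(p["left_shoulder"][0] - p["right_shoulder"][0])
               + abs(p["left_hip"][0] - p["right_hip"][0])) / 2
        u_h = (abs(p["left_hip"][1] - p["left_shoulder"][1])
               + abs(p["right_hip"][1] - p["right_shoulder"][1])) / 2
        ankle_c_x = (p["left_ankle"][0] + p["right_ankle"][0]) / 2
        ankle_c_y = (p["left_ankle"][1] + p["right_ankle"][1]) / 2
        return {
            "hip_c_y": hip_c_y, "knee_c_y": knee_c_y, "u_w": u_w, "u_h": u_h,
            "head_ankle_dis_x": abs(p["nose"][0] - ankle_c_x),
            "head_ankle_dis_y": abs(p["nose"][1] - ankle_c_y),
        }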
Referring to fig. 6, a functional block diagram of the image-based fall behavior recognition device 100 according to the present application is shown.
The image-based fall behavior recognition apparatus 100 described here may be installed in an electronic device. Depending on the functions implemented, it may include an acquisition module 110, a detection module 120 and a recognition module 130. The modules, which may also be called units, are series of computer program segments that are stored in the memory of the electronic device, can be executed by its processor, and perform fixed functions.
In the present embodiment, the functions concerning the respective modules/units are as follows:
the acquisition module 110 is configured to acquire the human skeleton points and the human contour in an image to be recognized, and/or the sharpness of the image;
the detection module 120 is configured to judge, according to the human skeleton points, the human contour and/or the sharpness of the image, whether the image contains a complete human body image whose sharpness is greater than a preset value;
the recognition module 130 is configured to identify, when such an image is present, whether the human body in the image is in a fallen state according to the width of the human body image, the height of the human body image and/or the position coordinates of the human skeleton points.
In one embodiment, acquiring the human skeleton points, the human contour and/or the sharpness of the image to be recognized comprises:
detecting, with a pre-constructed human detection model, whether human body information is present in the image to be recognized;
and, if human body information is present, acquiring the human skeleton points and the human contour in the image and/or the sharpness of the image according to a pre-trained fusion model.
In one embodiment, obtaining the human skeleton points, the human contour and/or the sharpness of the image to be recognized according to the pre-trained fusion model comprises:
obtaining the human skeleton points in the image with the skeleton-point detection network in the fusion model;
obtaining the human contour in the image with the contour detection network in the fusion model;
and obtaining the sharpness of the image with the sharpness detection network in the fusion model.
In one embodiment, judging, according to the human skeleton points, the human contour and/or the sharpness of the image to be recognized, whether the image contains a complete human body image whose sharpness is greater than a preset value comprises:
judging whether the human skeleton points in the image are complete;
if so, judging whether the human contour in the image is complete;
if so, judging whether the sharpness of the image is greater than the preset value;
and if so, concluding that the image contains a complete human body image whose sharpness is greater than the preset value.
In one embodiment, identifying whether the human body in the image to be recognized is in a fallen state according to the width of the human body image, the height of the human body image and/or the position coordinates of the human skeleton points comprises:
judging whether the width of the human body image is greater than its height;
if the width is greater than the height, identifying whether the human body is in a fallen state according to the ratio of the width of the human body image to its height and the position coordinates of the human skeleton points;
if the width is less than or equal to the height, identifying whether the human body is in a fallen state according to a pre-trained state detection model.
In one embodiment, identifying whether the human body in the image to be recognized is in a fallen state according to the ratio of the width of the human body image to its height and the position coordinates of the human skeleton points comprises:
calculating, from the position coordinates of the skeleton points, a first ordinate value of the center point between the left and right hip points and a second ordinate value of the center point between the left and right knee points;
calculating the upper-body height and upper-body width from the coordinate information of the left and right shoulder points and of the left and right hip points;
calculating, from the position coordinates of the skeleton points, a first distance in the abscissa direction and a second distance in the ordinate direction from the head to the ankles;
and identifying whether the human body is in a fallen state from the relation between the upper-body height and the upper-body width, the relation between the width-to-height ratio of the human body image and a preset threshold, the relation between the first and second ordinate values, and the relation between the first and second distances.
In one embodiment, calculating the upper-body height and upper-body width of the human body from the coordinate information of the left and right shoulder points and of the left and right hip points comprises:
calculating a first difference between the abscissa of the left shoulder point and that of the right shoulder point, and a second difference between the abscissa of the left hip point and that of the right hip point;
calculating a third difference between the ordinate of the left hip point and that of the left shoulder point, and a fourth difference between the ordinate of the right hip point and that of the right shoulder point;
taking the mean of the absolute values of the first and second differences as the upper-body width;
and taking the mean of the absolute values of the third and fourth differences as the upper-body height.
In one embodiment, calculating, from the position coordinates of the skeleton points, the first distance in the abscissa direction and the second distance in the ordinate direction from the head to the ankles comprises:
calculating, from the position coordinates of the skeleton points, a third abscissa value and a third ordinate value of the center point between the left and right ankle points;
obtaining, from the position coordinates of the skeleton points, a fourth abscissa value and a fourth ordinate value of the nose point;
taking the absolute value of the fourth abscissa value minus the third abscissa value as the first distance;
and taking the absolute value of the fourth ordinate value minus the third ordinate value as the second distance.
In one embodiment, identifying whether the human body in the image to be recognized is in a fallen state from the relation between the upper-body height and the upper-body width, the relation between the width-to-height ratio of the human body image and a preset threshold, the relation between the first and second ordinate values, and the relation between the first and second distances comprises:
if the upper-body height is greater than the upper-body width, the first ordinate value is greater than or equal to the second ordinate value, the first distance is less than the second distance, and the ratio is greater than the preset threshold, the human body in the human body image is in a fallen state;
otherwise, the human body in the human body image is in a non-fallen state.
Referring to fig. 7, a schematic diagram of a preferred embodiment of an electronic device 1 according to the present application is shown.
The electronic device 1 includes, but is not limited to: a memory 11, a processor 12, a display 13 and a communication interface 14. The electronic device 1 may be connected to a network via the communication interface 14. The network may be wireless or wired, such as an intranet, the Internet, a Global System for Mobile communications (GSM) network, a Wideband Code Division Multiple Access (WCDMA) network, a 4G network, a 5G network, Bluetooth, Wi-Fi, or a telephone network.
The memory 11 includes at least one type of readable storage medium, including flash memory, hard disk, multimedia card, card-type memory (e.g. SD or DX memory), random access memory (RAM), static random access memory (SRAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), magnetic memory, magnetic disk, optical disc, etc. In some embodiments, the memory 11 may be an internal storage unit of the electronic device 1, such as a hard disk or memory of the device. In other embodiments, it may be an external storage device of the electronic device 1, for example a plug-in hard disk, Smart Media Card (SMC), Secure Digital (SD) card or Flash Card equipped on the device. Of course, the memory 11 may also comprise both an internal storage unit and an external storage device. In this embodiment, the memory 11 is generally used to store the operating system and the various computer programs installed in the electronic device 1, such as the program code of the image-based fall behavior recognition program 10, and may further be used to temporarily store data that has been or is to be output.
The processor 12 may in some embodiments be a central processing unit (CPU), controller, microcontroller, microprocessor or other data processing chip. It is generally used to control the overall operation of the electronic device 1, for example to perform control and processing related to data exchange or communication. In this embodiment, the processor 12 executes program code stored in the memory 11 or processes data, for example the program code of the image-based fall behavior recognition program 10.
The display 13 may be referred to as a display screen or a display unit. The display 13 may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an Organic Light-Emitting Diode (OLED) touch device, or the like in some embodiments. The display 13 is used for displaying information processed in the electronic device 1 and for displaying a visual work interface.
The communication interface 14 may optionally comprise a standard wired interface or a wireless interface (such as a Wi-Fi interface) and is typically used to establish a communication connection between the electronic device 1 and other electronic devices.
Fig. 7 shows only the electronic device 1 with components 11-14 and the image-based fall behavior recognition program 10; it should be understood that not all of the illustrated components are required, and more or fewer components may be implemented instead.
In the above embodiment, the processor 12 may implement the following steps when executing the image-based fall behavior recognition program 10 stored in the memory 11:
acquiring human body skeleton points, a human body contour and/or the definition of an image to be identified;
judging whether a complete human body image with the definition larger than a preset value exists in the image to be identified according to the human body skeleton points, the human body contour and/or the definition of the image to be identified;
if so, identifying whether the human body in the image to be identified is in a falling state according to the width of the human body image, the height of the human body image and/or the position coordinates of the human body skeleton points.
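For illustration only, the three steps above can be sketched in Python as follows. The function and parameter names are assumptions of this sketch, not identifiers from the disclosure, and the final geometric decision is abbreviated here (the full rule appears in claims 5 to 9):

```python
# Illustrative sketch only; names and data layout are assumptions,
# not identifiers from the disclosure.
from typing import Dict, Optional, Tuple

Point = Tuple[float, float]

def recognize_fall(skeleton_points: Optional[Dict[str, Point]],
                   contour_complete: bool,
                   definition: float,
                   body_width: float,
                   body_height: float,
                   preset_value: float = 0.5) -> Optional[bool]:
    """Return True/False for a fall, or None when no usable body image exists."""
    # Steps 1-2: require complete skeleton points, a complete contour,
    # and a definition (sharpness) score above the preset value.
    if not skeleton_points or not contour_complete:
        return None
    if definition <= preset_value:
        return None
    # Step 3: decide from the body-image geometry; the full rule of
    # claims 5-9 is abbreviated here to the width/height comparison.
    return body_width > body_height
```

Returning None when the checks fail mirrors the document's point that occluded or blurry bodies should be skipped rather than misclassified.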
The storage device storing the image-based fall behavior recognition program 10 may be the memory 11 of the electronic device 1, or may be another storage device communicatively connected to the electronic device 1.
For a detailed description of the above steps, please refer to the functional block diagram of Fig. 6 for an embodiment of the image-based fall behavior recognition device 100 and to the flowchart of Fig. 1 for an embodiment of the image-based fall behavior recognition method.
Furthermore, the embodiments of the present application also propose a computer-readable storage medium, which may be non-volatile or volatile. The computer-readable storage medium may be any one or any combination of a hard disk, a multimedia card, an SD card, a flash card, an SMC, a read-only memory (ROM), an erasable programmable read-only memory (EPROM), a portable compact disc read-only memory (CD-ROM), a USB memory, and the like. The computer-readable storage medium comprises a data storage area and a program storage area, wherein the program storage area stores an image-based fall behavior recognition program 10, and the image-based fall behavior recognition program 10 implements the following operations when executed by a processor:
acquiring human body skeleton points, a human body contour and/or the definition of an image to be identified;
judging whether a complete human body image with the definition larger than a preset value exists in the image to be identified according to the human body skeleton points, the human body contour and/or the definition of the image to be identified;
if so, identifying whether the human body in the image to be identified is in a falling state according to the width of the human body image, the height of the human body image and/or the position coordinates of the human body skeleton points.
The embodiment of the computer readable storage medium of the present application is substantially the same as the embodiment of the above-mentioned image-based fall behavior recognition method, and will not be described herein.
It should be noted that the foregoing embodiment numbers are merely for description and do not represent the merits of the embodiments. The terms "comprises", "comprising", or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, apparatus, article, or method that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such a process, apparatus, article, or method. Without further limitation, an element preceded by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, apparatus, article, or method that comprises the element.
From the above description of the embodiments, it will be clear to those skilled in the art that the methods of the above embodiments may be implemented by means of software plus a necessary general-purpose hardware platform, or by means of hardware, although in many cases the former is preferred. Based on this understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disk) as described above, including several instructions for causing a terminal device (which may be a mobile phone, a computer, an electronic device, a network device, or the like) to perform the methods described in the embodiments of the present application.
The foregoing description is only of preferred embodiments of the present application and is not intended to limit the scope of the claims; all equivalent structures or equivalent processes made using the description and drawings of the present application, whether applied directly or indirectly in other related technical fields, are likewise included in the scope of patent protection of the present application.

Claims (10)

1. An image-based fall behavior recognition method, the method comprising:
acquiring human body skeleton points, a human body contour and/or the definition of an image to be identified;
judging whether a complete human body image with the definition larger than a preset value exists in the image to be identified according to the human body skeleton points, the human body contour and/or the definition of the image to be identified;
if so, identifying whether the human body in the image to be identified is in a falling state according to the width of the human body image, the height of the human body image and/or the position coordinates of the human body skeleton points.
2. The image-based fall behavior recognition method according to claim 1, wherein the acquiring of the human body skeleton points, the human body contour and/or the definition of the image to be identified comprises:
detecting whether human body information exists in the image to be identified by utilizing a pre-constructed human body detection model;
and if the human body information exists in the image to be identified, acquiring the human body skeleton points, the human body contour and/or the definition of the image to be identified according to a pre-trained fusion model.
3. The image-based fall behavior recognition method according to claim 2, wherein the acquiring of the human body skeleton points, the human body contour and/or the definition of the image to be identified according to the pre-trained fusion model comprises:
obtaining the human body skeleton points in the image to be identified by utilizing a human body skeleton point detection network in the fusion model;
obtaining the human body contour in the image to be identified by utilizing a human body contour detection network in the fusion model;
and obtaining the definition of the image to be identified by using an image definition detection network in the fusion model.
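For illustration only, a minimal sketch of such a fusion model follows, treating the three detection networks of claim 3 as interchangeable callables (for example, already-loaded models); every identifier is an assumption of this sketch, not a name from the disclosure:

```python
# Illustrative sketch only; the three sub-networks are opaque callables,
# and all names are assumptions rather than identifiers from the patent.
from typing import Any, Callable, Dict, Tuple

class FusionModel:
    def __init__(self,
                 skeleton_net: Callable[[Any], Dict[str, Tuple[float, float]]],
                 contour_net: Callable[[Any], Any],
                 definition_net: Callable[[Any], float]) -> None:
        self.skeleton_net = skeleton_net      # human body skeleton point detection network
        self.contour_net = contour_net        # human body contour detection network
        self.definition_net = definition_net  # image definition detection network

    def __call__(self, image: Any):
        # One forward pass per sub-network, one output per item of claim 3.
        return (self.skeleton_net(image),
                self.contour_net(image),
                self.definition_net(image))
```

Bundling the three networks behind one call keeps the later completeness and definition checks independent of how each network is implemented.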
4. The image-based fall behavior recognition method according to claim 1 or 3, wherein the judging whether a complete human body image with the definition larger than a preset value exists in the image to be identified according to the human body skeleton points, the human body contour and/or the definition of the image to be identified comprises:
judging whether human skeleton points in the image to be identified are complete or not;
if yes, judging whether the human body outline in the image to be identified is complete;
if yes, judging whether the definition of the image to be identified is larger than a preset value;
if yes, determining that a complete human body image with the definition larger than the preset value exists in the image to be identified.
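For illustration only, claim 4's cascading judgment can be sketched as a short-circuiting predicate; the parameter names are assumptions of this sketch:

```python
# Illustrative sketch only; parameter names are assumptions.
def has_complete_clear_body(skeleton_complete: bool,
                            contour_complete: bool,
                            definition: float,
                            preset_value: float) -> bool:
    # The three tests short-circuit in the order the claim lists them.
    if not skeleton_complete:
        return False
    if not contour_complete:
        return False
    return definition > preset_value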
5. The image-based fall behavior recognition method according to claim 1, wherein the identifying whether the human body in the image to be identified is in a falling state according to the width of the human body image, the height of the human body image and/or the position coordinates of the human body skeleton points comprises:
judging whether the width of the human body image is larger than the height of the human body image;
if the width of the human body image is larger than the height of the human body image, identifying whether the human body in the image to be identified is in a falling state according to the ratio of the width of the human body image to the height of the human body image and the position coordinates of the human body skeleton points;
if the width of the human body image is smaller than or equal to the height of the human body image, identifying whether the human body in the image to be identified is in a falling state according to a pre-trained state detection model.
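For illustration only, the branch of claim 5 can be sketched as follows, with the geometric rule of claims 6 to 9 (abbreviated here to its ratio argument) and the pre-trained state detection model passed in as opaque callables; all names are assumptions of this sketch:

```python
# Illustrative sketch only; `rule_based_check` and `state_model` stand in
# for the geometric rule of claims 6-9 and the pre-trained state detection
# model, and are not identifiers from the disclosure.
from typing import Any, Callable

def identify_state(body_width: float,
                   body_height: float,
                   rule_based_check: Callable[[float], bool],
                   state_model: Callable[[Any], bool],
                   image: Any) -> bool:
    if body_width > body_height:
        # A lying body tends to be wider than it is tall: apply the rule.
        return rule_based_check(body_width / body_height)
    # Otherwise defer to the learned classifier.
    return state_model(image)
```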
6. The image-based fall behavior recognition method according to claim 5, wherein the identifying whether the human body in the image to be identified is in a falling state according to the ratio of the width of the human body image to the height of the human body image and the position coordinates of the human body skeleton points comprises:
calculating, by using the position coordinates of the human body skeleton points, a first ordinate value of a center point between a left hip point and a right hip point of the human body and a second ordinate value of a center point between a left knee point and a right knee point of the human body;
calculating the upper body height and the upper body width of the human body according to the coordinate information of the left shoulder point and the right shoulder point of the human body and the coordinate information of the left hip point and the right hip point of the human body;
respectively calculating a first distance from the head of the human body to the ankle of the human body in the abscissa direction and a second distance from the head of the human body to the ankle of the human body in the ordinate direction by utilizing the position coordinates of the human body skeleton points;
and identifying whether the human body in the image to be identified is in a falling state according to the magnitude relation between the upper body height and the upper body width, the magnitude relation between the ratio of the width of the human body image to the height of the human body image and a preset threshold value, the magnitude relation between the first ordinate value and the second ordinate value, and the magnitude relation between the first distance and the second distance.
7. The image-based fall behavior recognition method according to claim 6, wherein the calculating of the upper body height and the upper body width of the human body based on the coordinate information of the left shoulder point and the right shoulder point of the human body and the coordinate information of the left hip point and the right hip point of the human body comprises:
calculating a first difference value between the abscissa value of the left shoulder point and the abscissa value of the right shoulder point, and calculating a second difference value between the abscissa value of the left hip point and the abscissa value of the right hip point;
calculating a third difference value between the ordinate value of the left hip point and the ordinate value of the left shoulder point, and calculating a fourth difference value between the ordinate value of the right hip point and the ordinate value of the right shoulder point;
calculating a first average value of the absolute value of the first difference value and the absolute value of the second difference value, and taking the first average value as the upper body width;
and calculating a second average value of the absolute value of the third difference value and the absolute value of the fourth difference value, and taking the second average value as the upper body height.
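For illustration only, claim 7 reduces to plain arithmetic; each point is assumed to be an (x, y) tuple in image coordinates, and the names are assumptions of this sketch:

```python
# Claim 7 as plain arithmetic; each point is assumed to be an (x, y)
# tuple in image coordinates.
def upper_body_size(left_shoulder, right_shoulder, left_hip, right_hip):
    first_difference = left_shoulder[0] - right_shoulder[0]   # shoulder span (x)
    second_difference = left_hip[0] - right_hip[0]            # hip span (x)
    third_difference = left_hip[1] - left_shoulder[1]         # left torso length (y)
    fourth_difference = right_hip[1] - right_shoulder[1]      # right torso length (y)
    upper_body_width = (abs(first_difference) + abs(second_difference)) / 2
    upper_body_height = (abs(third_difference) + abs(fourth_difference)) / 2
    return upper_body_height, upper_body_width
```

For example, with shoulder points at (110, 40) and (90, 40) and hip points at (108, 80) and (92, 80), the sketch gives an upper body width of (20 + 16) / 2 = 18 and an upper body height of (40 + 40) / 2 = 40.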
8. The image-based fall behavior recognition method according to claim 6, wherein the respectively calculating a first distance from the head of the human body to the ankle of the human body in the abscissa direction and a second distance in the ordinate direction by utilizing the position coordinates of the human body skeleton points comprises:
calculating a third abscissa value and a third ordinate value of a center point between a left ankle point and a right ankle point of the human body by using the position coordinates of the human body skeleton points;
acquiring a fourth abscissa value and a fourth ordinate value of a nose point of the human body from the position coordinates of the human body skeleton points;
taking an absolute value of a value obtained by subtracting the third abscissa value from the fourth abscissa value as the first distance;
and taking the absolute value of the value obtained by subtracting the third ordinate value from the fourth ordinate value as the second distance.
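For illustration only, claim 8 is likewise plain arithmetic; each point is assumed to be an (x, y) tuple, and the names are assumptions of this sketch:

```python
# Claim 8 as plain arithmetic; each point is assumed to be an (x, y)
# tuple in image coordinates.
def head_ankle_distances(nose, left_ankle, right_ankle):
    third_abscissa = (left_ankle[0] + right_ankle[0]) / 2   # ankle midpoint x
    third_ordinate = (left_ankle[1] + right_ankle[1]) / 2   # ankle midpoint y
    first_distance = abs(nose[0] - third_abscissa)          # horizontal extent
    second_distance = abs(nose[1] - third_ordinate)         # vertical extent
    return first_distance, second_distance
```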
9. The image-based fall behavior recognition method according to claim 6, wherein the identifying whether the human body in the image to be identified is in a falling state according to the magnitude relation between the upper body height and the upper body width, the magnitude relation between the ratio of the width of the human body image to the height of the human body image and the preset threshold value, the magnitude relation between the first ordinate value and the second ordinate value, and the magnitude relation between the first distance and the second distance comprises:
if the upper body height is greater than the upper body width, the first ordinate value is greater than or equal to the second ordinate value, the first distance is smaller than the second distance, and the ratio is greater than the preset threshold value, determining that the human body in the human body image is in a falling state;
otherwise, determining that the human body in the human body image is in a non-falling state.
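For illustration only, claim 9's decision rule can be assembled from the quantities of claims 6 to 8. The parameter names are assumptions of this sketch, and the ordinate is assumed to grow downward, as is conventional for image coordinates, so a first ordinate value at or above the second places the hip midpoint at or below the knee midpoint in the image:

```python
# Claim 9's decision rule assembled from the quantities of claims 6-8;
# parameter names are assumptions, and the ordinate is assumed to grow
# downward (standard image-coordinate convention).
def is_falling(upper_body_height: float, upper_body_width: float,
               body_image_width: float, body_image_height: float,
               preset_threshold: float,
               first_ordinate: float, second_ordinate: float,
               first_distance: float, second_distance: float) -> bool:
    ratio = body_image_width / body_image_height
    return (upper_body_height > upper_body_width
            and first_ordinate >= second_ordinate   # hips at or below knees
            and first_distance < second_distance
            and ratio > preset_threshold)
```

All four conditions must hold simultaneously; failing any one of them yields the non-falling state of the final clause.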
10. An electronic device, characterized by comprising a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory communicate with each other through the communication bus;
a memory for storing a computer program;
a processor for implementing the image-based fall behavior recognition method according to any one of claims 1 to 9 when executing the computer program stored in the memory.
CN202211666081.XA 2022-12-23 2022-12-23 Image-based tumbling behavior identification method and equipment Pending CN116229502A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211666081.XA CN116229502A (en) 2022-12-23 2022-12-23 Image-based tumbling behavior identification method and equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211666081.XA CN116229502A (en) 2022-12-23 2022-12-23 Image-based tumbling behavior identification method and equipment

Publications (1)

Publication Number Publication Date
CN116229502A (en) 2023-06-06

Family

ID=86581443

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211666081.XA Pending CN116229502A (en) 2022-12-23 2022-12-23 Image-based tumbling behavior identification method and equipment

Country Status (1)

Country Link
CN (1) CN116229502A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116553327A (en) * 2023-07-10 2023-08-08 通用电梯股份有限公司 Method and device for detecting falling of passengers in home elevator car
CN116553327B (en) * 2023-07-10 2023-09-08 通用电梯股份有限公司 Method and device for detecting falling of passengers in home elevator car

Similar Documents

Publication Publication Date Title
CN107358149B (en) Human body posture detection method and device
CN111428581B (en) Face shielding detection method and system
WO2021047232A1 (en) Interaction behavior recognition method, apparatus, computer device, and storage medium
CN110210302B (en) Multi-target tracking method, device, computer equipment and storage medium
US9183431B2 (en) Apparatus and method for providing activity recognition based application service
CN111753643B (en) Character gesture recognition method, character gesture recognition device, computer device and storage medium
CN109506628A (en) Object distance measuring method under a kind of truck environment based on deep learning
CN112364740B (en) Unmanned aerial vehicle room monitoring method and system based on computer vision
CN111339901B (en) Image-based intrusion detection method and device, electronic equipment and storage medium
CN111414812A (en) Human body attribute identification method, system, computer device and storage medium
CN110659588A (en) Passenger flow volume statistical method and device and computer readable storage medium
CN111275040A (en) Positioning method and device, electronic equipment and computer readable storage medium
CN113052107A (en) Method for detecting wearing condition of safety helmet, computer equipment and storage medium
CN112990057A (en) Human body posture recognition method and device and electronic equipment
CN110557628A (en) Method and device for detecting shielding of camera and electronic equipment
CN116229502A (en) Image-based tumbling behavior identification method and equipment
CN113705294A (en) Image identification method and device based on artificial intelligence
CN116259002A (en) Human body dangerous behavior analysis method based on video
CN110580708B (en) Rapid movement detection method and device and electronic equipment
CN116030500B (en) Personnel dressing standard identification method and system
CN113963311A (en) Safe production risk video monitoring method and system
CN112989958A (en) Helmet wearing identification method based on YOLOv4 and significance detection
CN112800923A (en) Human body image quality detection method and device, electronic equipment and storage medium
CN116758493A (en) Tunnel construction monitoring method and device based on image processing and readable storage medium
CN111027510A (en) Behavior detection method and device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination