CN113378692A - Method and detection system for reducing false detection of falling behavior - Google Patents

Method and detection system for reducing false detection of falling behavior

Info

Publication number
CN113378692A
CN113378692A (application CN202110635639.7A)
Authority
CN
China
Prior art keywords
detection
waist
limb point
head
detection result
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110635639.7A
Other languages
Chinese (zh)
Other versions
CN113378692B (en)
Inventor
蔡冬 (Cai Dong)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Ezviz Network Co Ltd
Original Assignee
Hangzhou Ezviz Network Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Ezviz Network Co Ltd filed Critical Hangzhou Ezviz Network Co Ltd
Priority to CN202110635639.7A priority Critical patent/CN113378692B/en
Publication of CN113378692A publication Critical patent/CN113378692A/en
Application granted granted Critical
Publication of CN113378692B publication Critical patent/CN113378692B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses a method for reducing false detection of falling behavior. On the side of a detection device that performs posture recognition based on images, waist limb point information and head limb point information in the skeleton information of the posture estimation used to obtain a first detection result are used to determine the relative positional relationship between the waist and the head, the first detection result being a detection result already recognized as a falling behavior. The first detection results are then filtered using this relative positional relationship, retaining those that meet the following condition: with the ground plane as the frame of reference, the head is not located above the waist. The method avoids falsely detecting fall-like postures such as standing to sitting, standing to squatting, and bending over, and improves the accuracy of fall detection.

Description

Method and detection system for reducing false detection of falling behavior
Technical Field
The invention relates to the field of posture detection, in particular to a method for reducing false detection of falling behaviors.
Background
With the development of technology, intelligent care for specific groups of people has become a hot topic. For the elderly living alone and for physically or cognitively impaired people, an intelligent monitoring device that checks in real time whether a person has fallen is very practical. For example, a monitoring camera can judge whether a fall has occurred in the scene and, if so, output an alarm prompt.
When fall detection is performed on monitoring images, behavior postures such as standing to sitting and standing to squatting are often falsely detected as falls. Because these postures are common in daily home life, such false detections trigger frequent alarms and lower confidence in fall-detection alerts.
Disclosure of Invention
The invention provides a method for reducing false detection of falling behaviors, which is used for improving the reliability of falling behavior detection.
The method for reducing the false detection of the falling behavior is realized as follows:
on the side of the detection device that performs gesture recognition based on images,
determining a relative position relationship between the waist and the head according to waist limb point information and head limb point information in skeleton information of posture estimation used when a first detection result is obtained; the first detection result is a detection result recognized as a falling behavior;
filtering the first detection result by using the relative position relationship, reserving the first detection result meeting the condition,
wherein the conditions are: the head is not located above the waist with the ground level as a frame of reference.
Preferably, whether the head is not located above the waist is determined according to a first included angle between a first line segment connecting the waist limb point and the head limb point in a global coordinate system and a first direction perpendicular to the ground plane.
Preferably, the determining the relative position relationship between the waist and the head according to the waist limb point information and the head limb point information in the skeleton information of the posture estimation used when the first detection result is obtained includes:
respectively mapping the pixel coordinates of the waist limb point information and the head limb point information to global coordinates in a world coordinate system,
calculating a first included angle between a first line segment connected with the waist limb point and the head limb point and a first direction vertical to the ground plane according to the global coordinates of the waist limb point and the head limb point;
the filtering the first detection result by using the relative position relationship, and retaining the first detection result meeting the condition, includes:
and judging whether the first included angle is larger than a set first threshold value or not, and if so, keeping the first detection result.
Preferably, the number of the detection devices is n, and the projection range of the field of view acquired by each detection device on the ground plane is the same, where n is a natural number greater than or equal to 2,
whether the head is not located above the waist is determined from the fall direction, which is given by the pointing of a directed line segment from the waist limb point to the head limb point in the global coordinate system,
the filtering the first detection result by using the relative position relationship, and retaining the first detection result meeting the condition, includes:
when the directional line segment points to the detection equipment, the first detection result is reserved;
and when the directional line segment deviates from the detection equipment, removing the first detection result.
Preferably, the determining the relative position relationship between the waist and the head according to the waist limb point information and the head limb point information in the skeleton information of the posture estimation used when the first detection result is obtained includes:
on either detection device side:
respectively mapping the pixel coordinates of the waist limb point information and the head limb point information to global coordinates under a world coordinate system,
judging whether the projection of the directional line segment on the ground plane is positioned in the projection range of the visual field range of the detection equipment on the ground plane and points to the detection equipment or not according to the global coordinate of the waist limb point and the global coordinate of the head limb point, if so, judging that the falling direction faces to the detection equipment, otherwise, judging that the falling direction deviates from the detection equipment;
when the falling direction faces the detection device, a first included angle between a first line segment connected with the waist limb point and the head limb point and a first direction vertical to the ground plane is calculated according to the global coordinates of the waist limb point and the head limb point.
Preferably, the determining the relative position relationship between the waist and the head according to the waist limb point information and the head limb point information in the skeleton information of the posture estimation used when the first detection result is obtained includes:
on either detection device side:
respectively mapping the pixel coordinates of the waist limb point information and the head limb point information to global coordinates under a world coordinate system,
and calculating a first included angle between a directed line segment from the waist limb point to the head limb point and a first direction vertical to the ground plane according to the global coordinates of the waist limb point and the head limb point.
Preferably, when the directed line segment points to the detection device, the retaining the first detection result includes:
and judging whether the first included angle is larger than a set first threshold value or not, if so, retaining the first detection result, and sending the first detection result to the appointed equipment, so that the appointed equipment collects the first detection results from all the detection equipment.
Preferably, when the directed line segment points to the detection device, the retaining the first detection result includes:
judging whether the difference between the first included angle and the second included angle is larger than a set second threshold value or not, if so, retaining the first detection result, and sending the first detection result to the appointed equipment, so that the appointed equipment collects the first detection results from all the detection equipment;
the second included angle is an included angle between a directed line segment from the waist limb point to the head limb point in a non-falling state and the first direction perpendicular to the ground plane.
The invention also provides a detection apparatus comprising a memory storing a computer program and a processor configured to execute the computer program to carry out the steps of any of the above methods for reducing false detection of falling behavior.
The invention also provides a detection system which comprises n detection devices, wherein n is a natural number which is more than or equal to 2.
According to the method for reducing false detection of falling behavior, the first detection result recognized as a fall is filtered using the relative positional relationship determined from the waist limb point information and head limb point information, which avoids falsely detecting fall-like postures such as standing to sitting, standing to squatting, and bending over, and improves the accuracy of fall detection. Through linkage detection by multiple detection devices, each device is responsible, according to the fall direction, for filtering the falls oriented toward it, and the detection results of all devices are combined, further reducing false detections. The method thus makes fall detection more accurate and helps raise confidence in fall-detection alarms.
Drawings
Fig. 1 is a schematic flow chart of a method for reducing false fall detection according to the present application.
FIG. 2 is a schematic diagram of a target image and a position relationship of a detection device for gesture recognition based on the image according to an embodiment.
Fig. 3 is a schematic diagram of a fall behavior detection framework of an object person according to an embodiment.
Fig. 4 is a schematic flow chart of fall behavior detection of a target person.
Fig. 5 is a schematic view of the positional relationship of the head and waist.
Fig. 6 is a schematic diagram of a detection system composed of multiple detection devices in the multi-camera linkage detection method of embodiment two.
Fig. 7 is a schematic diagram of a framework of fall behavior detection according to the second embodiment.
Fig. 8 is a schematic flow chart of fall behavior detection according to the second embodiment.
FIG. 9 is a schematic diagram of simultaneous detection by two cameras.
Fig. 10 is a schematic flow chart of fall behavior detection according to embodiment three.
Fig. 11 is a schematic diagram of a directional line segment from the waist to the head in a non-falling state and a falling state in consideration of physiological bending of a human body.
Fig. 12 is a schematic flow chart of fall behavior detection according to embodiment four.
Fig. 13 is a schematic diagram of a detection apparatus according to an embodiment of the present application.
FIG. 14 is another schematic view of a detection apparatus.
Detailed Description
For the purpose of making the objects, technical means and advantages of the present application more apparent, the present application will be described in further detail with reference to the accompanying drawings.
The applicant has found that in postures from standing to sitting or from standing to squatting, the head remains above the waist, whereas in a fall posture the head and waist no longer satisfy this head-above-waist relationship. In view of this, the positional relationship between the head and the waist in the image can be analyzed: among the fall behaviors detected by a conventional fall detection method, those that still satisfy the head-above-waist relationship are filtered out, while those that do not satisfy it are retained and used as the final detection result.
Referring to fig. 1, fig. 1 is a schematic flow chart of a method for reducing false fall detection according to the present application. On the side of a detection device for performing gesture recognition based on an image, the following steps are performed:
step 101, determining a relative position relationship between the waist and the head according to waist limb point information and head limb point information in skeleton information of posture estimation used when a first detection result is obtained; the first detection result is a detection result that has been identified as a fall behavior;
Step 102, filtering the first detection result by using the relative positional relationship: retaining first detection results that meet the condition and removing those that do not.
Wherein the conditions are: the head is not located above the waist with the ground level as a frame of reference.
In order to facilitate understanding of the invention, specific examples are set forth below.
Example one
Referring to fig. 2, fig. 2 is a schematic diagram of a position relationship between a target image and a detection device for performing gesture recognition based on the image according to an embodiment. The detection device collects image data in a visual field range through an image collecting device.
Referring to fig. 3, fig. 3 is a schematic diagram of the fall behavior detection framework for a target person according to embodiment one. For T frames of the time sequence, target detection, tracking, attribute recognition, and posture estimation are performed on each frame, and the posture estimation of each frame is classified by a graph convolution classification model, where T is a natural number greater than or equal to 1.
Referring to fig. 4, fig. 4 is a schematic flow chart of fall behavior detection of a target person. The dashed box encloses the processing flow of a single frame; processing outside the dashed box spans multiple frames. For the t-th frame of image data among the T frames, the following steps are carried out:
step 401, performing target detection by using a deep learning target detection algorithm, so as to effectively identify position information of all targets in the image, and obtain at least one target frame information.
The deep learning target detection algorithm may be the YOLO (You Only Look Once) algorithm, the Faster R-CNN algorithm, or the SSD algorithm. Taking YOLO as an example, the algorithm outputs the category and position of each target directly from the input image; in this embodiment, YOLO is applied to output the persons contained in the image and their positions.
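As an illustration only, the person-detection step 401 might be realized as in the following sketch, using the open-source Ultralytics YOLO package; the package choice and the yolov8n.pt weights are assumptions, since the patent names the algorithm family but prescribes no implementation.

```python
# Minimal sketch of step 401 (person detection). Ultralytics YOLO and the
# "yolov8n.pt" COCO-pretrained weights are illustrative assumptions; any
# detector (YOLO, Faster R-CNN, SSD) that outputs per-person boxes would do.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # COCO class 0 is "person"

def detect_persons(frame):
    """Return [x1, y1, x2, y2, score] for every person detected in the frame."""
    result = model(frame, verbose=False)[0]
    persons = []
    for xyxy, cls, conf in zip(result.boxes.xyxy, result.boxes.cls, result.boxes.conf):
        if int(cls) == 0:  # keep only the "person" class
            persons.append([float(v) for v in xyxy] + [float(conf)])
    return persons
```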
Step 402, tracking and matching against frame t-1 based on the target position information of frame t. Matching can be performed according to the intersection-over-union (IoU) of boxes in consecutive frames: when the IoU is greater than a set threshold, the match succeeds and a tracking target frame for frame t is generated;
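For concreteness, the IoU matching of step 402 can be sketched as below; the 0.3 threshold is an assumed stand-in for the patent's "set threshold".

```python
# Sketch of step 402: greedy IoU matching between the boxes of frame t-1 and
# frame t. IOU_THRESHOLD = 0.3 is an assumed stand-in for "a set threshold".
IOU_THRESHOLD = 0.3

def iou(a, b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def match_tracks(prev_boxes, curr_boxes):
    """Greedily pair each current box with the best unmatched previous box."""
    matches, used = [], set()
    for i, cur in enumerate(curr_boxes):
        best_j, best_score = -1, IOU_THRESHOLD
        for j, prev in enumerate(prev_boxes):
            score = iou(cur, prev)
            if j not in used and score > best_score:
                best_j, best_score = j, score
        if best_j >= 0:
            used.add(best_j)
            matches.append((i, best_j))  # box i in frame t continues track j
    return matches
```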
step 403, performing attribute recognition on the target in the tracking target frame through the classification neural network based on the tracking target frame of the t frame, for example, recognizing an old person and a non-old person, and for example, recognizing a person needing to be cared, in this case, training the classification neural network by using the image of the person needing to be cared, and then calling the trained classification neural network model to recognize the target in the tracking target frame;
further, an ID identification is set for the tracking target frame, and the attribute identification result of the tracking target frame is recorded.
Step 404, determining whether the target in the tracking target frame of the t frame is identified as a set attribute, for example, whether the target is an old person, if yes, executing step 405, otherwise, returning to step 403,
step 405, tracking a target frame in an image frame with a set attribute (such as old people), performing attitude estimation by an ALPHASPOSE algorithm, outputting target bone data information, including 18 bone limb points, three pieces of information of each limb point, namely pixel coordinates x and y and confidence coefficient information, and storing the bone data information into an input buffer queue for graph convolution classification.
Step 406, inputting the skeleton data into the graph convolution classification model for recognition; if the output is a fall behavior result (a first detection result), executing step 407, otherwise returning to step 401 or ending the fall detection.
Step 407, generating a first line segment connecting the waist limb point and the head limb point, based on those two points in the skeleton data.
Referring to fig. 5, fig. 5 is a schematic diagram of the head and waist position relationship. In the non-falling state the head is above the waist, and the first line segment connecting the waist and head is perpendicular to the ground plane, i.e. along the y direction of the world coordinate system shown in the figure; in the falling state this segment is no longer perpendicular to the ground. The y direction of the world coordinate system is the first direction perpendicular to the ground plane.
Step 408, converting the pixel coordinates of the waist limb point and the head limb point into global coordinates in the world coordinate system using the visual model of the image, obtaining the global coordinates of the two limb points.
Considering that the detected target always moves on the ground, the global z coordinate can be set to 0, and the global x and y coordinates of the waist limb point and of the head limb point can each be obtained from the following equation:
$$ s\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = K\,\begin{bmatrix} R & T \end{bmatrix}\begin{bmatrix} x_w \\ y_w \\ 0 \\ 1 \end{bmatrix} $$

where u and v are the pixel coordinates of the limb point, s is a projective scale factor, K is the camera intrinsic matrix, R and T are the camera extrinsics (both obtainable in advance by camera calibration), and x_w, y_w are the global x and y coordinates of the limb point.
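Because z_w = 0, the projection above collapses to an invertible 3×3 homography H = K·[r1 r2 T], so the ground-plane coordinates can be recovered in closed form. The following NumPy sketch illustrates this back-projection under the pinhole-camera assumption; the function name is illustrative. Note that the patent applies the z_w = 0 simplification to both limb points, on the ground that the target moves on the ground.

```python
import numpy as np

# Sketch of step 408: back-projecting a pixel onto the ground plane (z_w = 0).
# With z_w = 0 the projection reduces to the homography H = K [r1 r2 T],
# which can be inverted directly. K, R, T come from offline calibration.
def pixel_to_ground(u, v, K, R, T):
    """Return the global (x_w, y_w) of a pixel assumed to lie on z_w = 0."""
    H = K @ np.column_stack((R[:, 0], R[:, 1], np.ravel(T)))
    p = np.linalg.solve(H, np.array([u, v, 1.0]))  # proportional to (x_w, y_w, 1)
    return p[0] / p[2], p[1] / p[2]
```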
It should be understood that steps 407 and 408 may also be performed in the reverse order: first convert the pixel coordinates of the waist and head limb points into global coordinates in the world coordinate system using the visual model of the image, and then generate the first line segment connecting the two points from their global coordinates.
Step 409, calculating a first included angle between the first line segment and the y axis in the world coordinate system according to the global coordinate of the waist limb point and the global coordinate of the head limb point,
as shown in the geometric relationship in fig. 5, a first included angle between the first line segment and the y-axis in the world coordinate system is:
$$ \varphi = \arctan\frac{\lvert x_{w2} - x_{w1} \rvert}{\lvert y_{w2} - y_{w1} \rvert} $$

where x_w1, y_w1 are the global coordinates of the waist limb point, x_w2, y_w2 are the global coordinates of the head limb point, and φ is the first included angle between the first line segment and the y-axis of the world coordinate system.
Step 410, judging whether the first included angle between the first line segment and the y-axis of the world coordinate system is greater than the set first threshold; if so, the first detection result is considered reliable and is retained; otherwise it is considered a false detection and is filtered out.
It should be understood that steps 409 and 410 may also be transformed into: calculating a third included angle between the first line segment and the x-axis (ground plane) of the world coordinate system from the global coordinates of the waist and head limb points, and then judging whether the third included angle is smaller than a set third threshold; if so, the first detection result is considered reliable and retained, otherwise it is considered a false detection and filtered out.
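As a concrete illustration of steps 409 and 410 (and of the equivalent x-axis variant), the sketch below computes the first included angle from the global coordinates and applies the threshold test; the 60° threshold is an assumed value standing in for the patent's "set first threshold".

```python
import numpy as np

# Sketch of steps 409-410: angle between the waist->head segment and the
# vertical y-axis, followed by the threshold test. FIRST_THRESHOLD_DEG = 60
# is an assumed value; the patent only speaks of "a set first threshold".
FIRST_THRESHOLD_DEG = 60.0

def first_angle_deg(waist_xy, head_xy):
    """First included angle (degrees) between waist->head and the world y-axis."""
    dx = abs(head_xy[0] - waist_xy[0])
    dy = abs(head_xy[1] - waist_xy[1])
    return np.degrees(np.arctan2(dx, dy))

def keep_fall_result(waist_xy, head_xy):
    """True if a detected fall survives the head/waist geometry filter."""
    return first_angle_deg(waist_xy, head_xy) > FIRST_THRESHOLD_DEG
```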
In this embodiment, detected fall results (first detection results) are screened by the first included angle between the waist-head line segment and the first direction perpendicular to the ground plane. This amounts to adding a constraint on the detected fall results, so that results violating the constraint can be excluded, which improves the reliability of fall detection and reduces false detections.
Example two
In order to further improve the reliability of the falling behavior detection, the false detection of the falling behavior can be reduced by adopting multi-camera (detection equipment) linkage detection.
Referring to fig. 6, fig. 6 is a schematic view of a detection system composed of multiple detection devices in the multi-camera linkage detection method of embodiment two. Multiple cameras are arranged, preferably so that their projections onto the ground plane are evenly distributed. Assuming the number of cameras is n, the ground-plane projection range of the field of view acquired by each camera spans at least 360°/n, where n is a natural number greater than 1. For example, with 4 cameras, the ground-plane projection range of camera 1's field of view is -45° to 45°, that of camera 2 is 45° to 135°, that of camera 3 is 135° to 225°, and that of camera 4 is 225° to 315°.
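The sector layout described above can be made concrete as in the following sketch; the helper and the -45° offset (taken from the 4-camera example) are illustrative only.

```python
# Sketch of the camera layout of fig. 6: n cameras, each covering an equal
# ground-plane sector of 360/n degrees. The -45 degree offset reproduces the
# 4-camera example above and is otherwise arbitrary.
def camera_sector(i, n, offset_deg=-45.0):
    """Ground-plane azimuth range [start, end) covered by camera i (1-based)."""
    width = 360.0 / n
    start = (offset_deg + (i - 1) * width) % 360.0
    return start, (start + width) % 360.0

# For n = 4: camera 1 -> (315.0, 45.0), i.e. -45..45 degrees;
# camera 2 -> (45.0, 135.0); camera 3 -> (135.0, 225.0); camera 4 -> (225.0, 315.0).
```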
Referring to fig. 7, fig. 7 is a schematic diagram of the fall behavior detection framework according to embodiment two. Each camera independently acquires images of its own field of view and independently performs fall detection: for any camera, for T frames of the time sequence, target detection, tracking, and posture estimation are performed on each frame, and the posture estimation of each frame is classified by a graph convolution classification model, where T is a natural number greater than or equal to 1. Each camera filters its own posture recognition results, and the results retained after each camera's filtering are combined as the final detection result.
Referring to fig. 8, fig. 8 is a schematic flow chart of fall behavior detection according to embodiment two. On any camera side of the detection system, the flow includes:
step 801, performing target detection by using a deep learning target detection algorithm so as to effectively identify position information of all targets in an image and obtain at least one target frame information.
The deep learning target detection algorithm may be the YOLO (You Only Look Once) algorithm, the Faster R-CNN algorithm, or the SSD algorithm. Taking YOLO as an example, the algorithm outputs the category and position of each target directly from the input image; in this embodiment, YOLO is applied to output the persons contained in the image and their positions.
Step 802, tracking and matching against frame t-1 based on the target position information of frame t, according to the intersection-over-union (IoU) of boxes in consecutive frames; when the IoU is greater than a set threshold the match succeeds, generating a tracking target frame for frame t;
and 803, performing attitude estimation on the tracking target frame through an ALPHPOSE algorithm, outputting target bone data information, including 18 bone limb points, wherein three pieces of information of each limb point are respectively coordinate x, y and confidence coefficient information, and storing the bone data information into an input buffer queue for graph convolution classification.
Step 804, inputting the skeleton data into the graph convolution classification model for recognition; if the output is a fall behavior result (a first detection result), executing step 805, otherwise returning to step 801 or ending the fall detection.
Step 805, generating a directed line segment from the waist limb point to the head limb point, based on those two points in the skeleton data.
referring to fig. 9, fig. 9 is a schematic diagram of simultaneous detection of two cameras. In the non-falling state, the two cameras acquire images with the head on the top and the waist on the bottom, and the directed line segment from the waist to the head is perpendicular to the ground plane, i.e. the y direction in the world coordinate system as shown in the figure, in the falling state 3, the directed line segment from the waist to the head is directed to the camera B for the camera B, the directed line segment from the waist to the head is deviated from the camera a for the camera a, in the falling state 2, the directed line segment from the waist to the head is deviated from the camera B for the camera B, and the directed line segment from the waist to the head is directed to the camera a for the camera a.
Step 806, converting the pixel coordinates of the waist limb point and the head limb point into global coordinates in a world coordinate system by using the image visual model to obtain the global coordinates of the waist limb point and the head limb point,
this step is the same as step 408.
Step 807, judging whether the directed line segment is within the camera's field of view according to the global coordinates of the waist limb point and the head limb point; if so, executing step 808, otherwise filtering out the first detection result and ending the process.
The condition for the directed line segment to be within the camera's field of view is: the projection of the directed line segment on the ground plane lies within the ground-plane projection range of the camera's field of view and points toward the camera.
For example, in fig. 9, when the x global coordinate of the head limb point is greater than that of the waist limb point, the directed line segment points toward camera B and away from camera A; when it is smaller, the directed line segment points toward camera A and away from camera B.
Considering that the ground-plane projection range of camera A's field of view is 90° to 270° and that of camera B is -90° to 90°, in falling state 2 camera A executes step 808, while in falling state 3 camera B executes step 808. This reduces the system's computational load and lets each detection device handle only the filtering of first detection results whose fall direction points toward that device.
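A minimal sketch of the step 807 decision follows. The dot-product test and the camera_xy parameter (the camera's position projected onto the ground plane) are assumptions about how "points toward the camera" can be made concrete; the patent states the criterion only geometrically.

```python
import numpy as np

# Sketch of step 807: does the waist->head directed segment, projected onto
# the ground plane, point toward this camera? The dot-product criterion is
# one concrete realization of the patent's informal geometric condition.
def points_toward_camera(waist_xy, head_xy, camera_xy):
    segment = np.asarray(head_xy) - np.asarray(waist_xy)    # fall direction
    to_camera = np.asarray(camera_xy) - np.asarray(waist_xy)
    return float(np.dot(segment, to_camera)) > 0.0  # acute angle => toward camera
```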
Step 808, calculating a first angle between the directed line segment and the y axis in the world coordinate system according to the global coordinates of the waist limb point and the head limb point, which is the same as step 409.
Step 809, judging whether the first included angle between the directed line segment and the y-axis of the world coordinate system is greater than the set first threshold; if so, the first detection result is considered reliable, is retained, and is sent to the designated device, so that the designated device aggregates the first detection results from all cameras and outputs the aggregated result; otherwise the first detection result is considered a false detection and is filtered out.
The designated device may be any one of a plurality of cameras, or may be a network device, including but not limited to a server, a cloud, and the like.
Through steps 805 to 809, the first detection result is retained when the directed line segment points toward the detection device and removed when it points away from the detection device.
In this embodiment, the first detection result is filtered by the camera toward which the waist-to-head directed line segment points; that camera captures the fall most frontally, which improves the reliability of the detection result. Further checking the detection result against the first-included-angle condition helps reduce false detections, and in particular can effectively exclude posture estimates such as standing, squatting, sitting, and bending over.
Example three
Referring to fig. 10, fig. 10 is a schematic flow chart of fall behavior detection according to embodiment three. On any camera side of the detection system, the flow includes:
steps 1001 to 1006 are the same as steps 801 to 806, respectively;
step 1007, calculating a first angle between the directed line segment and the y axis in the world coordinate system according to the global coordinates of the waist limb point and the head limb point, which is the same as step 409.
Step 1008, judging whether the first included angle between the directed line segment and the y-axis of the world coordinate system is greater than the set first threshold; if so, the first detection result is considered reliable, is retained, and is sent to the designated device, so that the designated device aggregates the first detection results from all cameras and outputs the aggregated result; otherwise the first detection result is considered a false detection and is filtered out.
The designated device may be any one of a plurality of cameras, or may be a network device, including but not limited to a server, a cloud, and the like.
In this embodiment, multiple cameras perform fall detection simultaneously, which helps reduce missed detections; and since every camera filters its own first detection results, the first detection results of multiple cameras are effectively combined, which helps reduce false detections.
Example four
The human body itself has a physiological curvature, and especially in elderly people this curvature increases with age. Referring to fig. 11, fig. 11 is a schematic diagram of the waist-to-head directed line segment in the non-falling and falling states when physiological bending is taken into account. As the figure shows, due to this curvature the directed line segment forms a nonzero first included angle with the y-axis of the world coordinate system even in the non-falling state. To reduce false fall detections, the method of this embodiment eliminates the false detections caused by physiological bending.
Referring to fig. 12, fig. 12 is a schematic flow chart of fall behavior detection according to embodiment four. On any camera side of the detection system, the flow includes:
steps 1201 to 1206 are the same as steps 801 to 806 or steps 1001 to 1006, respectively;
step 1207, calculating a first included angle between the directed line segment and the y axis in the world coordinate system according to the global coordinates of the waist limb point and the head limb point, and the step is the same as the step 409.
Step 1208, judging whether the difference between the first included angle and the second included angle is greater than the set second threshold; if so, the first detection result is considered reliable, is retained, and is sent to the designated device, so that the designated device aggregates the first detection results from all cameras and outputs the aggregated result; otherwise the first detection result is considered a false detection and is filtered out.
The second included angle is the angle between the waist-to-head directed line segment in the non-falling state and the y-axis of the world coordinate system. It can be obtained from the skeleton data used when a detection result was classified as non-falling during detection, or it can be measured in advance from a posture estimate of the locked target in a known non-falling state (for example, standing) and stored as an attribute of that target.
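The step 1208 test can be sketched as below with a per-target baseline; the 40° second threshold is an assumed value standing in for the patent's "set second threshold".

```python
# Sketch of step 1208: compensate for physiological curvature by comparing
# the first included angle against the target's upright baseline (the
# "second included angle"). SECOND_THRESHOLD_DEG = 40 is an assumed value.
SECOND_THRESHOLD_DEG = 40.0

def keep_fall_result_with_baseline(first_angle_deg, second_angle_deg):
    """True if the angle grew enough beyond the target's upright baseline."""
    return (first_angle_deg - second_angle_deg) > SECOND_THRESHOLD_DEG
```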
This embodiment filters the first detection result according to the difference between the first included angle and the second included angle. Because the second included angle differs from target to target, falsely detected first detection results can be filtered out more accurately, improving the reliability of the system.
Referring to fig. 13, fig. 13 is a schematic view of a detection device according to an embodiment of the present application, which contains an apparatus for reducing false detection of falling behavior. The apparatus includes:
the analysis module is used for determining the relative position relationship between the waist and the head according to the waist limb point information and the head limb point information in the skeleton information of the posture estimation used when the first detection result is obtained; the first detection result is a detection result recognized as a falling behavior;
a filtering module for filtering the first detection result by using the relative position relationship and reserving the first detection result meeting the condition,
wherein the conditions are: the head is not located above the waist with the ground level as a frame of reference.
The analysis module includes:
a coordinate conversion submodule for mapping the pixel coordinates of the waist limb point information and the head limb point information to global coordinates in a world coordinate system respectively to obtain global position information of the waist limb point and the head limb point,
and the calculation submodule is used for calculating a first included angle between a first line segment connected with the waist limb point and the head limb point and a first direction vertical to the ground plane according to the global position information of the waist limb point and the head limb point.
The filter module is further configured to: and when the first included angle is larger than a set first threshold value, the first detection result is reserved.
The analysis module may further include:
the falling direction identification submodule is used for determining the falling direction according to the direction of a directed line segment under a global coordinate system determined from the waist limb point to the head limb point;
the filter module is further configured to: when the directional line segment points to the detection equipment, the first detection result is reserved; and when the directional line segment deviates from the detection equipment, removing the first detection result.
Preferably, the analysis module further includes:
and the triggering submodule is used for outputting a triggering signal to the calculating submodule to trigger the calculating submodule to calculate the first included angle when the falling direction points to the detection equipment.
Referring to fig. 14, fig. 14 is another schematic view of the detection device. The detection device comprises a memory and a processor; the memory stores a computer program, and the processor is configured to execute the computer program to implement the steps of the above method for reducing false detection of falling behavior.
The memory may include Random Access Memory (RAM) or Non-Volatile Memory (NVM), for example at least one disk memory. Optionally, the memory may also be at least one storage device located remotely from the processor.
The processor may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
The embodiment of the invention also provides a computer readable storage medium in which a computer program is stored; when executed by a processor, the computer program implements the steps of the above embodiments.
For the device/network side device/storage medium embodiment, since it is basically similar to the method embodiment, the description is relatively simple, and for the relevant points, refer to the partial description of the method embodiment.
In this document, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (10)

1. A method for reducing false detection of falling behavior, characterized in that the method comprises, on the side of a detection device performing posture recognition based on images:
determining a relative position relationship between the waist and the head according to waist limb point information and head limb point information in skeleton information of posture estimation used when a first detection result is obtained; the first detection result is a detection result recognized as a falling behavior;
filtering the first detection result by using the relative position relationship, reserving the first detection result meeting the condition,
wherein the conditions are: the head is not located above the waist with the ground level as a frame of reference.
2. The method of claim 1, wherein whether the head is not located above the waist is determined based on a first included angle between a first line segment connecting the waist limb point and the head limb point in a global coordinate system and a first direction perpendicular to the ground plane.
3. The method of claim 2, wherein determining the relative positional relationship between the waist and the head from the waist limb point information and the head limb point information of the skeleton information of the pose estimation used in obtaining the first detection result comprises:
respectively mapping the pixel coordinates of the waist limb point information and the head limb point information to global coordinates in a world coordinate system,
calculating a first included angle between a first line segment connected with the waist limb point and the head limb point and a first direction vertical to the ground plane according to the global coordinates of the waist limb point and the head limb point;
the filtering the first detection result by using the relative position relationship, and retaining the first detection result meeting the condition, includes:
and judging whether the first included angle is larger than a set first threshold value or not, and if so, keeping the first detection result.
4. The method according to claim 2, wherein the number of the detection devices is n, the projection range of the visual field range acquired by each detection device on the ground plane is the same, wherein n is a natural number greater than or equal to 2,
whether the head is not located above the waist is determined from the fall direction, which is given by the pointing of a directed line segment from the waist limb point to the head limb point in the global coordinate system,
the filtering the first detection result by using the relative position relationship, and retaining the first detection result meeting the condition, includes:
when the directional line segment points to the detection equipment, the first detection result is reserved;
and when the directional line segment deviates from the detection equipment, removing the first detection result.
5. The method of claim 4, wherein determining the relative positional relationship between the waist and the head from the waist limb point information and the head limb point information of the skeleton information of the pose estimation used in obtaining the first detection result comprises:
on either detection device side:
respectively mapping the pixel coordinates of the waist limb point information and the head limb point information to global coordinates under a world coordinate system,
judging whether the projection of the directional line segment on the ground plane is positioned in the projection range of the visual field range of the detection equipment on the ground plane and points to the detection equipment or not according to the global coordinate of the waist limb point and the global coordinate of the head limb point, if so, judging that the falling direction faces to the detection equipment, otherwise, judging that the falling direction deviates from the detection equipment;
when the falling direction faces the detection device, a first included angle between a first line segment connected with the waist limb point and the head limb point and a first direction vertical to the ground plane is calculated according to the global coordinates of the waist limb point and the head limb point.
6. The method of claim 4, wherein determining the relative positional relationship between the waist and the head from the waist limb point information and the head limb point information of the skeleton information of the pose estimation used in obtaining the first detection result comprises:
on either detection device side:
respectively mapping the pixel coordinates of the waist limb point information and the head limb point information to global coordinates under a world coordinate system,
and calculating a first included angle between a directed line segment from the waist limb point to the head limb point and a first direction vertical to the ground plane according to the global coordinates of the waist limb point and the head limb point.
7. The method of claim 5 or 6, wherein the retaining the first detection result when the directed line segment points to the detection device comprises:
and judging whether the first included angle is larger than a set first threshold value or not, if so, retaining the first detection result, and sending the first detection result to the appointed equipment, so that the appointed equipment collects the first detection results from all the detection equipment.
8. The method of claim 5 or 6, wherein the retaining the first detection result when the directed line segment points to the detection device comprises:
judging whether the difference between the first included angle and the second included angle is larger than a set second threshold value or not, if so, retaining the first detection result, and sending the first detection result to the appointed equipment, so that the appointed equipment collects the first detection results from all the detection equipment;
the second included angle is an included angle between a directed line segment from the waist limb point to the head limb point in a non-falling state and the first direction perpendicular to the ground plane.
9. A detection apparatus, comprising a memory storing a computer program and a processor configured to execute the computer program to perform the steps of a method of reducing false fall detection as claimed in any one of claims 1 to 8.
10. A detection system comprising n detection apparatuses according to claim 9, wherein n is a natural number equal to or greater than 2.
CN202110635639.7A 2021-06-08 2021-06-08 Method and detection system for reducing false detection of falling behaviors Active CN113378692B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110635639.7A CN113378692B (en) 2021-06-08 2021-06-08 Method and detection system for reducing false detection of falling behaviors

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110635639.7A CN113378692B (en) 2021-06-08 2021-06-08 Method and detection system for reducing false detection of falling behaviors

Publications (2)

Publication Number Publication Date
CN113378692A (en) 2021-09-10
CN113378692B CN113378692B (en) 2023-09-15

Family

ID=77576424

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110635639.7A Active CN113378692B (en) 2021-06-08 2021-06-08 Method and detection system for reducing false detection of falling behaviors

Country Status (1)

Country Link
CN (1) CN113378692B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6081619A (en) * 1995-07-19 2000-06-27 Matsushita Electric Industrial Co., Ltd. Movement pattern recognizing apparatus for detecting movements of human bodies and number of passed persons
US20090121881A1 (en) * 2002-11-21 2009-05-14 Anders Fredriksson Method and device for fall prevention and detection
CN103118647A (en) * 2010-09-22 2013-05-22 松下电器产业株式会社 Exercise assistance system
CN105448039A (en) * 2014-08-21 2016-03-30 昆山市华正电子科技有限公司 Tumble-detecting method decreasing false alarm rate
CN108629946A (en) * 2018-06-14 2018-10-09 清华大学深圳研究生院 A kind of tumble detection method for human body based on RGBD sensors
CN108652637A (en) * 2018-06-30 2018-10-16 源珈力医疗器材国际贸易(上海)有限公司 Wearable tumble prediction protection system and prediction method thereof
US20210059569A1 (en) * 2019-08-29 2021-03-04 Panasonic Intellectual Property Corporation Of America Fall risk evaluation method, fall risk evaluation device, and non-transitory computer-readable recording medium in which fall risk evaluation program is recorded
CN112698288A (en) * 2020-11-17 2021-04-23 芜湖美的厨卫电器制造有限公司 Method, device, processor, water heater and monitoring system for recognizing gesture
CN112906548A (en) * 2021-02-07 2021-06-04 广东省科学院智能制造研究所 Fall detection method and system based on edge calculation

Also Published As

Publication number Publication date
CN113378692B (en) 2023-09-15

Similar Documents

Publication Publication Date Title
CN110348335B (en) Behavior recognition method and device, terminal equipment and storage medium
WO2020042419A1 (en) Gait-based identity recognition method and apparatus, and electronic device
WO2021227874A1 (en) Falling behaviour detection method and device
Delachaux et al. Indoor activity recognition by combining one-vs.-all neural network classifiers exploiting wearable and depth sensors
EP3284013A1 (en) Event detection and summarisation
US20200394384A1 (en) Real-time Aerial Suspicious Analysis (ASANA) System and Method for Identification of Suspicious individuals in public areas
CN111753643B (en) Character gesture recognition method, character gesture recognition device, computer device and storage medium
CN112720464B (en) Target picking method based on robot system, electronic equipment and storage medium
JP2018206321A (en) Image processing device, image processing method and image processing program
CN110738650B (en) Infectious disease infection identification method, terminal device and storage medium
JP2019185752A (en) Image extracting device
JP2020135551A (en) Object recognition device, object recognition method and object recognition program
Dubois et al. Person identification from gait analysis with a depth camera at home
US20240282147A1 (en) Action recognition method, action recognition device, and non-transitory computer readable recording medium
Volkhardt et al. People tracking on a mobile companion robot
CN110334609B (en) Intelligent real-time somatosensory capturing method
CN112101235A (en) Old people behavior identification and detection method based on old people behavior characteristics
Nouredanesh et al. Chasing feet in the wild: a proposed egocentric motion-aware gait assessment tool
JPWO2021250808A5 (en)
Fosty et al. Event recognition system for older people monitoring using an RGB-D camera
CN113378692A (en) Method and detection system for reducing false detection of falling behavior
JPWO2021229751A5 (en)
CN115527265A (en) Motion capture method and system based on physical training
US20220383652A1 (en) Monitoring Animal Pose Dynamics from Monocular Images
Rasouli et al. Dynamic posture estimation in a network of depth sensors using sample points

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant