CN114783052A - Tumbling identification method, device, equipment and storage medium based on human body multiple characteristics - Google Patents

Tumbling identification method, device, equipment and storage medium based on human body multiple characteristics

Info

Publication number
CN114783052A
CN114783052A
Authority
CN
China
Prior art keywords
video
key points
shoulder
frame
shoulders
Prior art date
Legal status
Pending
Application number
CN202210291201.6A
Other languages
Chinese (zh)
Inventor
陈伟杰
张宇星
陈彦榕
毛梦林
沈文增
Current Assignee
Zhijiang College of ZJUT
Original Assignee
Zhijiang College of ZJUT
Priority date
Filing date
Publication date
Application filed by Zhijiang College of ZJUT filed Critical Zhijiang College of ZJUT
Priority to CN202210291201.6A
Publication of CN114783052A
Legal status: Pending

Landscapes

  • Image Analysis (AREA)

Abstract

The application relates to a fall identification method, device, equipment and storage medium based on multiple human body characteristics. The method comprises the following steps: acquiring a video to be recognized, and inputting the video to be recognized into a preset video recognition model, the video to be recognized comprising a target person; acquiring position data of skeletal key points of the target person based on the video recognition model; and judging whether the target person is in a falling state based on the position data of the skeletal key points. The embodiment of the application determines, through the preset model, the difference values between the skeletal key points of different frame images in the video to be recognized to determine whether the target person is in a falling state, can effectively distinguish the two different states of lying down and falling, and improves the recognition accuracy.

Description

Tumbling identification method, device, equipment and storage medium based on human body multiple characteristics
Technical Field
The application relates to the technical field of video processing, and in particular to a fall identification method, device, equipment and storage medium based on multiple human body characteristics.
Background
With the intensification of population aging and children leaving home for work, the number of elderly people living alone keeps increasing. Because homes lack rapid detection and timely medical response measures, falls have become one of the threats to the life safety of elderly people living alone. Existing fall detection easily misjudges actions such as lying down, so its accuracy is not high, and its recognition of the human body's physical state is also not precise enough. This leads to situations where a fall of an elderly person cannot be accurately determined, or a fall is wrongly recognized when the elderly person has not fallen. It can thus be seen that the prior art has a considerable probability of misjudgment when recognizing a sudden fall or a non-fall action such as lying down, which needs to be solved.
Disclosure of Invention
In order to solve or at least partially solve the problems in the related art, the application provides a fall identification method, a device, an electronic device and a computer-readable storage medium based on multiple human body characteristics, which can accurately calibrate the target skeletal key points and offer high real-time performance and accuracy when applied to single-person detection (such as fall detection for elderly people living alone).
A first aspect of the application provides a fall identification method based on multiple human body characteristics, which comprises the following steps:
acquiring a video to be recognized, and inputting the video to be recognized into a preset video recognition model; the video to be recognized comprises a target person;
acquiring position data of skeletal key points of the target person based on the video recognition model;
and judging whether the target person is in a falling state based on the position data of the skeletal key points.
As a possible embodiment of the present application, in this embodiment, the obtaining of the position data of the skeletal key points of the target person based on the video recognition model includes:
determining the ankle key point y-axis coordinates in the mth frame image of the video to be recognized;
acquiring the shoulder key point y-axis coordinates in the mth frame image;
and calculating the difference between the shoulder key point y-axis coordinates and the ankle key point y-axis coordinates to determine the shoulder-foot height difference of the target person.
As a possible embodiment of the present application, in this embodiment, the obtaining of the position data of the skeletal key points of the target person based on the video recognition model includes:
acquiring the shoulder key point x-axis coordinates in the mth frame image;
and taking the difference between the two shoulder key point x-axis coordinates as the shoulder width of the target person.
As a possible embodiment of the present application, in this embodiment, the obtaining of the position data of the skeletal key points of the target person based on the video recognition model includes:
calculating the shoulder key point coordinates of the (m+n)th frame by taking the mth frame as a reference frame;
and calculating the difference between the shoulder key point y-axis coordinates of the mth frame and the (m+n)th frame.
As a possible embodiment of the present application, in this embodiment, the determining whether the target person is in a falling state based on the position data of the skeletal key points includes:
comparing the shoulder-foot height difference with the shoulder width value, and comparing the difference between the shoulder key point y-axis coordinates of the mth frame and the (m+n)th frame with half of the shoulder-foot height difference;
and when the shoulder-foot height difference is smaller than the shoulder width value and/or the difference between the shoulder key point y-axis coordinates of the mth frame and the (m+n)th frame is smaller than half of the shoulder-foot height difference, judging that the target person is in a static falling state.
As a possible embodiment of the present application, in this embodiment, the determining whether the target person is in a falling state based on the position data of the skeletal key points includes:
acquiring the shoulder key point coordinates in two consecutive frame images;
calculating the shoulder longitudinal speed based on the interval time between the two consecutive frame images and the shoulder key point coordinates;
and when the shoulder longitudinal speed is greater than a preset threshold, judging that the target person is in a dynamic falling state.
A second aspect of the application provides a fall recognition device based on multiple human body characteristics, which includes:
a video acquisition module, configured to acquire a video to be recognized and input the video to be recognized into a preset video recognition model; the video to be recognized comprises a target person;
a key point recognition module, configured to acquire position data of skeletal key points of the target person based on the video recognition model;
and a fall judgment module, configured to judge whether the target person is in a falling state based on the position data of the skeletal key points.
As a possible embodiment of the present application, in this embodiment, the obtaining of the position data of the skeletal key points of the target person based on the video recognition model includes:
determining the ankle key point y-axis coordinates in the mth frame image of the video to be recognized;
acquiring the shoulder key point y-axis coordinates in the mth frame image;
and calculating the difference between the shoulder key point y-axis coordinates and the ankle key point y-axis coordinates to determine the shoulder-foot height difference of the target person.
As a possible embodiment of the present application, in this embodiment, the obtaining of the position data of the skeletal key points of the target person based on the video recognition model includes:
acquiring the shoulder key point x-axis coordinates in the mth frame image;
and taking the difference between the two shoulder key point x-axis coordinates as the shoulder width of the target person.
As a possible embodiment of the present application, in this embodiment, the obtaining of the position data of the skeletal key points of the target person based on the video recognition model includes:
calculating the shoulder key point coordinates of the (m+n)th frame by taking the mth frame as a reference frame;
and calculating the difference between the shoulder key point y-axis coordinates of the mth frame and the (m+n)th frame.
As a possible embodiment of the present application, in this embodiment, the determining whether the target person is in a falling state based on the position data of the skeletal key points includes:
comparing the shoulder-foot height difference with the shoulder width value, and comparing the difference between the shoulder key point y-axis coordinates of the mth frame and the (m+n)th frame with half of the shoulder-foot height difference;
and when the shoulder-foot height difference is smaller than the shoulder width value and/or the difference between the shoulder key point y-axis coordinates of the mth frame and the (m+n)th frame is smaller than half of the shoulder-foot height difference, judging that the target person is in a static falling state.
As a possible embodiment of the present application, in this embodiment, the determining whether the target person is in a falling state based on the position data of the skeletal key points includes:
acquiring the shoulder key point coordinates in two consecutive frame images;
calculating the shoulder longitudinal speed based on the interval time between the two consecutive frame images and the shoulder key point coordinates;
and when the shoulder longitudinal speed is greater than a preset threshold, judging that the target person is in a dynamic falling state.
A third aspect of the present application provides an electronic device comprising:
a processor; and
a memory having executable code stored thereon which, when executed by the processor, causes the processor to perform the method as described above.
A fourth aspect of the present application provides a computer-readable storage medium having stored thereon executable code, which, when executed by a processor of an electronic device, causes the processor to perform the method as described above.
The embodiment of the application determines, through the preset model, the difference values between the skeletal key points of different frame images in the video to be recognized to determine whether the target person is in a falling state, can effectively distinguish the two different states of lying down and falling, and improves the recognition accuracy.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The foregoing and other objects, features and advantages of the application will be apparent from the following more particular descriptions of exemplary embodiments of the application as illustrated in the accompanying drawings wherein like reference numbers generally represent like parts throughout the exemplary embodiments of the application.
Fig. 1 is a flow chart of a fall identification method based on multiple characteristics of a human body according to an embodiment of the present application;
fig. 2 is a schematic flow chart of a method for obtaining a shoulder-foot height difference according to an embodiment of the present application;
FIG. 3 is a flow chart illustrating a method for obtaining shoulder width according to an embodiment of the present application;
fig. 4 is a schematic flowchart of a method for obtaining the shoulder height change between frames according to an embodiment of the present application;
fig. 5 is a schematic flow chart of a static fall determination method according to an embodiment of the present application;
FIG. 6 is a flowchart illustrating a static fall determination according to an embodiment of the present application;
fig. 7 is a schematic flow chart of a dynamic fall determination method according to an embodiment of the present application;
fig. 8 is a flowchart illustrating a dynamic fall determination process according to an embodiment of the present application;
fig. 9 is a schematic structural diagram of a fall recognition device based on multiple characteristics of a human body according to an embodiment of the present application;
fig. 10 is a schematic structural diagram of an electronic device shown in an embodiment of the present application.
Detailed Description
Embodiments of the present application will be described in more detail below with reference to the accompanying drawings. While embodiments of the present application are illustrated in the accompanying drawings, it should be understood that the present application may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this application and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It should be understood that although the terms "first," "second," "third," etc. may be used herein to describe various information, these information should not be limited to these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present application. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the present application, "a plurality" means two or more unless specifically limited otherwise.
With the intensification of population aging and children leaving home for work, the number of elderly people living alone keeps increasing. Because homes lack rapid detection and timely medical response measures, falls have become one of the threats to the life safety of elderly people living alone. Existing fall detection easily misjudges actions such as lying down, so its accuracy is not high, and its recognition of the human body's physical state is also not precise enough. This leads to situations where a fall of an elderly person cannot be accurately determined, or a fall is wrongly recognized when the elderly person has not fallen. It can thus be seen that the prior art has a considerable probability of misjudgment when recognizing a sudden fall or a non-fall action such as lying down, which needs to be solved.
In order to solve the above problems, the embodiment of the application provides a fall identification method based on multiple human body features, which can accurately calibrate the target skeletal key points and offers high real-time performance and accuracy when applied to single-person detection (such as fall detection for elderly people living alone).
The technical solutions of the embodiments of the present application are described in detail below with reference to the accompanying drawings.
Fig. 1 is a flow chart of a fall recognition method based on multiple characteristics of a human body according to an embodiment of the present application.
Referring to fig. 1, the fall identification method based on human body multiple features provided in the embodiment of the present application includes:
step S101, acquiring a video to be recognized, and inputting the video to be recognized into a preset video recognition model; the video to be recognized comprises a target person;
step S102, acquiring position data of skeletal key points of the target person based on the video recognition model;
step S103, judging whether the target person is in a falling state based on the position data of the skeletal key points.
In the embodiment of the application, the video to be recognized refers to a video acquired of a target user, and at least includes posture information of the target user, such as an activity video of the target user captured by home monitoring. The video to be recognized is input into a preset video recognition model (such as a MediaPipe pose model), which recognizes the target person in the video and acquires the position data of the target person's skeletal key points, where the skeletal key points include the shoulders and feet of the target person. After the position data of the target person's skeletal key points is acquired, whether the target person has fallen is judged based on that position data.
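For concreteness only (this is not patent text), the per-frame key point extraction described above can be sketched with the open-source MediaPipe Pose library; the choice of MediaPipe is our assumption, since the description only specifies "a preset video recognition model", and the landmark names and normalized [0, 1] coordinates below are MediaPipe conventions:

```python
# Sketch of per-frame skeletal key point extraction, assuming a MediaPipe
# Pose model stands in for the patent's "preset video recognition model".
import cv2
import mediapipe as mp

mp_pose = mp.solutions.pose

def extract_keypoints(video_path):
    """Yield (frame_index, keypoints), where keypoints maps a name to (x, y)
    in normalized [0, 1] image coordinates (y grows downward)."""
    wanted = {
        "left_shoulder": mp_pose.PoseLandmark.LEFT_SHOULDER,
        "right_shoulder": mp_pose.PoseLandmark.RIGHT_SHOULDER,
        "left_ankle": mp_pose.PoseLandmark.LEFT_ANKLE,
        "right_ankle": mp_pose.PoseLandmark.RIGHT_ANKLE,
    }
    cap = cv2.VideoCapture(video_path)
    with mp_pose.Pose(static_image_mode=False) as pose:
        index = 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            # MediaPipe expects RGB input; OpenCV decodes frames as BGR.
            result = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if result.pose_landmarks:  # the target person was detected in this frame
                lm = result.pose_landmarks.landmark
                yield index, {name: (lm[i].x, lm[i].y) for name, i in wanted.items()}
            index += 1
    cap.release()
```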
As one possible implementation, the activity video of the target person is captured in real time by a pre-installed monitoring device, a preset video recognition model is loaded, and the captured video to be recognized is input into the video recognition model. Variables are initialized, including the shoulder width judgment threshold W_n = 0, the shoulder width W = 0, the shoulder-foot height difference V = 0, the shoulder height change D = 0, the left ankle y-axis coordinate A_l = 0, the right ankle y-axis coordinate A_r = 0, the left shoulder y-axis coordinate S_l = 0, the right shoulder y-axis coordinate S_r = 0, the left shoulder x-axis coordinate S_lw = 0, and the right shoulder x-axis coordinate S_rw = 0. The coordinate values of the target person's skeletal key points are then calculated across different frame images, and whether the target person has fallen is judged based on these coordinate values.
As a possible embodiment of the present application, in this embodiment, as shown in fig. 2, the obtaining of the position data of the skeletal key points of the target person based on the video recognition model includes:
step S201, determining the ankle key point y-axis coordinates in the mth frame image of the video to be recognized;
step S202, acquiring the shoulder key point y-axis coordinates in the mth frame image;
step S203, calculating the difference between the shoulder key point y-axis coordinates and the ankle key point y-axis coordinates to determine the shoulder-foot height difference of the target person.
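A minimal sketch of steps S201-S203 under the conventions of the extraction sketch above; averaging the left and right key points is our assumption, since the patent only names the key points themselves:

```python
def shoulder_foot_height_difference(kp):
    """V (steps S201-S203): difference between the shoulder and ankle key
    point y-axis coordinates; left/right averaging is an assumption."""
    shoulder_y = (kp["left_shoulder"][1] + kp["right_shoulder"][1]) / 2
    ankle_y = (kp["left_ankle"][1] + kp["right_ankle"][1]) / 2
    return abs(ankle_y - shoulder_y)
```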
As a possible embodiment of the present application, in this embodiment, as shown in fig. 3, the obtaining of the position data of the skeletal key points of the target person based on the video recognition model includes:
step S301, acquiring the shoulder key point x-axis coordinates in the mth frame image;
step S302, taking the difference between the two shoulder key point x-axis coordinates as the shoulder width of the target person.
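Step S302 in the same illustrative style, taking the absolute difference of the two shoulder x-axis coordinates:

```python
def shoulder_width(kp):
    """W (step S302): difference between the two shoulder key point
    x-axis coordinates."""
    return abs(kp["left_shoulder"][0] - kp["right_shoulder"][0])
```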
As a possible embodiment of the present application, in this embodiment, as shown in fig. 4, the obtaining of the position data of the skeletal key points of the target person based on the video recognition model includes:
step S401, taking the mth frame as a reference frame, calculating the shoulder key point coordinates of the (m+n)th frame;
step S402, calculating the difference between the shoulder key point y-axis coordinates of the mth frame and the (m+n)th frame.
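A sketch of steps S401-S402; collapsing both shoulders into a single mean height is again our assumption, as the description computes the left and right differences separately:

```python
def shoulder_height_change(kp_m, kp_m_plus_n):
    """D (steps S401-S402): change in the shoulder key point y-axis
    coordinates between frame m and frame m+n; the mean of both
    shoulders is an assumption."""
    y_m = (kp_m["left_shoulder"][1] + kp_m["right_shoulder"][1]) / 2
    y_n = (kp_m_plus_n["left_shoulder"][1] + kp_m_plus_n["right_shoulder"][1]) / 2
    return abs(y_n - y_m)
```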
As one possible embodiment of the present application, in this embodiment, as shown in fig. 5, the determining whether the target person is in a falling state based on the position data of the skeletal key points includes:
step S501, comparing the shoulder-foot height difference with the shoulder width value, and comparing the difference between the shoulder key point y-axis coordinates of the mth frame and the (m+n)th frame with half of the shoulder-foot height difference;
step S502, when the shoulder-foot height difference is smaller than the shoulder width value and/or the difference between the shoulder key point y-axis coordinates of the mth frame and the (m+n)th frame is smaller than half of the shoulder-foot height difference, determining that the target person is in a static falling state.
In the embodiment of the present application, fig. 6 shows a flow chart of the fall judgment method. When recognizing the video to be recognized and judging whether the target person falls, a certain frame image in the video to be recognized is taken as the current frame x. The ankle key point y-axis coordinates of the current frame x are acquired: A_l_x and A_r_x. The shoulder key point y-axis coordinates of the current frame x are acquired: S_l_x and S_r_x, and the coordinate difference V between the two groups of key points is calculated. The shoulder key point x-axis coordinates of the current frame x are acquired: S_lw_x and S_rw_x, and the difference between these two x-axis coordinate values gives the shoulder width W_x of frame x. Whether W_x > W_n is then judged: if the condition is satisfied, the value of W is updated to W_x; if not, the values passed in by subsequent frames are recalculated and judged again. Taking frame x as the reference frame, the shoulder key point coordinates 10 frames later are recorded as S_l_{x+10} and S_r_{x+10}, and the differences between S_l_x and S_l_{x+10} and between S_r_x and S_r_{x+10} are calculated as D. The relationship among the three feature values V, W and D is then judged to determine whether the static fall condition is satisfied: when either condition V < W or D < 0.5V holds, the condition is judged to be satisfied, indicating that the target person is in a static falling state.
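Putting the three feature values together, the fig. 6 decision could look as follows; treating the two sub-conditions as an inclusive OR mirrors the "and/or" wording above:

```python
def is_static_fall(V, W, D):
    """Static fall condition from fig. 6: satisfied when V < W or D < 0.5 * V,
    per the description ("and/or" read as an inclusive OR)."""
    return V < W or D < 0.5 * V
```

For example, V, W and D could be computed from frames m and m+10 of extract_keypoints(...) using the three helper sketches above.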
The embodiment of the application determines, through the preset model, the difference values between the skeletal key points of different frame images in the video to be recognized to determine whether the target person is in a falling state, can effectively distinguish the two different states of lying down and falling, and improves the recognition accuracy.
As one possible embodiment of the present application, in this embodiment, as shown in fig. 7, the determining whether the target person is in a falling state based on the position data of the skeletal key points includes:
step S701, acquiring the shoulder key point coordinates in two consecutive frame images;
step S702, calculating the shoulder longitudinal speed based on the interval time between the two consecutive frame images and the shoulder key point coordinates;
step S703, when the shoulder longitudinal speed is greater than a preset threshold, judging that the target person is in a dynamic falling state.
In the embodiment of the present application, as shown in fig. 8, in order to ensure the accuracy and comprehensiveness of fall recognition, falls are divided into a static falling state and a dynamic falling state. When recognizing the dynamic falling state, whether the target person is in a dynamic falling state can be determined by calculating the shoulder longitudinal speed. First, an iteration variable t is set to 0 and the longitudinal inter-frame speed S is set to 0. The difference between the shoulder coordinate values of two consecutive frames in the video to be recognized is obtained, and the shoulder longitudinal speed S of the target person is determined based on this difference and the time between the two frames. Whether the condition S ≥ 0.5W is satisfied is then judged; if so, the value of t is incremented by one, and once S does not meet the judgment condition, the value of t is cleared to zero. When the value of t exceeds a threshold, the dynamic fall condition is judged to be met, indicating that the target person is in the process of falling.
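An illustrative sketch of this counter logic; the consecutive-frame threshold T is our placeholder for the unspecified threshold on t, and the frame interval dt would come from the video's frame rate:

```python
def is_dynamic_fall(shoulder_ys, dt, W, T=5):
    """Dynamic fall test sketched from fig. 8: count consecutive frames whose
    downward shoulder speed S satisfies S >= 0.5 * W, resetting the counter t
    otherwise. T is a hypothetical value for the unspecified threshold on t.

    shoulder_ys: per-frame shoulder y-axis coordinates (y grows downward).
    dt: interval time between two consecutive frames, in seconds.
    W: shoulder width, maintained as in fig. 6."""
    t = 0
    for prev, cur in zip(shoulder_ys, shoulder_ys[1:]):
        S = (cur - prev) / dt  # positive when the shoulders move downward
        if S >= 0.5 * W:
            t += 1
            if t > T:
                return True  # dynamic fall condition met: person is falling
        else:
            t = 0  # clear t once the condition is not met
    return False
```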
According to the embodiment of the application, the shoulder height of the target person is continuously calculated between consecutive frames and the downward sliding speed of the shoulders is derived; when the shoulders keep dropping at high speed, the target person is judged to be in a dynamic falling state, making the recognition more accurate.
Corresponding to the embodiments of the above function implementation method, the application also provides a fall recognition device based on multiple human body features, an electronic device, and corresponding embodiments.
Fig. 9 is a schematic structural diagram of a human body multi-feature-based fall recognition device according to an embodiment of the present application.
Referring to fig. 9, the fall recognition device based on human body multiple features according to the embodiment of the present application includes a video acquisition module 910, a key point recognition module 920, and a fall determination module 930, wherein:
the video acquiring module 910 is configured to acquire a video to be identified, and input the video to be identified into a preset video identification model; the video to be recognized comprises a target person;
a key point identification module 920, configured to obtain location data of skeletal key points of the target person based on the video identification model;
a fall determination module 930, configured to determine whether the target task is in a fall state based on the position data of the bone key point.
As a possible embodiment of the present application, in this embodiment, the obtaining of the position data of the skeletal key points of the target person based on the video recognition model includes:
determining the ankle key point y-axis coordinates in the mth frame image of the video to be recognized;
acquiring the shoulder key point y-axis coordinates in the mth frame image;
and calculating the difference between the shoulder key point y-axis coordinates and the ankle key point y-axis coordinates to determine the shoulder-foot height difference of the target person.
As a possible embodiment of the present application, in this embodiment, the obtaining of the position data of the skeletal key points of the target person based on the video recognition model includes:
acquiring the shoulder key point x-axis coordinates in the mth frame image;
and taking the difference between the two shoulder key point x-axis coordinates as the shoulder width of the target person.
As a possible embodiment of the present application, in this embodiment, the obtaining of the position data of the skeletal key points of the target person based on the video recognition model includes:
calculating the shoulder key point coordinates of the (m+n)th frame by taking the mth frame as a reference frame;
and calculating the difference between the shoulder key point y-axis coordinates of the mth frame and the (m+n)th frame.
As a possible embodiment of the present application, in this embodiment, the determining whether the target person is in a falling state based on the position data of the skeletal key points includes:
comparing the shoulder-foot height difference with the shoulder width value, and comparing the difference between the shoulder key point y-axis coordinates of the mth frame and the (m+n)th frame with half of the shoulder-foot height difference;
and when the shoulder-foot height difference is smaller than the shoulder width value and/or the difference between the shoulder key point y-axis coordinates of the mth frame and the (m+n)th frame is smaller than half of the shoulder-foot height difference, judging that the target person is in a static falling state.
As a possible embodiment of the present application, in this embodiment, the determining whether the target person is in a falling state based on the position data of the skeletal key points includes:
acquiring the shoulder key point coordinates in two consecutive frame images;
calculating the shoulder longitudinal speed based on the interval time between the two consecutive frame images and the shoulder key point coordinates;
and when the shoulder longitudinal speed is greater than a preset threshold, judging that the target person is in a dynamic falling state.
In the embodiment of the application, the video to be recognized refers to a video acquired of a target user, and at least includes posture information of the target user, such as an activity video of the target user captured by home monitoring. The video to be recognized is input into a preset video recognition model (such as a MediaPipe pose model), which recognizes the target person in the video and acquires the position data of the target person's skeletal key points, where the skeletal key points include the shoulders and feet of the target person. After the position data of the target person's skeletal key points is acquired, whether the target person has fallen is judged based on that position data.
As one possible implementation, the activity video of the target person is captured in real time by a pre-installed monitoring device, a preset video recognition model is loaded, and the captured video to be recognized is input into the video recognition model. Variables are initialized, including the shoulder width judgment threshold W_n = 0, the shoulder width W = 0, the shoulder-foot height difference V = 0, the shoulder height change D = 0, the left ankle y-axis coordinate A_l = 0, the right ankle y-axis coordinate A_r = 0, the left shoulder y-axis coordinate S_l = 0, the right shoulder y-axis coordinate S_r = 0, the left shoulder x-axis coordinate S_lw = 0, and the right shoulder x-axis coordinate S_rw = 0. The coordinate values of the target person's skeletal key points are then calculated across different frame images, and whether the target person has fallen is judged based on these coordinate values.
In the embodiment of the present application, fig. 6 shows a flow chart of the fall judgment method. When recognizing the video to be recognized and judging whether the target person falls, a certain frame image in the video to be recognized is taken as the current frame x. The ankle key point y-axis coordinates of the current frame x are acquired: A_l_x and A_r_x. The shoulder key point y-axis coordinates of the current frame x are acquired: S_l_x and S_r_x, and the coordinate difference V between the two groups of key points is calculated. The shoulder key point x-axis coordinates of the current frame x are acquired: S_lw_x and S_rw_x, and the difference between these two x-axis coordinate values gives the shoulder width W_x of frame x. Whether W_x > W_n is then judged: if the condition is satisfied, the value of W is updated to W_x; if not, the values passed in by subsequent frames are recalculated and judged again. Taking frame x as the reference frame, the shoulder key point coordinates 10 frames later are recorded as S_l_{x+10} and S_r_{x+10}, and the differences between S_l_x and S_l_{x+10} and between S_r_x and S_r_{x+10} are calculated as D. The relationship among the three feature values V, W and D is then judged to determine whether the static fall condition is satisfied: when either condition V < W or D < 0.5V holds, the condition is judged to be satisfied, indicating that the target person is in a static falling state.
The embodiment of the application determines, through the preset model, the difference values between the skeletal key points of different frame images in the video to be recognized to determine whether the target person is in a falling state, can effectively distinguish the two different states of lying down and falling, and improves the recognition accuracy.
In the embodiment of the present application, as shown in fig. 8, in order to ensure the accuracy and comprehensiveness of fall recognition, falls are divided into a static falling state and a dynamic falling state. When recognizing the dynamic falling state, whether the target person is in a dynamic falling state can be determined by calculating the shoulder longitudinal speed. First, an iteration variable t is set to 0 and the longitudinal inter-frame speed S is set to 0. The difference between the shoulder coordinate values of two consecutive frames in the video to be recognized is obtained, and the shoulder longitudinal speed S of the target person is determined based on this difference and the time between the two frames. Whether the condition S ≥ 0.5W is satisfied is then judged; if so, the value of t is incremented by one, and once S does not meet the judgment condition, the value of t is cleared to zero. When the value of t exceeds a threshold, the dynamic fall condition is judged to be met, indicating that the target person is in the process of falling.
According to the embodiment of the application, the shoulder height of the target person is continuously calculated between consecutive frames and the downward sliding speed of the shoulders is derived; when the shoulders keep dropping at high speed, the target person is judged to be in a dynamic falling state, making the recognition more accurate.
With regard to the apparatus in the above embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated herein.
Fig. 10 is a schematic structural diagram of an electronic device shown in an embodiment of the present application.
Referring to fig. 10, the electronic device 1000 includes a memory 1010 and a processor 1020.
The processor 1020 may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The memory 1010 may include various types of storage units, such as system memory, read-only memory (ROM), and a persistent storage device. The ROM may store, among other things, static data or instructions for the processor 1020 or other modules of the computer. The persistent storage device may be a read-write storage device, and may be a non-volatile storage device that does not lose stored instructions and data even after the computer is powered down. In some embodiments, the persistent storage device employs a mass storage device (e.g., a magnetic or optical disk, or flash memory). In other embodiments, the persistent storage device may be a removable storage device (e.g., a floppy disk or optical drive). The system memory may be a read-write memory device or a volatile read-write memory device, such as dynamic random access memory, and may store instructions and data that some or all of the processors require at run time. Further, the memory 1010 may comprise any combination of computer-readable storage media, including various types of semiconductor memory chips (e.g., DRAM, SRAM, SDRAM, flash memory, programmable read-only memory), and magnetic and/or optical disks. In some embodiments, the memory 1010 may include a readable and/or writable removable storage device, such as a compact disc (CD), a read-only digital versatile disc (e.g., DVD-ROM or dual-layer DVD-ROM), a read-only Blu-ray disc, an ultra-density optical disc, a flash memory card (e.g., an SD card, a mini SD card, a Micro-SD card, etc.), or a magnetic floppy disk. Computer-readable storage media do not contain carrier waves or transitory electronic signals transmitted by wireless or wired means.
The memory 1010 has stored thereon executable code that, when processed by the processor 1020, causes the processor 1020 to perform some or all of the methods described above.
Furthermore, the method according to the present application may also be implemented as a computer program or computer program product comprising computer program code instructions for performing some or all of the steps of the above-described method of the present application.
Alternatively, the present application may also be embodied as a computer-readable storage medium (or non-transitory machine-readable storage medium or machine-readable storage medium) having executable code (or a computer program or computer instruction code) stored thereon, which, when executed by a processor of an electronic device (or server, etc.), causes the processor to perform part or all of the steps of the above-described methods according to the present application.
The foregoing description of the embodiments of the present application has been presented for purposes of illustration and description and is not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein is chosen in order to best explain the principles of the embodiments, the practical application, or improvements made to the technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (10)

1. A fall identification method based on human body multiple characteristics is characterized by comprising the following steps:
acquiring a video to be recognized, and inputting the video to be recognized into a preset video recognition model; the video to be recognized comprises a target person;
acquiring position data of skeletal key points of the target person based on the video recognition model;
and judging whether the target person is in a falling state based on the position data of the skeletal key points.
2. The human body multi-feature-based fall identification method according to claim 1, wherein the obtaining of the position data of the skeletal key points of the target person based on the video recognition model comprises:
determining the ankle key point y-axis coordinates in the mth frame image of the video to be recognized;
acquiring the shoulder key point y-axis coordinates in the mth frame image;
and calculating the difference between the shoulder key point y-axis coordinates and the ankle key point y-axis coordinates to determine the shoulder-foot height difference of the target person.
3. The human body multi-feature-based fall identification method according to claim 2, wherein the obtaining of the position data of the skeletal key points of the target person based on the video recognition model comprises:
acquiring the shoulder key point x-axis coordinates in the mth frame image;
and taking the difference between the two shoulder key point x-axis coordinates as the shoulder width of the target person.
4. The human body multi-feature-based fall identification method according to claim 3, wherein the obtaining of the position data of the skeletal key points of the target person based on the video recognition model comprises:
calculating the shoulder key point coordinates of the (m+n)th frame by taking the mth frame as a reference frame;
and calculating the difference between the shoulder key point y-axis coordinates of the mth frame and the (m+n)th frame.
5. The human body multi-feature-based fall identification method according to claim 4, wherein the determining whether the target person is in a falling state based on the position data of the skeletal key points comprises:
comparing the shoulder-foot height difference with the shoulder width value, and comparing the difference between the shoulder key point y-axis coordinates of the mth frame and the (m+n)th frame with half of the shoulder-foot height difference;
and when the shoulder-foot height difference is smaller than the shoulder width value and/or the difference between the shoulder key point y-axis coordinates of the mth frame and the (m+n)th frame is smaller than half of the shoulder-foot height difference, judging that the target person is in a static falling state.
6. The human body multi-feature-based fall identification method according to claim 4, wherein the determining whether the target person is in a falling state based on the position data of the skeletal key points comprises:
acquiring the shoulder key point coordinates in two consecutive frame images;
calculating the shoulder longitudinal speed based on the interval time between the two consecutive frame images and the shoulder key point coordinates;
and when the shoulder longitudinal speed is greater than a preset threshold, judging that the target person is in a dynamic falling state.
7. A fall recognition device based on multiple human body characteristics, the device comprising:
a video acquisition module, configured to acquire a video to be recognized and input the video to be recognized into a preset video recognition model; the video to be recognized comprises a target person;
a key point recognition module, configured to acquire position data of skeletal key points of the target person based on the video recognition model;
and a fall judgment module, configured to judge whether the target person is in a falling state based on the position data of the skeletal key points.
8. The human body multi-feature-based fall recognition device according to claim 7, wherein the device is further configured to:
acquire the shoulder key point coordinates in two consecutive frame images;
calculate the shoulder longitudinal speed based on the interval time between the two consecutive frame images and the shoulder key point coordinates;
and when the shoulder longitudinal speed is greater than a preset threshold, judge that the target person is in a dynamic falling state.
9. An apparatus, comprising:
a processor; and
a memory having executable code stored thereon, which when executed by the processor, causes the processor to perform the method of any one of claims 1-6.
10. A storage medium having stored thereon executable code which, when executed by a processor of an electronic device, causes the processor to perform the method of any one of claims 1-6.
CN202210291201.6A 2022-03-23 2022-03-23 Tumbling identification method, device, equipment and storage medium based on human body multiple characteristics Pending CN114783052A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210291201.6A CN114783052A (en) 2022-03-23 2022-03-23 Tumbling identification method, device, equipment and storage medium based on human body multiple characteristics

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210291201.6A CN114783052A (en) 2022-03-23 2022-03-23 Tumbling identification method, device, equipment and storage medium based on human body multiple characteristics

Publications (1)

Publication Number Publication Date
CN114783052A (en)

Family

ID=82425787

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210291201.6A Pending CN114783052A (en) 2022-03-23 2022-03-23 Tumbling identification method, device, equipment and storage medium based on human body multiple characteristics

Country Status (1)

Country Link
CN (1) CN114783052A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116553327A (en) * 2023-07-10 2023-08-08 通用电梯股份有限公司 Method and device for detecting falling of passengers in home elevator car
CN116553327B (en) * 2023-07-10 2023-09-08 通用电梯股份有限公司 Method and device for detecting falling of passengers in home elevator car


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination