CN117523652A - Fall detection method and device, electronic equipment and storage medium - Google Patents

Fall detection method and device, electronic equipment and storage medium

Info

Publication number
CN117523652A
Authority
CN
China
Prior art keywords
point cloud
space
human body
falling
detected
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210886760.1A
Other languages
Chinese (zh)
Inventor
葛鲁振
陶瑞涛
劳春峰
王东岳
赵辉
陈志富
赵庆海
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Qingdao Haier Smart Technology R&D Co Ltd
Haier Smart Home Co Ltd
Original Assignee
Qingdao Haier Smart Technology R&D Co Ltd
Haier Smart Home Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qingdao Haier Smart Technology R&D Co Ltd and Haier Smart Home Co Ltd
Priority to CN202210886760.1A
Publication of CN117523652A
Legal status: Pending


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 Movements or behaviour, e.g. gesture recognition
    • G06V40/23 Recognition of whole body movements, e.g. for sport training
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/136 Segmentation; Edge detection involving thresholding
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G PHYSICS
    • G08 SIGNALLING
    • G08B SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B21/00 Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
    • G08B21/02 Alarms for ensuring the safety of persons
    • G08B21/04 Alarms for ensuring the safety of persons responsive to non-activity, e.g. of elderly persons
    • G08B21/0407 Alarms for ensuring the safety of persons responsive to non-activity, e.g. of elderly persons based on behaviour analysis
    • G08B21/043 Alarms for ensuring the safety of persons responsive to non-activity, e.g. of elderly persons based on behaviour analysis detecting an emergency event, e.g. a fall
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04B TRANSMISSION
    • H04B1/00 Details of transmission systems, not covered by a single one of groups H04B3/00 - H04B13/00; Details of transmission systems not characterised by the medium used for transmission
    • H04B1/38 Transceivers, i.e. devices in which transmitter and receiver form a structural unit and in which at least one part is used for functions of transmitting and receiving
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Psychology (AREA)
  • Gerontology & Geriatric Medicine (AREA)
  • Business, Economics & Management (AREA)
  • Emergency Management (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Emergency Alarm Devices (AREA)

Abstract

The invention provides a fall detection method, a fall detection device, an electronic device and a storage medium, and relates to the technical field of data processing. The method comprises the following steps: acquiring a full-space point cloud of a space to be detected; performing feature extraction processing on the full-space point cloud to extract curvature feature vectors corresponding to the full-space point cloud; segmenting the full-space point cloud based on the curvature feature vectors to obtain a human body point cloud; extracting human body characteristic information corresponding to the human body point cloud; judging whether a falling behavior exists in the space to be detected according to the human body characteristic information; and, when it is determined that a falling behavior exists in the space to be detected, sending falling information corresponding to the falling behavior to a user terminal. The embodiments provided by the invention can detect more accurately whether a human body has fallen, improve the fall recognition rate, reduce the probability of false early warnings, and detect and warn of human falls in a timely manner.

Description

Fall detection method and device, electronic equipment and storage medium
Technical Field
The present invention relates to the field of data processing technologies, and in particular, to a fall detection method, a fall detection device, an electronic device, and a storage medium.
Background
In places such as home environments and nursing homes, falls among the elderly are common and their consequences are usually serious. Surveys show that more than half of the elderly who visit medical facilities for injuries do so because of falls, and falls are the main cause of traumatic fractures. Falls therefore seriously threaten the health of the elderly, and timely detection and treatment after a fall are key measures for safeguarding their health.
In the related art, the main technical approaches to fall detection include RGB cameras, RGBD cameras, thermal infrared imaging, UWB, Wi-Fi, 3D point clouds (including millimeter-wave radar, lidar, structured light and ToF cameras) and wearable schemes, wherein:
detection schemes based on RGB cameras cannot adequately protect user privacy;
detection schemes based on RGBD cameras cannot fully exploit the shape and position characteristics of the user and the environment, and likewise cannot adequately protect user privacy;
fall detection schemes based on thermal infrared imaging are easily affected by heat-source noise and are not suitable for environments such as bathrooms and kitchens;
fall detection schemes based on UWB suffer from the low spectrum utilization and low data transmission rate of pulsed UWB systems;
fall detection schemes based on Wi-Fi suffer from poor recognition accuracy and relatively high false-alarm and missed-alarm rates;
wearable fall detection devices are costly, require the user to wear a designated device, hinder the user's movement, and are also limited by power consumption and communication transmission.
Therefore, the prior art suffers from poor fall detection stability and low recognition accuracy in the home environment.
Disclosure of Invention
The invention provides a fall detection method, a fall detection device, an electronic device and a storage medium, which can detect more accurately whether a human body has fallen, improve the fall recognition rate, reduce the probability of false early warnings, and detect and warn of human falls in a timely manner.
In a first aspect, the invention provides a fall detection method comprising:
acquiring a full space point cloud of a space to be detected;
performing feature extraction processing on the full-space point cloud to extract curvature feature vectors corresponding to the full-space point cloud;
based on the curvature feature vector, dividing the full-space point cloud to obtain a human body point cloud;
extracting the human body point cloud to extract human body characteristic information corresponding to the human body point cloud;
Judging whether the space to be detected has falling behaviors according to the human body characteristic information;
and under the condition that the falling behavior exists in the space to be detected, sending falling information corresponding to the falling behavior to a user terminal.
Preferably, according to the fall detection method provided by the present invention, the acquiring the full space point cloud of the space to be detected includes:
collecting a plurality of three-dimensional point clouds of the space to be detected through a preset point cloud terminal;
performing splicing processing on the plurality of three-dimensional point clouds by using a preset splicing algorithm to obtain a complete point cloud of the space to be detected;
and carrying out downsampling treatment on the complete point cloud to obtain the full-space point cloud of the space to be detected.
Preferably, according to the fall detection method provided by the present invention, the dividing the full-space point cloud based on the curvature feature vector to obtain a human body point cloud includes:
performing dimension reduction processing on the curvature feature vector by using a preset principal component analysis algorithm to obtain a target feature vector corresponding to the full-space point cloud;
carrying out recognition processing on the target feature vector by using a preset support vector machine recognition algorithm to obtain a human feature vector;
And based on the human body feature vector, carrying out segmentation processing on the full-space point cloud to obtain the human body point cloud.
Preferably, according to the fall detection method provided by the present invention, the dividing the full-space point cloud based on the human body feature vector to obtain the human body point cloud includes:
based on the human body feature vector, screening the target feature vector to obtain an environment feature vector;
performing smoothing filtering processing based on position weighting on the human body characteristic vector and the environment characteristic vector to obtain a target human body characteristic vector and a target environment characteristic vector;
and according to the target human body feature vector and the target environment feature vector, dividing the full-space point cloud to obtain the human body point cloud corresponding to the target human body feature vector.
Preferably, according to the fall detection method provided by the present invention, the human body characteristic information at least includes: human body barycentric coordinate information;
judging whether the space to be detected has falling behaviors according to the human body characteristic information comprises the following steps:
performing first comparison processing on the human body barycentric coordinate information and a preset barycentric coordinate threshold value;
And if the barycentric coordinate information of the human body is smaller than the preset barycentric coordinate threshold value, determining that the falling behavior exists in the space to be detected.
Preferably, according to the fall detection method provided by the present invention, the human body characteristic information at least includes: global feature vector histogram information;
judging whether the space to be detected has falling behaviors according to the human body characteristic information comprises the following steps:
performing fall detection processing on the global feature vector histogram information by using a preset support vector machine;
if the falling state information is detected, determining that the falling behavior exists in the space to be detected.
Preferably, according to the fall detection method provided by the invention,
the human body characteristic information at least comprises: human body point cloud envelope box information;
the full-space point cloud at least comprises: a ground point cloud;
judging whether the space to be detected has falling behaviors according to the human body characteristic information comprises the following steps:
under the condition that the extraction range of the ground point cloud is determined according to a preset distance filter, extracting initial plane information in the extraction range according to a preset plane extraction algorithm;
Performing plane fitting processing on the initial plane information to obtain target plane information;
extracting normal vectors of the target plane information to obtain a first vector, and extracting direction vectors of the human body point cloud envelope box information in the long side direction to obtain a second vector;
acquiring measurement information of an included angle between the first vector and the second vector;
comparing the included angle measurement information with a preset measurement information threshold value;
and if the included angle measurement information is larger than the preset measurement information threshold value, determining that the falling behavior exists in the space to be detected.
Preferably, according to the method for detecting a fall provided by the present invention, the sending, to a user terminal, fall information corresponding to the fall behavior when it is determined that the space to be detected has the fall behavior includes:
under the condition that the falling behavior exists in the space to be detected, sending an inquiry command to a preset voice interaction terminal, so that the voice interaction terminal inquires a falling user according to the inquiry command;
and in a preset period, if a falling confirmation instruction fed back by the voice interaction terminal is received or any instruction fed back by the voice interaction terminal is not received, sending falling information corresponding to the falling behaviors to the user terminal.
In a second aspect, the invention also provides a fall detection apparatus, the apparatus comprising:
the acquisition module is used for acquiring the full-space point cloud of the space to be detected;
the vector extraction module is used for carrying out feature extraction processing on the full-space point cloud and extracting curvature feature vectors corresponding to the full-space point cloud;
the segmentation module is used for carrying out segmentation processing on the full-space point cloud based on the curvature characteristic vector to obtain a human body point cloud;
the feature extraction module is used for extracting the human body point cloud and extracting human body feature information corresponding to the human body point cloud;
the judging module is used for judging whether the space to be detected has falling behaviors according to the human body characteristic information;
and the early warning module is used for sending the falling information corresponding to the falling behaviors to the user terminal under the condition that the falling behaviors exist in the space to be detected.
In a third aspect, the invention also provides an electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the steps of any one of the fall detection methods described above when the program is executed.
In a fourth aspect, the invention also provides a non-transitory computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of a fall detection method as described in any of the above.
In a fifth aspect, the invention also provides a computer program product comprising a computer program which, when executed by a processor, implements the steps of a fall detection method as described in any of the above.
The invention provides a fall detection method, a fall detection device, an electronic device and a storage medium. A full-space point cloud of a space to be detected is acquired; feature extraction processing is performed on the full-space point cloud to extract curvature feature vectors corresponding to the full-space point cloud; the full-space point cloud is segmented based on the curvature feature vectors to obtain a human body point cloud; human body characteristic information corresponding to the human body point cloud is extracted; whether a falling behavior exists in the space to be detected is judged according to the human body characteristic information; and, when it is determined that a falling behavior exists in the space to be detected, falling information corresponding to the falling behavior is sent to a user terminal. The embodiments provided by the invention can detect more accurately whether a human body has fallen, improve the fall recognition rate, reduce the probability of false early warnings, and detect and warn of human falls in a timely manner.
Drawings
In order to more clearly illustrate the technical solutions of the present invention or of the prior art, the drawings used in the description of the embodiments or of the prior art are briefly introduced below. The drawings described below illustrate some embodiments of the invention; other drawings can be obtained from them by a person skilled in the art without inventive effort.
Fig. 1 is a schematic flow chart of a fall detection method according to the present invention;
Fig. 2 is a schematic flow chart of step S100 in Fig. 1 according to the present invention;
Fig. 3 is a schematic flow chart of step S300 in Fig. 1 according to the present invention;
Fig. 4 is a flow chart of step S330 in Fig. 3 according to the present invention;
Fig. 5 is a schematic flow chart of step S600 in Fig. 1 according to the present invention;
Fig. 6 is a schematic view of a spatial scenario in which a fall detection method is applied;
Fig. 7 is a schematic structural diagram of a fall detection device according to the present invention;
Fig. 8 is a schematic structural diagram of an electronic device provided by the present invention.
Reference numerals:
610: a point cloud terminal; 620: a voice interaction terminal; 630: and controlling the terminal.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present invention more apparent, the technical solutions of the present invention are described clearly and completely below with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, of the embodiments of the invention. All other embodiments obtained by those skilled in the art based on the embodiments of the invention without inventive effort fall within the scope of the invention.
With the development of 3D technology, the imaging quality of 3D point cloud cameras has become increasingly stable. The number of points in a 3D point cloud ranges from tens to billions; whether sparse or dense, a point cloud can faithfully restore the layout of a real scene and represent the color, shape and position of each object in the scene. 3D point clouds are widely used in fields such as scene reconstruction, SLAM navigation, unmanned aerial vehicle navigation, autonomous driving, and robot hand-eye servoing.
A 3D point cloud contains not only the color information of each object in the scene but also its position and shape characteristics, so it restores the real scene more accurately. Because the corresponding algorithms are relatively open, 3D point clouds can be applied to fall detection of persons in the home environment, thereby addressing the poor stability and low recognition rate caused by the complex and changeable home environment.
A fall detection method, apparatus, electronic device, and storage medium of the present invention are described below with reference to fig. 1-8.
A fall detection method according to an embodiment of the present invention will be described first, specifically by the following examples.
Fig. 1 is a schematic flow chart of a fall detection method according to an embodiment of the present invention, and the fall detection method may include, but is not limited to, steps S100 to S600.
S100, acquiring a full space point cloud of a space to be detected;
S200, carrying out feature extraction processing on the full-space point cloud, and extracting curvature feature vectors corresponding to the full-space point cloud;
S300, based on the curvature feature vector, dividing the full-space point cloud to obtain a human body point cloud;
S400, extracting the human body point cloud, and extracting human body characteristic information corresponding to the human body point cloud;
S500, judging whether falling behaviors exist in the space to be detected according to the human body characteristic information;
and S600, sending falling information corresponding to the falling behaviors to a user terminal under the condition that the falling behaviors exist in the space to be detected.
In step S100 of some embodiments, a full-space point cloud of the space to be detected is acquired. The specific implementation steps may be: collecting a plurality of three-dimensional point clouds of the space to be detected through a preset point cloud terminal; performing splicing processing on the plurality of three-dimensional point clouds by using a preset splicing algorithm to obtain a complete point cloud of the space to be detected; and performing downsampling processing on the complete point cloud to obtain the full-space point cloud of the space to be detected.
It should be noted that point cloud data are generally acquired by a 3D scanning device such as a lidar, which obtains information about a large number of points in space, including XYZ position information, RGB color information and intensity information, yielding a multi-dimensional, complex data set. Compared with 2D images, 3D point cloud data provide rich geometry, shape and scale information, and are not easily affected by changes in illumination intensity or by occlusion from other objects. A 3D point cloud therefore gives a good picture of the environment around the 3D point cloud camera.
It should be further noted that, in the embodiment of the present invention, the point cloud acquired by the 3D point cloud camera contains only the home environment and the user's position information, and does not contain color information.
In step S200 of some embodiments, feature extraction processing is performed on the full-space point cloud, and curvature feature vectors corresponding to the full-space point cloud are extracted.
It can be understood that, after the full-space point cloud of the space to be detected is obtained in step S100, the specific implementation steps may be: first, feature extraction processing is performed on the full-space point cloud to extract the curvature feature vectors corresponding to the full-space point cloud; these curvature feature vectors are then used to segment the full-space point cloud to obtain the human body point cloud.
The curvature feature vector includes at least: principal curvature, Gaussian curvature, mean curvature, a fast point feature histogram, and rotation image features, forming a 190-dimensional feature vector in total.
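By way of illustration only (this sketch is not part of the original disclosure), a per-point curvature proxy can be estimated from the eigenvalues of the local covariance matrix and combined with a 33-dimensional FPFH descriptor computed with the Open3D library; the search radius is an assumed parameter, and the full 190-dimensional feature layout listed above is not reproduced here.

    import numpy as np
    import open3d as o3d

    def curvature_like_features(pcd: o3d.geometry.PointCloud, radius: float = 0.1) -> np.ndarray:
        # Sketch: a scalar "surface variation" curvature proxy per point plus a 33-dim FPFH
        # descriptor; the patent's full 190-dim vector (principal/Gaussian/mean curvature,
        # FPFH, rotation image) is not reproduced.
        pts = np.asarray(pcd.points)
        tree = o3d.geometry.KDTreeFlann(pcd)
        variation = np.zeros(len(pts))
        for i, p in enumerate(pts):
            _, idx, _ = tree.search_radius_vector_3d(p, radius)
            if len(idx) < 4:
                continue
            neigh = pts[np.asarray(idx)]
            eigvals = np.linalg.eigvalsh(np.cov(neigh.T))      # ascending eigenvalues
            variation[i] = eigvals[0] / max(eigvals.sum(), 1e-12)
        pcd.estimate_normals(o3d.geometry.KDTreeSearchParamRadius(radius))
        fpfh = o3d.pipelines.registration.compute_fpfh_feature(
            pcd, o3d.geometry.KDTreeSearchParamHybrid(radius=radius, max_nn=50))
        # One row per point: [curvature proxy, 33-dim FPFH]
        return np.hstack([variation[:, None], np.asarray(fpfh.data).T])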
In step S300 of some embodiments, the full-space point cloud is segmented based on the curvature feature vector, so as to obtain a human body point cloud.
It may be understood that, after performing the feature extraction processing on the full-space point cloud in step S200 and extracting the curvature feature vector corresponding to the full-space point cloud, the specific performing steps may be: performing dimension reduction processing on the curvature feature vector by using a preset principal component analysis algorithm to obtain a target feature vector corresponding to the full-space point cloud;
carrying out recognition processing on the target feature vector by using a preset support vector machine recognition algorithm to obtain a human feature vector;
and based on the human body feature vector, carrying out segmentation processing on the full-space point cloud to obtain the human body point cloud.
In step S400 of some embodiments, the human body point cloud is subjected to extraction processing, and human body feature information corresponding to the human body point cloud is extracted.
It may be appreciated that, after the step S300 is performed to segment the full-space point cloud based on the curvature feature vector to obtain the human point cloud, the specific performing steps may be:
Firstly, when a plurality of users are present in the space to be detected, in order to eliminate the influence of multiple users on human body fall detection, a Gaussian mixture model clustering recognition algorithm is further adopted to identify single-person point clouds within the human body point cloud.
After a single-person point cloud is identified, outlier removal is performed on it, since the human body point cloud may contain some outliers.
Each single-person point cloud is then processed to extract the human body characteristic information corresponding to it.
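A minimal sketch of this clustering and outlier-removal step, assuming the scikit-learn GaussianMixture implementation and Open3D's statistical outlier filter; the number of components and the filter parameters are assumed values, not ones specified by the disclosure.

    import numpy as np
    import open3d as o3d
    from sklearn.mixture import GaussianMixture

    def split_into_single_persons(human_pcd: o3d.geometry.PointCloud, max_persons: int = 3):
        # Cluster the merged human point cloud into single-person clouds with a GMM,
        # then remove statistical outliers from each cluster.
        pts = np.asarray(human_pcd.points)
        labels = GaussianMixture(n_components=max_persons, covariance_type="full",
                                 random_state=0).fit_predict(pts)
        persons = []
        for k in range(max_persons):
            idx = np.where(labels == k)[0]
            if len(idx) == 0:
                continue
            single = human_pcd.select_by_index(idx.tolist())
            single, _ = single.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)
            persons.append(single)
        return persons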
It should be noted that the human body characteristic information at least includes one of the following: human body barycentric coordinate information, global feature vector histogram information and human body point cloud envelope box information.
In step S500 of some embodiments, it is determined whether a falling behavior exists in the space to be detected according to the human body characteristic information.
It may be understood that, after step S400 of extracting the human body point cloud and obtaining the human body characteristic information corresponding to it, the judgment proceeds as follows.
It should be noted that there are three judging modes for determining, according to the human body characteristic information, whether a falling behavior exists in the space to be detected:
The first judging mode is as follows:
when the human body characteristic information at least comprises human body barycentric coordinate information,
performing first comparison processing on the human body barycentric coordinate information and a preset barycentric coordinate threshold value;
and if the barycentric coordinate information of the human body is smaller than the preset barycentric coordinate threshold value, determining that the falling behavior exists in the space to be detected.
The second judging mode is as follows:
when the human body characteristic information at least comprises global feature vector histogram information,
performing fall detection processing on the global feature vector histogram information by using a preset support vector machine;
if the falling state information is detected, determining that the falling behavior exists in the space to be detected.
The third judging mode is as follows:
when the human body characteristic information at least comprises human body point cloud envelope box information, and
the full-space point cloud at least comprises: a ground point cloud;
under the condition that the extraction range of the ground point cloud is determined according to a preset distance filter, extracting initial plane information in the extraction range according to a preset plane extraction algorithm;
performing plane fitting processing on the initial plane information to obtain target plane information;
extracting normal vectors of the target plane information to obtain a first vector, and extracting direction vectors of the human body point cloud envelope box information in the long side direction to obtain a second vector;
Acquiring measurement information of an included angle between the first vector and the second vector;
comparing the included angle measurement information with a preset measurement information threshold value;
and if the included angle measurement information is larger than the preset measurement information threshold value, determining that the falling behavior exists in the space to be detected.
By determining, through the plurality of judging modes described above, whether a user in the space to be detected exhibits a falling behavior, the accuracy of the fall detection method can be greatly improved and the error rate reduced.
In step S600 of some embodiments, if it is determined that the space to be detected has the fall behavior, the fall information corresponding to the fall behavior is sent to the user terminal.
It may be appreciated that, after step S500 of judging whether a falling behavior exists in the space to be detected according to the human body characteristic information, the specific implementation steps may be: when it is determined that a falling behavior exists in the space to be detected, sending an inquiry command to a preset voice interaction terminal so that the voice interaction terminal queries the fallen user according to the inquiry command; and, within a preset period, if a fall confirmation instruction fed back by the voice interaction terminal is received, or if no instruction fed back by the voice interaction terminal is received, sending the falling information corresponding to the falling behavior to the user terminal.
The embodiment provided by the invention can more accurately detect whether the human body falls, improve the probability of falling identification, reduce the probability of false early warning, and timely detect and early warn the falling of the human body.
In some embodiments, referring to fig. 2, step S100 may further include, but is not limited to, steps S210 to S230.
S210, acquiring a plurality of three-dimensional point clouds of the space to be detected through a preset point cloud terminal;
S220, performing splicing processing on the plurality of three-dimensional point clouds by using a preset splicing algorithm to obtain a complete point cloud of the space to be detected;
S230, performing downsampling processing on the complete point cloud to obtain the full-space point cloud of the space to be detected.
In step S210 of some embodiments, a plurality of three-dimensional point clouds of the space to be detected are collected by a preset point cloud terminal. It can be understood that the preset point cloud terminal can be a 3D point cloud camera, and a plurality of 3D point cloud cameras are arranged at different positions of the space to be detected, so as to collect a plurality of three-dimensional point clouds in the space to be detected in real time.
Further, in order to eliminate acquisition blind spots of a single 3D point cloud camera, a plurality of 3D point cloud cameras working synchronously are used in the same enclosed space.
Furthermore, the 3D point cloud cameras may be ceiling-mounted, and the acquisition of point cloud data by the plurality of 3D point cloud cameras is fully synchronized.
It should be noted that the space to be detected may be a living environment such as a bedroom, living room, dining room, balcony, kitchen or bathroom, and may also be a scene such as a nursing institution, a community, a hotel or a shop.
It should be noted that the devices or methods for acquiring the three-dimensional point cloud include at least: millimeter-wave radar, lidar, RealSense, Kinect, Xbox, binocular cameras, multi-camera rigs, structured light, 3D ToF cameras, and obtaining a three-dimensional point cloud from multiple 2D images with a Structure from Motion algorithm.
In step S220 of some embodiments, a preset stitching algorithm is utilized to stitch the plurality of three-dimensional point clouds, so as to obtain a complete point cloud of the space to be detected.
It may be appreciated that after the step of collecting the plurality of three-dimensional point clouds of the space to be detected by the preset point cloud terminal in the step S210 is performed, the specific performing steps may be:
The plurality of three-dimensional point clouds collected in step S210 are spliced (i.e. point cloud registration) by using a preset splicing algorithm to obtain the complete point cloud of the space to be detected.
It should be noted that the preset splicing algorithm at least comprises the iterative closest point algorithm (ICP, Iterative Closest Point), the normal distributions transform (NDT, Normal Distribution Transform), and deep learning methods (including PointNetLK, DCP, IDAM, RPM-Net and 3DRegNet).
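As a hedged illustration of the splicing step, the sketch below registers one camera's cloud onto another with Open3D's point-to-point ICP and merges them; a rough initial alignment (for example from the extrinsic calibration of the ceiling-mounted cameras) is assumed to have been applied already, and the correspondence distance is an assumed value.

    import open3d as o3d

    def splice_pair(source: o3d.geometry.PointCloud,
                    target: o3d.geometry.PointCloud,
                    max_corr_dist: float = 0.05) -> o3d.geometry.PointCloud:
        # Point-to-point ICP refines the alignment of `source` onto `target`,
        # then the two clouds are merged into one.
        result = o3d.pipelines.registration.registration_icp(
            source, target, max_corr_dist,
            estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())
        source.transform(result.transformation)
        return target + source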
In step S230 of some embodiments, the complete point cloud is downsampled to obtain the full-space point cloud of the space to be detected.
It may be understood that after the step S220 is performed to perform the stitching process on the plurality of three-dimensional point clouds by using a preset stitching algorithm to obtain the complete point cloud of the space to be detected, the specific performing steps may be:
In order to further reduce the computational complexity, the complete point cloud is downsampled to form a sparse full-space point cloud of the space to be detected.
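An illustrative downsampling sketch using an Open3D voxel grid; the 3 cm voxel size is an assumed trade-off between detail and computational load, not a value from the disclosure.

    import open3d as o3d

    def downsample(complete_pcd: o3d.geometry.PointCloud,
                   voxel_size: float = 0.03) -> o3d.geometry.PointCloud:
        # Voxel-grid downsampling turns the dense spliced cloud into a sparse full-space cloud.
        return complete_pcd.voxel_down_sample(voxel_size=voxel_size)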
In some embodiments, referring to fig. 3, step S300 may further include, but is not limited to, steps S310 to S330.
S310, performing dimension reduction processing on the curvature feature vector by using a preset principal component analysis algorithm to obtain a target feature vector corresponding to the full space point cloud;
S320, carrying out recognition processing on the target feature vector by using a preset support vector machine recognition algorithm to obtain a human feature vector;
And S330, based on the human body feature vector, carrying out segmentation processing on the full-space point cloud to obtain the human body point cloud.
In step S310 of some embodiments, a preset principal component analysis algorithm is used to perform a dimension reduction process on the curvature feature vector, so as to obtain a target feature vector corresponding to the full-space point cloud.
It will be appreciated that the specific implementation steps may be: and performing dimension reduction processing on the curvature feature vector by using a preset principal component analysis algorithm to obtain a target feature vector corresponding to the full-space point cloud.
It should be noted that PCA (Principal Component Analysis) is one of the most widely used data dimensionality reduction algorithms. The main idea of PCA is to map n-dimensional features onto k dimensions; these k dimensions are completely new orthogonal features, also called principal components, reconstructed on the basis of the original n-dimensional features. PCA works by sequentially finding a set of mutually orthogonal axes in the original space, a selection that is closely related to the data themselves. The first new axis is chosen as the direction of maximum variance in the original data; the second new axis is chosen, in the plane orthogonal to the first, as the direction that again maximizes the variance; the third axis is the direction of maximum variance orthogonal to the first two; and so on, until n such axes are obtained. Of the axes obtained in this way, most of the variance is contained in the first k, while the remaining axes contain almost no variance. The remaining axes can therefore be ignored and only the first k axes, which carry the vast majority of the variance, are kept. In effect, this retains the feature dimensions that contain most of the variance while ignoring those whose variance is almost zero, achieving dimensionality reduction of the data features.
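For illustration, the dimension reduction described above could be realized with the scikit-learn PCA implementation; the number of retained components is an assumed value for the sketch.

    import numpy as np
    from sklearn.decomposition import PCA

    def reduce_point_features(point_features: np.ndarray, n_components: int = 20) -> np.ndarray:
        # Project the per-point curvature feature vectors (e.g. 190-dim) onto the
        # first principal components; returns an array of shape (num_points, n_components).
        return PCA(n_components=n_components).fit_transform(point_features)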
In step S320 of some embodiments, a preset support vector machine recognition algorithm is used to perform recognition processing on the target feature vector, so as to obtain a human feature vector.
It may be understood that, after the step of performing the step S310 to perform the dimension reduction processing on the curvature feature vector by using a preset principal component analysis algorithm to obtain the target feature vector corresponding to the full space point cloud, the specific performing step may be to perform the recognition processing on the target feature vector obtained in the step S310 by using a preset support vector machine recognition algorithm to obtain the human feature vector.
It should be noted that, in the support vector machine recognition algorithm, the support vector machine is a generalized linear classifier that performs binary classification of data in a supervised learning manner; its decision boundary is the maximum-margin hyperplane solved from the learning samples. The support vector machine computes the empirical risk using the hinge loss function and adds a regularization term to the optimization problem to control the structural risk, which makes it a classifier with sparsity and robustness.
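A sketch of the recognition step, assuming a scikit-learn SVM trained on annotated per-point feature vectors; the training arrays are hypothetical inputs and the kernel and regularization settings are assumed values.

    import numpy as np
    from sklearn.svm import SVC

    def classify_human_points(features_train: np.ndarray, labels_train: np.ndarray,
                              reduced_features: np.ndarray) -> np.ndarray:
        # Binary SVM: label 1 = human point, label 0 = environment point.
        # Returns the indices of points classified as belonging to the human body.
        clf = SVC(kernel="rbf", C=1.0)
        clf.fit(features_train, labels_train)
        return np.where(clf.predict(reduced_features) == 1)[0]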
In step S330 of some embodiments, the human body point cloud is obtained by performing a segmentation process on the full-space point cloud based on the human body feature vector.
It may be understood that, after the step S320 of performing the recognition processing on the target feature vector by using the preset support vector machine recognition algorithm to obtain the human feature vector, the specific performing steps may be: and screening the target feature vector based on the human feature vector to obtain an environment feature vector, performing smoothing filtering processing based on position weighting on the human feature vector and the environment feature vector to obtain a target human feature vector and a target environment feature vector, and performing segmentation processing on the full-space point cloud according to the target human feature vector and the target environment feature vector to obtain the human point cloud corresponding to the target human feature vector.
In some embodiments, referring to fig. 4, step S330 may further include, but is not limited to, steps S410 to S430.
S410, screening the target feature vector based on the human feature vector to obtain an environment feature vector;
S420, performing smoothing filtering processing based on position weighting on the human body characteristic vector and the environment characteristic vector to obtain a target human body characteristic vector and a target environment characteristic vector;
And S430, according to the target human body feature vector and the target environment feature vector, carrying out segmentation processing on the full-space point cloud to obtain the human body point cloud corresponding to the target human body feature vector.
In step S410 of some embodiments, based on the human feature vector, a filtering process is performed on the target feature vector to obtain an environmental feature vector.
It can be understood that the target feature vector includes at least the human body feature vector and the environment feature vector; the environment feature vector is obtained by filtering the human body feature vector obtained in step S320 out of the target feature vector.
In step S420 of some embodiments, a smoothing filtering process based on position weighting is performed on the human body feature vector and the environmental feature vector, so as to obtain a target human body feature vector and a target environmental feature vector.
It may be understood that, after the step of performing the step S410 to filter the target feature vector based on the human feature vector to obtain the environmental feature vector, the specific implementation steps may be: and performing smoothing filtering processing based on position weighting on the human body characteristic vector and the environment characteristic vector to obtain a target human body characteristic vector and a target environment characteristic vector.
In step S430 of some embodiments, the full-space point cloud is segmented according to the target human body feature vector and the target environment feature vector, so as to obtain the human body point cloud corresponding to the target human body feature vector.
It may be understood that, after the step S420 of performing the smoothing filtering processing based on the position weighting on the human body feature vector and the environment feature vector to obtain the target human body feature vector and the target environment feature vector, the specific performing steps may be: and according to the target human body feature vector and the target environment feature vector, dividing the full-space point cloud to obtain the human body point cloud corresponding to the target human body feature vector.
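The disclosure does not specify the exact form of the position-weighted smoothing filter; one plausible reading, sketched below under that assumption, replaces each point's human/environment score with a Gaussian distance-weighted average over its spatial neighbours, which suppresses isolated misclassified points.

    import numpy as np
    from scipy.spatial import cKDTree

    def position_weighted_smoothing(points: np.ndarray, is_human: np.ndarray,
                                    radius: float = 0.1, sigma: float = 0.05) -> np.ndarray:
        # points: (N, 3) coordinates; is_human: (N,) boolean labels from the SVM.
        # Returns a smoothed boolean mask selecting the human point cloud.
        tree = cKDTree(points)
        scores = is_human.astype(float)
        smoothed = np.empty_like(scores)
        for i, p in enumerate(points):
            idx = tree.query_ball_point(p, radius)
            d = np.linalg.norm(points[idx] - p, axis=1)
            w = np.exp(-(d ** 2) / (2.0 * sigma ** 2))
            smoothed[i] = np.dot(w, scores[idx]) / w.sum()
        return smoothed > 0.5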
In some embodiments, the human body characteristic information includes at least: human body barycentric coordinate information;
judging whether the space to be detected has falling behaviors according to the human body characteristic information comprises the following steps:
performing first comparison processing on the human body barycentric coordinate information and a preset barycentric coordinate threshold value;
and if the barycentric coordinate information of the human body is smaller than the preset barycentric coordinate threshold value, determining that the falling behavior exists in the space to be detected.
It may be appreciated that in some embodiments, after the step of extracting the human body characteristic information corresponding to the human body point cloud by performing the extraction processing on the human body point cloud, performing a first comparison processing on the extracted human body barycentric coordinate information and a preset barycentric coordinate threshold, and if the human body barycentric coordinate information is smaller than the preset barycentric coordinate threshold, determining that a single user in the space to be detected has a falling behavior.
Further, the comparison is made between the ordinate (vertical coordinate) value of the human body barycentric coordinate information and the ordinate threshold of the preset barycentric coordinate threshold.
Naturally, if the ordinate value of the human barycentric coordinate information is greater than or equal to the ordinate threshold of the preset barycentric coordinate threshold, the three-dimensional point cloud is acquired again.
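A minimal sketch of this first judging mode; the upward axis convention and the 0.4 m threshold are assumptions made for illustration, since the disclosure does not fix numeric values.

    import numpy as np

    def centroid_below_threshold(human_points: np.ndarray, z_threshold: float = 0.4) -> bool:
        # Compare the vertical coordinate of the body centroid with a preset threshold;
        # True suggests a possible fall, False triggers re-acquisition of the point cloud.
        centroid = human_points.mean(axis=0)
        return float(centroid[2]) < z_threshold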
In some embodiments, the human body characteristic information includes at least: global feature vector histogram information;
judging whether the space to be detected has falling behaviors according to the human body characteristic information comprises the following steps:
performing fall detection processing on the global feature vector histogram information by using a preset support vector machine;
if the falling state information is detected, determining that the falling behavior exists in the space to be detected.
It may be appreciated that in some embodiments, after the step of extracting the human body feature information corresponding to the human body point cloud by performing extraction processing on the human body point cloud, a fall detection process is performed on the extracted global feature vector histogram information by using a preset support vector machine, and if fall state information is detected, it is determined that the fall behavior exists in the space to be detected.
If no fall status information is detected, the three-dimensional point cloud is re-acquired.
In some embodiments, the human body characteristic information includes at least: human body point cloud envelope box information;
the full-space point cloud at least comprises: a ground point cloud;
judging whether the space to be detected has falling behaviors according to the human body characteristic information comprises the following steps:
under the condition that the extraction range of the ground point cloud is determined according to a preset distance filter, extracting initial plane information in the extraction range according to a preset plane extraction algorithm;
performing plane fitting processing on the initial plane information to obtain target plane information;
extracting normal vectors of the target plane information to obtain a first vector, and extracting direction vectors of the human body point cloud envelope box information in the long side direction to obtain a second vector;
Acquiring measurement information of an included angle between the first vector and the second vector;
comparing the included angle measurement information with a preset measurement information threshold value;
and if the included angle measurement information is larger than the preset measurement information threshold value, determining that the falling behavior exists in the space to be detected.
In some embodiments, after the step of extracting the human body point cloud and extracting the human body feature information corresponding to the human body point cloud, first extracting initial plane information in the extraction range according to a preset plane extraction algorithm under the condition that the extraction range of the ground point cloud is determined according to a preset distance filter, performing plane fitting processing on the initial plane information to obtain target plane information, extracting a normal vector of the target plane information to obtain a first vector, extracting a direction vector of a long side direction of the human body point cloud envelope box information to obtain a second vector, and obtaining angle measurement information between the first vector and the second vector.
Further, comparing the included angle measurement information with a preset measurement information threshold, and if the included angle measurement information is larger than the preset measurement information threshold, determining that the falling behavior exists in the space to be detected.
It should be noted that if the included angle measurement information is not greater than the preset measurement information threshold, the three-dimensional point cloud is acquired again.
Further, instead of extracting the ground point cloud, fitting a plane to it and extracting the ground plane normal vector, the point cloud of a certain wall surface may be extracted, a plane fitted to that wall point cloud and its normal vector extracted; alternatively, the plane normal vector may be given directly after calibration of the 3D point cloud camera.
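A sketch of this third judging mode using Open3D's RANSAC plane fitting and oriented bounding box; the RANSAC parameters and the 60-degree threshold are assumed values, and in the disclosed method a distance filter would first restrict the cloud to the region where the ground lies.

    import numpy as np
    import open3d as o3d

    def fall_by_body_orientation(ground_region_pcd: o3d.geometry.PointCloud,
                                 human_pcd: o3d.geometry.PointCloud,
                                 angle_threshold_deg: float = 60.0) -> bool:
        # First vector: normal of the fitted ground plane (RANSAC plane extraction).
        plane_model, _ = ground_region_pcd.segment_plane(distance_threshold=0.02,
                                                         ransac_n=3,
                                                         num_iterations=1000)
        normal = np.asarray(plane_model[:3])
        normal /= np.linalg.norm(normal)
        # Second vector: long-side direction of the human point cloud's envelope (bounding) box.
        obb = human_pcd.get_oriented_bounding_box()
        long_axis = np.asarray(obb.R)[:, int(np.argmax(obb.extent))]
        # Angle between the two vectors; a large angle means the body lies close to the ground plane.
        cos_angle = abs(float(np.dot(normal, long_axis)))
        angle_deg = float(np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0))))
        return angle_deg > angle_threshold_deg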
In some embodiments, referring to fig. 5, step S600 may further include, but is not limited to, steps S510 to S520.
S510, under the condition that the falling behavior exists in the space to be detected, sending an inquiry command to a preset voice interaction terminal, so that the voice interaction terminal inquires a falling user according to the inquiry command;
S520, if a falling confirmation instruction fed back by the voice interaction terminal is received or any instruction fed back by the voice interaction terminal is not received in a preset period, the falling information corresponding to the falling behavior is sent to the user terminal.
In step S510 of some embodiments, if it is determined that the space to be detected has the falling behavior, a query command is sent to a preset voice interaction terminal, so that the voice interaction terminal queries a falling user according to the query command.
It can be appreciated that, when it is determined according to any of the above modes that a user in the space to be detected has a falling behavior, the controller sends an inquiry command to a preset voice interaction terminal, and the voice interaction terminal issues a query such as "Did you really fall?".
It should be noted that the voice interaction terminal may be an intelligent sound box.
The voice interaction terminal, the controller and the 3D point cloud camera can communicate with each other through Wi-Fi networking to transmit information.
In step S520 of some embodiments, if a fall confirmation instruction fed back by the voice interaction terminal is received or any instruction fed back by the voice interaction terminal is not received in a preset period, the fall information corresponding to the fall behavior is sent to the user terminal.
It may be understood that, after step S510 is performed and it is determined that the space to be detected has the falling behavior, a query command is sent to a preset voice interaction terminal, so that the voice interaction terminal performs a query to a falling user according to the query command, the specific performing steps may be:
When the voice interaction terminal, i.e. the smart speaker, receives voice information fed back by the user within a preset period, for example within a preset time of 3 minutes, it converts the voice information into a user fall confirmation instruction and transmits it to the controller; when the controller receives the user fall confirmation instruction fed back by the voice interaction terminal, it sends the falling information corresponding to the falling behavior to the other user terminals so as to give a fall warning.
Of course, if within the preset time of 3 minutes the voice interaction terminal, i.e. the smart speaker, receives voice information fed back by the user such as "I did not fall", it converts this into an instruction confirming that the user has not fallen and transmits it to the controller; when the controller receives this instruction, the 3D point cloud cameras continue to collect a plurality of three-dimensional point clouds of the space to be detected.
Further, if after the preset time of 3 minutes the voice interaction terminal, i.e. the smart speaker, has not received any voice information fed back by the user, it directly transmits a fall warning instruction to the controller; when the controller receives the fall warning instruction fed back by the voice interaction terminal, it sends the falling information corresponding to the falling behavior to the other user terminals so as to give a fall warning.
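The interaction flow above can be summarized by the following sketch, in which ask_user, wait_for_reply and send_alert are hypothetical callables wrapping the voice interaction terminal, the controller and the user terminal, and the 3-minute window follows the example given in the text.

    def confirm_and_alert(ask_user, wait_for_reply, send_alert, timeout_s: float = 180.0) -> bool:
        # Ask the fallen user via the voice interaction terminal, then wait up to the preset period.
        ask_user("A fall was detected. Did you really fall?")
        reply = wait_for_reply(timeout=timeout_s)          # None if nothing is heard in time
        if reply is not None and "did not fall" in reply:
            return False                                   # denial: resume point cloud collection
        # Confirmation or silence: send the fall information to the other user terminals.
        send_alert("Fall warning: the user confirmed a fall or did not respond in time")
        return True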
In some embodiments, referring to fig. 6, a schematic diagram of a spatial scenario to which the fall detection method is applied is provided by the present invention. The point cloud terminal 610 acquires three-dimensional point clouds of the space to be detected in real time; a preset splicing algorithm is used to splice the plurality of three-dimensional point clouds to obtain a complete point cloud of the space to be detected; the complete point cloud is then downsampled to obtain the full-space point cloud of the space to be detected; finally, the full-space point cloud is preprocessed and the human body point cloud is obtained from it.
Extracting the human body point cloud, and extracting human body characteristic information corresponding to the human body point cloud, wherein the human body characteristic information at least comprises: human body point cloud envelope box information; the full-space point cloud at least comprises: a ground point cloud.
Under the condition that the extraction range of the ground point cloud is determined according to a preset distance filter, extracting initial plane information in the extraction range according to a preset plane extraction algorithm, and performing plane fitting processing on the initial plane information to obtain target plane information; extracting normal vectors of the target plane information to obtain a first vector oq, extracting direction vectors of the human body point cloud envelope box information in the long side direction to obtain a second vector op, obtaining angle measurement information between the first vector oq and the second vector op, comparing the angle measurement information with a preset measurement information threshold, and determining that the falling behavior exists in the space to be detected if the angle measurement information is larger than the preset measurement information threshold.
The control terminal 630 sends an inquiry command to a preset voice interaction terminal 620 when determining that the space to be detected has the falling action, so that the voice interaction terminal 620 inquires the falling user according to the inquiry command, and if a falling confirmation command fed back by the voice interaction terminal 620 is received or any command fed back by the voice interaction terminal 620 is not received in a preset period, the falling information corresponding to the falling action is sent to the user terminal.
The invention provides a fall detection method comprising: acquiring a full-space point cloud of a space to be detected; performing feature extraction processing on the full-space point cloud to extract curvature feature vectors corresponding to the full-space point cloud; segmenting the full-space point cloud based on the curvature feature vectors to obtain the human body point cloud; extracting human body characteristic information corresponding to the human body point cloud; judging whether a falling behavior exists in the space to be detected according to the human body characteristic information; and, when it is determined that a falling behavior exists in the space to be detected, sending falling information corresponding to the falling behavior to a user terminal. The embodiments provided by the invention can detect more accurately whether a human body has fallen, improve the fall recognition rate, reduce the probability of false early warnings, and detect and warn of human falls in a timely manner.
A fall detection device provided by the present invention is described below; the fall detection device described below and the fall detection method described above may be referred to in correspondence with each other.
Referring to fig. 7, a fall detection apparatus, the apparatus comprising:
an acquisition module 710, configured to acquire a full space point cloud of a space to be detected;
the vector extraction module 720 is configured to perform feature extraction processing on the full-space point cloud, and extract a curvature feature vector corresponding to the full-space point cloud;
the segmentation module 730 is configured to perform segmentation processing on the full-space point cloud based on the curvature feature vector, so as to obtain a human point cloud;
the feature extraction module 740 is configured to perform extraction processing on the human body point cloud, and extract human body feature information corresponding to the human body point cloud;
the judging module 750 is configured to judge whether a falling behavior exists in the space to be detected according to the human body feature information;
and the early warning module 760 is configured to send, to the user terminal, fall information corresponding to the fall behavior when it is determined that the fall behavior exists in the space to be detected.
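Read as software, the module structure of fig. 7 corresponds roughly to a simple processing pipeline; the class below is only an assumed arrangement of the six modules named above, not the device's actual implementation.

```python
class FallDetector:
    """Illustrative wiring of the modules of fig. 7: acquisition -> curvature features ->
    segmentation -> human body features -> judgement -> early warning."""

    def __init__(self, acquire, extract_curvature, segment,
                 extract_body_features, judge_fall, send_warning):
        # each argument is a callable standing in for one module of the device
        self.acquire = acquire                               # module 710
        self.extract_curvature = extract_curvature           # module 720
        self.segment = segment                               # module 730
        self.extract_body_features = extract_body_features   # module 740
        self.judge_fall = judge_fall                         # module 750
        self.send_warning = send_warning                     # module 760

    def run_once(self):
        full_cloud = self.acquire()
        curvature = self.extract_curvature(full_cloud)
        human_cloud = self.segment(full_cloud, curvature)
        body_features = self.extract_body_features(human_cloud)
        if self.judge_fall(body_features):
            self.send_warning(body_features)
```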
In the fall detection device provided by the invention, the acquisition module 710 is specifically configured to: collect a plurality of three-dimensional point clouds of the space to be detected through a preset point cloud terminal;
perform splicing processing on the plurality of three-dimensional point clouds by using a preset splicing algorithm to obtain a complete point cloud of the space to be detected;
and perform downsampling processing on the complete point cloud to obtain the full-space point cloud of the space to be detected.
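As a rough sketch (assuming the individual point clouds are already registered into a common coordinate frame, which the embodiment leaves to its preset splicing algorithm), the splicing step can be approximated by concatenation and the downsampling step by a voxel grid; the 0.05 m voxel size is an assumed value.

```python
import numpy as np

def splice_point_clouds(clouds):
    """Merge pre-registered point clouds (a list of N_i x 3 arrays) into one array."""
    return np.vstack(clouds)

def voxel_downsample(points, voxel_size=0.05):
    """Keep one representative point (the centroid) per occupied voxel."""
    keys = np.floor(points / voxel_size).astype(np.int64)
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    counts = np.bincount(inverse)
    out = np.zeros((counts.size, 3))
    for dim in range(3):
        out[:, dim] = np.bincount(inverse, weights=points[:, dim]) / counts
    return out

# usage sketch: full_space_cloud = voxel_downsample(splice_point_clouds([cloud_a, cloud_b]))
```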
In the fall detection device provided by the invention, the segmentation module 730 is specifically configured to: perform dimension reduction processing on the curvature feature vector by using a preset principal component analysis algorithm to obtain a target feature vector corresponding to the full-space point cloud;
perform recognition processing on the target feature vector by using a preset support vector machine recognition algorithm to obtain a human body feature vector;
and perform segmentation processing on the full-space point cloud based on the human body feature vector to obtain the human body point cloud.
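A hedged sketch of this segmentation with scikit-learn follows; the per-point feature layout and the 1 = human / 0 = environment labels are illustrative assumptions, and the PCA and support vector machine are assumed to have been fitted offline on labelled point clouds.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC

def segment_human_points(points, point_features, pca: PCA, svm: SVC):
    """points: N x 3 coordinates; point_features: N x D curvature-based feature vectors.
    Returns the points classified as belonging to the human body and the boolean mask."""
    target_features = pca.transform(point_features)   # dimension reduction
    labels = svm.predict(target_features)              # 1 = human, 0 = environment (assumed)
    human_mask = labels == 1
    return points[human_mask], human_mask
```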
In the fall detection device provided by the invention, the segmentation module 730 is further specifically configured to: screen the target feature vector based on the human body feature vector to obtain an environment feature vector;
perform position-weighted smoothing filtering processing on the human body feature vector and the environment feature vector to obtain a target human body feature vector and a target environment feature vector;
and perform segmentation processing on the full-space point cloud according to the target human body feature vector and the target environment feature vector to obtain the human body point cloud corresponding to the target human body feature vector.
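One plausible reading of the position-weighted smoothing filtering is a neighbourhood average whose weights decay with the Euclidean distance between points; the sketch below reflects that assumption only and is not the embodiment's exact filter.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def position_weighted_smoothing(points, features, k=10, sigma=0.1):
    """Smooth per-point feature vectors with Gaussian weights on 3-D distance (assumed scheme)."""
    dists, idx = NearestNeighbors(n_neighbors=k).fit(points).kneighbors(points)
    weights = np.exp(-(dists ** 2) / (2.0 * sigma ** 2))       # closer points count more
    weights /= weights.sum(axis=1, keepdims=True)
    return np.einsum('nk,nkd->nd', weights, features[idx])      # weighted neighbourhood average
```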
In the fall detection device provided by the invention, the human body characteristic information at least comprises: human body barycentric coordinate information;
the judging module 750 is specifically configured to perform a first comparison process on the barycentric coordinate information of the human body and a preset barycentric coordinate threshold value;
and if the barycentric coordinate information of the human body is smaller than the preset barycentric coordinate threshold value, determining that the falling behavior exists in the space to be detected.
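In code, this check reduces to comparing one coordinate of the human point cloud's centre of mass with a threshold; the reading that the relevant component is the height above the ground, and the 0.4 m value, are assumptions made for illustration.

```python
import numpy as np

def fall_by_centroid(human_points, height_threshold=0.4):
    """human_points: N x 3 array with the z axis pointing up (assumed convention).
    A centre of mass below the threshold suggests the body is lying on the ground."""
    return human_points.mean(axis=0)[2] < height_threshold
```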
In the fall detection device provided by the invention, the human body characteristic information at least comprises: global feature vector histogram information;
the judging module 750 is specifically configured to perform fall detection processing on the global feature vector histogram information by using a preset support vector machine;
if the falling state information is detected, determining that the falling behavior exists in the space to be detected.
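Assuming the global feature vector histogram is a fixed-length descriptor of the human point cloud, a pre-trained support vector machine can classify it as fallen or not fallen; the height-histogram descriptor below is only a stand-in for the embodiment's descriptor, and the 1 = fallen label is an assumption.

```python
import numpy as np
from sklearn.svm import SVC

def global_histogram(human_points, bins=30):
    """Stand-in global descriptor: a normalised histogram of point heights (0 m to 2 m)."""
    hist, _ = np.histogram(human_points[:, 2], bins=bins, range=(0.0, 2.0))
    return hist / max(hist.sum(), 1)

def fall_by_global_histogram(human_points, svm: SVC):
    """svm is assumed to be trained offline on such descriptors with labels 1 = fallen, 0 = not."""
    return svm.predict([global_histogram(human_points)])[0] == 1
```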
In the fall detection device provided by the invention, the human body characteristic information at least comprises: human body point cloud envelope box information;
the full-space point cloud at least comprises: a ground point cloud;
the judging module 750 is specifically configured to, when the ground point cloud extraction range is determined according to a preset distance filter, extract initial plane information in the extraction range according to a preset plane extraction algorithm;
Performing plane fitting processing on the initial plane information to obtain target plane information;
extracting normal vectors of the target plane information to obtain a first vector, and extracting direction vectors of the human body point cloud envelope box information in the long side direction to obtain a second vector;
acquiring measurement information of an included angle between the first vector and the second vector;
comparing the included angle measurement information with a preset measurement information threshold value;
and if the included angle measurement information is larger than the preset measurement information threshold value, determining that the falling behavior exists in the space to be detected.
In the fall detection device provided by the invention, the early warning module 760 is specifically configured to: when it is determined that the falling behavior exists in the space to be detected, send an inquiry command to a preset voice interaction terminal, so that the voice interaction terminal queries the fallen user according to the inquiry command;
and, if a falling confirmation instruction fed back by the voice interaction terminal is received, or no instruction fed back by the voice interaction terminal is received within a preset period, send the falling information corresponding to the falling behavior to the user terminal.
The invention provides a fall detection device which acquires a full-space point cloud of a space to be detected; performs feature extraction processing on the full-space point cloud to extract a curvature feature vector corresponding to the full-space point cloud; segments the full-space point cloud based on the curvature feature vector to obtain the human body point cloud; extracts the human body point cloud to extract the human body characteristic information corresponding to the human body point cloud; judges whether a falling behavior exists in the space to be detected according to the human body characteristic information; and, when it is determined that the falling behavior exists in the space to be detected, sends falling information corresponding to the falling behavior to a user terminal. The embodiments provided by the invention can detect whether a human body has fallen more accurately, increase the fall recognition rate, reduce the probability of false early warning, and detect and warn of human falls in a timely manner.
Fig. 8 illustrates a schematic diagram of the physical structure of an electronic device. As shown in fig. 8, the electronic device may include: a processor 810, a communication interface (Communications Interface) 820, a memory 830, and a communication bus 840, wherein the processor 810, the communication interface 820, and the memory 830 communicate with each other via the communication bus 840. The processor 810 can invoke the logic instructions in the memory 830 to perform a fall detection method comprising: acquiring a full-space point cloud of a space to be detected; preprocessing the full-space point cloud to obtain a human body point cloud from the full-space point cloud; extracting the human body point cloud to extract human body characteristic information corresponding to the human body point cloud; judging whether a falling behavior exists in the space to be detected according to the human body characteristic information; and, when it is determined that the falling behavior exists in the space to be detected, sending falling information corresponding to the falling behavior to a user terminal.
Further, the logic instructions in the memory 830 may be implemented in the form of software functional units and may be stored in a computer-readable storage medium when sold or used as a stand-alone product. Based on this understanding, the technical solution of the present invention, in essence, or the part thereof contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product, which is stored in a storage medium and comprises several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
In another aspect, the invention also provides a computer program product comprising a computer program, the computer program being storable on a non-transitory computer-readable storage medium; when the computer program is executed by a processor, the computer is capable of performing the fall detection method provided above, the method comprising: acquiring a full-space point cloud of a space to be detected; preprocessing the full-space point cloud to obtain a human body point cloud from the full-space point cloud; extracting the human body point cloud to extract human body characteristic information corresponding to the human body point cloud; judging whether a falling behavior exists in the space to be detected according to the human body characteristic information; and, when it is determined that the falling behavior exists in the space to be detected, sending falling information corresponding to the falling behavior to a user terminal.
In yet another aspect, the invention provides a non-transitory computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the fall detection method provided above, the method comprising: acquiring a full-space point cloud of a space to be detected; preprocessing the full-space point cloud to obtain a human body point cloud from the full-space point cloud; extracting the human body point cloud to extract human body characteristic information corresponding to the human body point cloud; judging whether a falling behavior exists in the space to be detected according to the human body characteristic information; and, when it is determined that the falling behavior exists in the space to be detected, sending falling information corresponding to the falling behavior to a user terminal.
The apparatus embodiments described above are merely illustrative; the units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units, that is, they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art can understand and implement the present invention without creative effort.
From the above description of the embodiments, it will be apparent to those skilled in the art that the embodiments may be implemented by means of software plus a necessary general hardware platform, or, of course, by means of hardware. Based on this understanding, the foregoing technical solution, in essence, or the part thereof contributing to the prior art, may be embodied in the form of a software product, which may be stored in a computer-readable storage medium such as ROM/RAM, a magnetic disk, or an optical disk, and which comprises several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method described in the respective embodiments or in some parts of the embodiments.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solution of the present invention and are not limiting. Although the invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical schemes described in the foregoing embodiments may still be modified, or some of their technical features may be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (12)

1. A fall detection method, comprising:
acquiring a full space point cloud of a space to be detected;
performing feature extraction processing on the full-space point cloud to extract curvature feature vectors corresponding to the full-space point cloud;
based on the curvature feature vector, dividing the full-space point cloud to obtain a human body point cloud;
extracting the human body point cloud to extract human body characteristic information corresponding to the human body point cloud;
judging whether the space to be detected has falling behaviors according to the human body characteristic information;
and under the condition that the falling behavior exists in the space to be detected, sending falling information corresponding to the falling behavior to a user terminal.
2. A fall detection method as claimed in claim 1, wherein the acquiring a full spatial point cloud of the space to be detected comprises:
collecting a plurality of three-dimensional point clouds of the space to be detected through a preset point cloud terminal;
performing splicing processing on the plurality of three-dimensional point clouds by using a preset splicing algorithm to obtain a complete point cloud of the space to be detected;
and carrying out downsampling treatment on the complete point cloud to obtain the full-space point cloud of the space to be detected.
3. The fall detection method according to claim 1, wherein the dividing the full-space point cloud based on the curvature feature vector to obtain a human point cloud includes:
performing dimension reduction processing on the curvature feature vector by using a preset principal component analysis algorithm to obtain a target feature vector corresponding to the full-space point cloud;
carrying out recognition processing on the target feature vector by using a preset support vector machine recognition algorithm to obtain a human feature vector;
and based on the human body feature vector, carrying out segmentation processing on the full-space point cloud to obtain the human body point cloud.
4. A fall detection method as claimed in claim 3, wherein the dividing the full-space point cloud based on the human feature vector to obtain the human point cloud comprises:
based on the human body feature vector, screening the target feature vector to obtain an environment feature vector;
performing smoothing filtering processing based on position weighting on the human body characteristic vector and the environment characteristic vector to obtain a target human body characteristic vector and a target environment characteristic vector;
And according to the target human body feature vector and the target environment feature vector, dividing the full-space point cloud to obtain the human body point cloud corresponding to the target human body feature vector.
5. A fall detection method as claimed in claim 1, wherein the body characteristic information comprises at least: human body barycentric coordinate information;
judging whether the space to be detected has falling behaviors according to the human body characteristic information comprises the following steps:
performing first comparison processing on the human body barycentric coordinate information and a preset barycentric coordinate threshold value;
and if the barycentric coordinate information of the human body is smaller than the preset barycentric coordinate threshold value, determining that the falling behavior exists in the space to be detected.
6. A fall detection method as claimed in claim 1, wherein the body characteristic information comprises at least: global feature vector histogram information;
judging whether the space to be detected has falling behaviors according to the human body characteristic information comprises the following steps:
performing fall detection processing on the global feature vector histogram information by using a preset support vector machine;
if the falling state information is detected, determining that the falling behavior exists in the space to be detected.
7. A fall detection method as claimed in claim 1, wherein,
the human body characteristic information at least comprises: human body point cloud envelope box information;
the full-space point cloud at least comprises: a ground point cloud;
judging whether the space to be detected has falling behaviors according to the human body characteristic information comprises the following steps:
under the condition that the extraction range of the ground point cloud is determined according to a preset distance filter, extracting initial plane information in the extraction range according to a preset plane extraction algorithm;
performing plane fitting processing on the initial plane information to obtain target plane information;
extracting normal vectors of the target plane information to obtain a first vector, and extracting direction vectors of the human body point cloud envelope box information in the long side direction to obtain a second vector;
acquiring measurement information of an included angle between the first vector and the second vector;
comparing the included angle measurement information with a preset measurement information threshold value;
and if the included angle measurement information is larger than the preset measurement information threshold value, determining that the falling behavior exists in the space to be detected.
8. A method of fall detection according to claim 1, wherein, in the event that it is determined that the space to be detected has the fall behavior, sending fall information corresponding to the fall behavior to a user terminal comprises:
Under the condition that the falling behavior exists in the space to be detected, sending an inquiry command to a preset voice interaction terminal, so that the voice interaction terminal inquires a falling user according to the inquiry command;
and in a preset period, if a falling confirmation instruction fed back by the voice interaction terminal is received or any instruction fed back by the voice interaction terminal is not received, sending falling information corresponding to the falling behaviors to the user terminal.
9. A fall detection device, the device comprising:
the acquisition module is used for acquiring the full-space point cloud of the space to be detected;
the vector extraction module is used for carrying out feature extraction processing on the full-space point cloud and extracting curvature feature vectors corresponding to the full-space point cloud;
the segmentation module is used for carrying out segmentation processing on the full-space point cloud based on the curvature characteristic vector to obtain a human body point cloud;
the feature extraction module is used for extracting the human body point cloud and extracting human body feature information corresponding to the human body point cloud;
the judging module is used for judging whether the space to be detected has falling behaviors according to the human body characteristic information;
And the early warning module is used for sending the falling information corresponding to the falling behaviors to the user terminal under the condition that the falling behaviors exist in the space to be detected.
10. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor performs the steps of the fall detection method as claimed in any one of claims 1 to 8 when the program is executed.
11. A non-transitory computer readable storage medium having stored thereon a computer program, which when executed by a processor performs the steps of a fall detection method as claimed in any of claims 1 to 8.
12. A computer program product comprising a computer program which, when executed by a processor, carries out the steps of a fall detection method as claimed in any one of claims 1 to 8.
CN202210886760.1A 2022-07-26 2022-07-26 Fall detection method and device, electronic equipment and storage medium Pending CN117523652A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210886760.1A CN117523652A (en) 2022-07-26 2022-07-26 Fall detection method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210886760.1A CN117523652A (en) 2022-07-26 2022-07-26 Fall detection method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN117523652A true CN117523652A (en) 2024-02-06

Family

ID=89746182

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210886760.1A Pending CN117523652A (en) 2022-07-26 2022-07-26 Fall detection method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN117523652A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118072472A (en) * 2024-04-18 2024-05-24 深圳市人人壮科技有限公司 Early warning method, device, equipment and storage medium for fall detection

Similar Documents

Publication Publication Date Title
US10198823B1 (en) Segmentation of object image data from background image data
JP6288221B2 (en) Enhanced layer-based object detection by deep convolutional neural networks
US11393212B2 (en) System for tracking and visualizing objects and a method therefor
Işık et al. SWCD: a sliding window and self-regulated learning-based background updating method for change detection in videos
Charfi et al. Optimized spatio-temporal descriptors for real-time fall detection: comparison of support vector machine and Adaboost-based classification
JP6091560B2 (en) Image analysis method
Charfi et al. Definition and performance evaluation of a robust SVM based fall detection solution
US20170213080A1 (en) Methods and systems for automatically and accurately detecting human bodies in videos and/or images
CN110210276A (en) A kind of motion track acquisition methods and its equipment, storage medium, terminal
EP2131328A2 (en) Method for automatic detection and tracking of multiple objects
US20020051578A1 (en) Method and apparatus for object recognition
CN114022830A (en) Target determination method and target determination device
JP2008542922A (en) Human detection and tracking for security applications
CN111225611A (en) System and method for assisting in analyzing a wound in a target object
US11631306B2 (en) Methods and system for monitoring an environment
JP2018026122A (en) Information processing device, information processing method, and program
Vieira et al. Spatial density patterns for efficient change detection in 3d environment for autonomous surveillance robots
Xiao et al. Building segmentation and modeling from airborne LiDAR data
Chamveha et al. Head direction estimation from low resolution images with scene adaptation
Nayagam et al. A survey on real time object detection and tracking algorithms
CN117523652A (en) Fall detection method and device, electronic equipment and storage medium
Tse et al. DeepClass: Edge based class occupancy detection aided by deep learning and image cropping
Ryan et al. Scene invariant crowd counting
Wang et al. A novel multi-cue integration system for efficient human fall detection
CN107578036A (en) A kind of depth image tumble recognizer based on wavelet moment

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication