CN111028912A - Environment-assisted life monitoring method and system - Google Patents
- Publication number
- CN111028912A CN111028912A CN201911338192.6A CN201911338192A CN111028912A CN 111028912 A CN111028912 A CN 111028912A CN 201911338192 A CN201911338192 A CN 201911338192A CN 111028912 A CN111028912 A CN 111028912A
- Authority
- CN
- China
- Prior art keywords
- data
- point cloud
- dimensional
- processing module
- dimensional point
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H20/00—ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance
- G16H20/30—ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance relating to physical therapies or activities, e.g. physiotherapy, acupressure or exercising
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/60—Type of objects
- G06V20/64—Three-dimensional objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
Abstract
The invention relates to an environment-assisted living monitoring method and system, wherein the method comprises the following steps: a data acquisition module acquires first three-dimensional point cloud data of a scene; the data acquisition module sends the first three-dimensional point cloud data to a data processing module; the data processing module extracts a human body target from the first three-dimensional point cloud data to generate second three-dimensional point cloud data; the data processing module calculates three-dimensional skeleton joint data of the human body target from the second three-dimensional point cloud data; the data processing module calculates the projection angle values of the three-dimensional skeleton joint data on the three-dimensional coordinate planes; the data processing module sends the projection angle values to a data output module; and the data output module draws and outputs a projection angle curve in real time according to the projection angle values.
Description
Technical Field
The invention relates to the field of three-dimensional data processing and automatic control, and in particular to an environment-assisted living monitoring method and system.
Background
With the arrival of an aging society, the medical care of elderly people has become an increasingly prominent problem. In current rehabilitation training for the elderly, wearable rehabilitation monitoring equipment is too heavy and inconvenient for frail elderly users to wear, so one-to-one guided training remains the norm. The training effect therefore depends on the skill of the rehabilitation therapist, placing high demands on the therapist's experience; evaluation of the rehabilitation state is largely subjective, rehabilitation data cannot be quantified, and continuous manual rehabilitation monitoring and evaluation consume manpower and impose high rehabilitation costs on patients.
Disclosure of Invention
The invention aims to provide an environment-assisted living monitoring method and system which overcome the defects of the prior art: by acquiring three-dimensional point cloud data of a scene with a Time-of-Flight (TOF) sensor and processing the acquired data, quantitative and visual real-time monitoring of the rehabilitation motion posture of a rehabilitation training patient is realized.
To achieve the above object, in one aspect, the present invention provides an environmental assisted living monitoring method, including:
the method comprises the steps that a data acquisition module acquires first three-dimensional point cloud data of a scene;
the data acquisition module sends the first three-dimensional point cloud data to a data processing module;
the data processing module extracts a human body target from the first three-dimensional point cloud data to generate second three-dimensional point cloud data;
the data processing module calculates and acquires three-dimensional skeleton joint data of the human body target according to the second three-dimensional point cloud data;
the data processing module calculates the projection angle value of the three-dimensional skeleton joint data on a three-dimensional coordinate plane;
the data processing module sends the projection angle value to a data output module;
and the data output module draws and outputs a projection angle curve in real time according to the projection angle value.
Further, the data acquisition module is a time-of-flight (TOF) sensor.
Further, the extraction of the human body target from the first three-dimensional point cloud data to generate the second three-dimensional point cloud data specifically comprises:
the data processing module first performs de-noising processing on the first three-dimensional point cloud data to generate first three-dimensional point cloud depth data;
the data processing module performs foreground extraction on the first three-dimensional point cloud depth data to generate second three-dimensional point cloud depth data;
the data processing module extracts the human body target from the second three-dimensional point cloud depth data to generate third point cloud depth data;
the data processing module recovers the third point cloud depth data to generate the second three-dimensional point cloud data.
Further, the calculating and obtaining the three-dimensional skeleton joint data of the human body target according to the second three-dimensional point cloud data specifically comprises:
the data processing module calculates and acquires three-dimensional point cloud grid data of the human body target according to the second three-dimensional point cloud data;
the data processing module calculates and acquires a geodesic distance according to the three-dimensional point cloud grid data;
the data processing module extracts the Reeb graph data of the human body target according to the geodesic distance;
and the data processing module extracts the three-dimensional skeleton joint data of the human body target according to the Reeb graph data.
Further, the monitoring method further includes:
the data processing module sends the three-dimensional skeleton joint data to the data output module;
and the data output module generates a visual three-dimensional skeleton map in real time according to the three-dimensional skeleton joint data.
Further, the monitoring method further includes:
the data processing module extracts the extreme values of the projection angle curve and stores them as a list;
and the data processing module sends the extreme values stored as a list to the data output module for visual output.
Further, before the data acquisition module acquires the first three-dimensional point cloud data of the scene, the method further includes:
the training mode selection module sends a data acquisition instruction to the data acquisition module;
the training mode is divided into: upper body training, lower body training and whole body training.
Further, the method further comprises:
the data processing module carries out segmented extraction of three-dimensional skeleton joint data according to the selected training mode:
when the training mode is upper body training, extracting upper body three-dimensional skeleton joint data of the three-dimensional skeleton joint data;
when the training mode is lower body training, extracting lower body three-dimensional skeleton joint data of the three-dimensional skeleton joint data;
and when the training mode is whole-body training, extracting whole-body three-dimensional skeleton joint data of the three-dimensional skeleton joint data.
In another aspect, the invention provides an environment-assisted living monitoring system comprising a data acquisition module, a data processing module, a data output module and a training mode selection module.
According to the environment-assisted living monitoring method and system provided by the embodiments of the invention, three-dimensional point cloud data of a scene are acquired by a system with an integrated TOF sensor, and data processing yields the three-dimensional skeleton joint data of the human body, the projection angle curves of the three-dimensional skeleton joints, and the extreme values of those curves. By analysing the projection angle curves and their extreme values, the training posture of the patient during rehabilitation training is quantified, so the patient can monitor the rehabilitation training in real time and adjust its difficulty without depending on an experienced rehabilitation therapist. Because the three-dimensional skeleton is processed in segments according to the rehabilitation training mode, the workload of calculating the projection plane angles of the three-dimensional skeleton joint data is halved, greatly improving the working efficiency of the system. During data acquisition the TOF sensor is unaffected by the external surface characteristics of an object, foreground camouflage, shadow, or occlusion. The environment-assisted living monitoring system is not limited to rehabilitation training of elderly people; it can also be installed for rehabilitation training of paralysed patients.
Drawings
Fig. 1 is a flowchart of a method for monitoring environmental assisted living according to an embodiment of the present invention;
fig. 2 is a schematic view of an environment-assisted living monitoring system architecture according to an embodiment of the present invention.
Detailed Description
The technical solution of the present invention is further described in detail by the accompanying drawings and embodiments. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant invention and not restrictive of the invention. It should be further noted that, for the convenience of description, only the portions related to the related invention are shown in the drawings.
It should be noted that the embodiments and features of the embodiments may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.
In order to better understand the monitoring method of environmental assisted living according to the embodiment of the present invention, a monitoring system of environmental assisted living for implementing the above method is first described below. Fig. 2 is a schematic view of an architecture of a monitoring system for environmental assisted living provided by an embodiment of the present invention, as shown in the figure, including: the device comprises a data acquisition module 1, a data processing module 2, a data output module 3 and a training mode selection module 4.
When performing rehabilitation monitoring of a rehabilitation training patient, the data acquisition module 1 is set up close to the patient. Preferably, the data acquisition module 1 is a time-of-flight (TOF) sensor, which gives a large field of view for data acquisition. To enable accurate posture recognition of the patient, it is further preferred to place the data acquisition module 1 three metres directly in front of the patient; in experiments, the topological quality of the three-dimensional point cloud data acquired by the data acquisition module 1 was best at this position.
The data processing module 2 has a separate operating system and microprocessor, and in a preferred embodiment, the data acquisition module 1 and the data processing module 2 are integrated on the same circuit board.
The data output module 3 is a display screen, which may be touch-sensitive or not. When the display screen is a touch screen, the training mode selection module 4 and the data output module share it, and the training mode is selected visually on the screen. The data output module 3 is connected to the data processing module 2 by wire or wirelessly; since the data output module 3 is only used for outputting data or selecting a training mode, its installation position is unrestricted and it can be placed wherever is convenient for the user or the monitor.
When the display screen used for data output is non-touch, it serves only for output display; the training mode selection module can then be integrated on the same circuit board as the data acquisition module 1, and the training mode is selected by remote control.
The embodiment of the invention provides an environment-assisted living monitoring method, which is applied to the above environment-assisted living monitoring system and performs real-time intelligent monitoring and data analysis of rehabilitation training for elderly patients or rehabilitation patients of other age groups. The flow of the method is shown in Fig. 1 and comprises the following steps:
step S110, a data acquisition module acquires first three-dimensional point cloud data of a scene.
Preferably, the data acquisition module uses a time-of-flight (TOF) sensor to capture the scene.
The TOF sensor emits a light signal through a built-in laser emission module and acquires depth and intensity information of the target scene through a built-in Complementary Metal-Oxide-Semiconductor (CMOS) photosensitive element, thereby completing three-dimensional reconstruction of the target scene. In embodiments of the invention, the TOF sensor only needs to acquire the depth information of the scene. Because the TOF sensor obtains a depth-of-field map of the three-dimensional scene through a CMOS pixel array and active modulated-light-source technology, rather than point by point, its imaging speed can reach hundreds of frames per second; at the same time the TOF sensor is compact and achieves low energy consumption.
The TOF sensor measures distance from the phase difference between the light emitted by the sensor's light source generator module and the light reflected back by the detected object, from which the distance travelled by the light waves is calculated. Compared with traditional image acquisition equipment, three-dimensional reconstruction from data acquired by the TOF sensor is therefore not affected by shielding or overlapping during image segmentation. Consequently, during rehabilitation monitoring, data acquisition and reconstruction of the training form are not affected by occlusion, and the movement form can be recovered well.
Specifically, before the data acquisition module acquires the first three-dimensional point cloud data of the scene, the rehabilitation monitor can select data monitoring according to the training mode of the rehabilitation training. Preferably, the training modes are divided into: upper body training, lower body training and whole body training.
Since paralysed patients often suffer from lower limb movement disorders, selecting the training mode halves the amount of calculation for joint data extraction when only the upper body or the lower body is being trained.
When the data acquisition module receives a data acquisition instruction from the training mode selection module, it starts to acquire image information of the scene, and the data are stored in three-dimensional form. In this embodiment the data acquisition module is placed facing the rehabilitation patient, so in the three-dimensional data structure two dimensions are the plane position data of the projection plane in the patient's front-back direction, and the third dimension is the distance between each acquired scene pixel and the light wave receiving array in the TOF sensor, i.e. the depth information of the scene.
And step S120, the data acquisition module sends the first three-dimensional point cloud data to a data processing module.
Specifically, the data acquisition module sends the stored first three-dimensional point cloud data to the data processing module over the circuit board as three-dimensional vectors, where each three-dimensional vector represents the three-dimensional position of one pixel.
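As an illustration of this per-pixel representation, the sketch below back-projects a TOF depth map into one three-dimensional vector per pixel. The camera intrinsics (fx, fy, cx, cy) are hypothetical placeholders, not values from the patent; a real TOF module supplies its own calibration.

```python
import numpy as np

def depth_to_point_cloud(depth, fx=300.0, fy=300.0, cx=None, cy=None):
    """Back-project a TOF depth map (rows x cols) into an N x 3 array of
    three-dimensional vectors, one per pixel.  Intrinsics are illustrative."""
    h, w = depth.shape
    cx = (w - 1) / 2.0 if cx is None else cx
    cy = (h - 1) / 2.0 if cy is None else cy
    u, v = np.meshgrid(np.arange(w), np.arange(h))   # pixel coordinates
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

depth = np.full((48, 32), 3.0)        # flat scene 3 m away, 32 x 48 sensor
cloud = depth_to_point_cloud(depth)
print(cloud.shape)                    # one 3-vector per pixel: (1536, 3)
```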
Step S130, the data processing module extracts a human body target from the first three-dimensional point cloud data to generate second three-dimensional point cloud data. The method comprises the following specific steps:
step S131, the data processing module firstly carries out drying processing on the first three-dimensional point cloud data to generate first three-dimensional point cloud depth data.
Specifically, the data processing module extracts the depth data in the first three-dimensional point cloud data, generates the fourth point cloud depth data and establishes a mapping relation with the three-dimensional vector, so that each depth data can find the three-dimensional vector according to the mapping relation,
and the data processing module generates a two-dimensional point cloud matrix from the fourth cloud depth data. The first three-dimensional point cloud data acquired by the TOF sensor is stored in the TOF sensor in an array form, and the TOF sensor pixel array is the resolution of the sensor, so that the number of columns and the number of rows of a two-dimensional point cloud matrix generated by the fourth cloud depth data are designed to be consistent with the resolution of the TOF sensor. In a specific example, when the resolution of the sensor in the TOF camera is 32 × 48, that is, the number of horizontal pixels of the TOF camera sensor is 32, and the number of vertical pixels of the TOF camera sensor is 48, the number of rows in the two-dimensional point cloud matrix is 48, and the number of columns in the two-dimensional point cloud matrix is 32. Preferably, the element position arrangement in the two-dimensional point cloud matrix is kept consistent with the storage position of the TOF camera sensor array, so that each adjacent element in the two-dimensional point cloud matrix is also adjacent in the actual scene.
Extracting all 3 multiplied by 3 sub-matrices of the two-dimensional point cloud matrix. The number K of the 3 multiplied by 3 submatrices which can be extracted from the two-dimensional point cloud matrix and are not repeated most is the number of the internal elements surrounded by the first row, the last row, the first column and the last column of the two-dimensional point cloud matrix. In a specific example, when the two-dimensional point cloud matrix is 32 × 48, there are 1536 elements in total, 156 elements of the first row, the last row, the first column and the last column of the matrix are removed, that is, K is 1380, so that 1380 3 × 3 sub-matrices can be extracted, and when K is the maximum value, the drying judgment of each point can be guaranteed to the maximum extent.
And establishing a position index of the central element of the 3 multiplied by 3 sub-matrix in the two-dimensional point cloud matrix. Marking a row mark and a column mark of the central element of each 3 multiplied by 3 sub-matrix in the two-dimensional point cloud matrix, and matching corresponding depth data in the two-dimensional point cloud matrix according to the row mark and the column mark. Because the drying judgment is carried out on the central element of the 3 multiplied by 3 sub-matrix, only the position of the central element needs to be marked, thereby greatly reducing the calculation amount of the system.
Within each 3 × 3 sub-matrix, a first result, the sum of the absolute differences between the central element and each of the other elements, is compared with a first threshold:
if the first result is smaller than the first threshold, the element corresponding to the position of the central element is retained in the two-dimensional point cloud matrix;
if the first result is not less than the first threshold, the 2 × 2 sub-matrices of the 3 × 3 sub-matrix are extracted;
the absolute difference between each element of the 2 × 2 sub-matrices and the central element is computed, and the first minimum value among these differences is compared with a second threshold;
if the first minimum value is not smaller than the second threshold, the central element is judged to be a noise point; its position in the two-dimensional point cloud matrix is found through the position index, and the corresponding element is discarded;
if the first minimum value is smaller than the second threshold, the element corresponding to the position of the central element is retained in the two-dimensional point cloud matrix.
the first threshold and the second threshold can be set to appropriate values by selecting different thresholds for multiple times according to a standard detected scene, and preferably, the second threshold is not more than half of the first threshold. In a specific embodiment, when the two-dimensional point cloud matrix is 240 × 320, the first threshold value is preferably 0.2. When the first result is not less than the first threshold, the 2 × 2 sub-matrix of the 3 × 3 sub-matrix is extracted to perform secondary noise point judgment, so that the false deletion rate of the noise points can be effectively reduced.
And when the first result is smaller than the first threshold value and the first minimum value is smaller than the second threshold value, fourth cloud depth data corresponding to elements reserved in the two-dimensional point cloud matrix are generated into the first three-dimensional point cloud depth data.
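The two-stage noise test of step S131 can be sketched as follows. The thresholds t1 and t2 are illustrative values chosen per the suggestion that the second threshold not exceed half the first; in practice they would be tuned on a standard scene as described above.

```python
import numpy as np

def denoise_depth(mat, t1=0.2, t2=0.1):
    """Sketch of the two-stage noise judgement on a 2-D point cloud matrix.
    Only interior elements are tested (first/last row and column skipped,
    as in the text).  Returns a boolean mask of elements to keep."""
    h, w = mat.shape
    keep = np.zeros((h, w), dtype=bool)
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            center = mat[r, c]
            window = mat[r - 1:r + 2, c - 1:c + 2]
            diffs = np.abs(window - center)
            first_result = diffs.sum()       # center contributes zero
            if first_result < t1:
                keep[r, c] = True            # smooth region, keep
            else:
                # second stage: smallest difference to any neighbour; an
                # element far from ALL neighbours is an isolated noise point
                neighbour_diffs = np.delete(diffs.ravel(), 4)
                if neighbour_diffs.min() < t2:
                    keep[r, c] = True
    return keep

m = np.full((5, 5), 1.0)
m[2, 2] = 9.0                                # isolated depth spike
mask = denoise_depth(m)
print(mask[2, 2], mask[1, 1])                # spike rejected, smooth kept
```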
Step S132, the data processing module performs foreground extraction on the first three-dimensional point cloud depth data to generate second three-dimensional point cloud depth data.
Specifically, the data processing module calculates the normal vectors of the first three-dimensional point cloud depth data. Where foreground and background are mixed, the point cloud depth data exhibit different normal vector characteristics; the background, whose normal vectors share the same characteristics, is removed and the foreground retained, generating the second three-dimensional point cloud depth data.
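One possible reading of this normal-vector step, under the assumption that the background is a dominant planar region such as a wall, is sketched below. The parameter names and the crude dominant-direction estimate are assumptions for illustration, not the patent's method.

```python
import numpy as np

def remove_dominant_plane(depth, angle_tol=0.1):
    """Estimate a per-pixel surface normal from depth gradients, find the
    most common normal direction (assumed to belong to the background
    plane), and keep only pixels whose normal deviates from it."""
    gy, gx = np.gradient(depth)                      # row-wise, col-wise slopes
    normals = np.dstack([-gx, -gy, np.ones_like(depth)])
    normals /= np.linalg.norm(normals, axis=2, keepdims=True)
    # dominant direction = normalised mean normal (a crude mode estimate)
    dominant = normals.reshape(-1, 3).mean(axis=0)
    dominant /= np.linalg.norm(dominant)
    cos_sim = np.abs(normals @ dominant)
    return cos_sim < 1.0 - angle_tol                 # True where "foreground"

depth = np.full((10, 10), 2.0)                       # flat wall 2 m away
depth[4:7, 4:7] += np.array([0.0, 1.0, 2.0])         # tilted patch in front
fg = remove_dominant_plane(depth)
print(fg[5, 5], fg[1, 1])
```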
Step S133, the data processing module extracts the human body target from the second three-dimensional point cloud depth data to generate third point cloud depth data.
Specifically, the data processing module performs edge extraction of the human body outline on the second three-dimensional point cloud depth data to generate the third point cloud depth data. In a preferred embodiment, the data processing module uses a Bayesian segmentation method to extract the human body target from the second three-dimensional point cloud depth data.
And step S134, the data processing module recovers the third point cloud depth data to generate second three-dimensional point cloud data.
Specifically, the first three-dimensional point cloud data are stored as three-dimensional vectors. When extracting the first three-dimensional point cloud depth data, the data processing module establishes a mapping relation between the depth information and the three-dimensional point cloud data, so each depth datum can recover its three-dimensional vector from the mapping relation. Since the third point cloud depth data are a subset of the first three-dimensional point cloud depth data, the data processing module can restore the three-dimensional vectors from the third point cloud depth data and the mapping relation to generate the second three-dimensional point cloud data; the second three-dimensional point cloud data thus generated are the human body target data remaining from the first three-dimensional point cloud data.
Step S140, the data processing module calculates the three-dimensional skeleton joint data of the human body target from the second three-dimensional point cloud data. The specific steps are as follows:
and step S141, the data processing module calculates and acquires three-dimensional point cloud grid data of the human body target according to the second three-dimensional point cloud data.
A three-dimensional grid can be partitioned in several ways, for example into triangles, quadrangles or polyhedra. Preferably, the data processing module computes a triangular grid over the second three-dimensional point cloud data to generate triangular three-dimensional point cloud grid data, which comprise the coordinate points of the second three-dimensional point cloud data and the triangle relations between them.
And step S142, calculating and acquiring a geodesic distance by the data processing module according to the three-dimensional point cloud grid data.
Specifically, the data processing module selects a reference point in the three-dimensional point cloud grid data and then calculates the geodesic distance from every other point to the reference point. Preferably, in the human body three-dimensional point cloud grid data the vertex coordinate of the human body target is selected as the reference point, and the geodesic distances are calculated with Dijkstra's algorithm.
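A minimal sketch of this step, assuming geodesic distances are approximated by Dijkstra's algorithm over the edge graph of the triangular grid (a standard approximation of true surface geodesics):

```python
import heapq
from collections import defaultdict

def geodesic_distances(vertices, triangles, source):
    """Approximate geodesic distances on a triangle mesh as shortest paths
    along mesh edges.  `vertices` is a list of (x, y, z) tuples, `triangles`
    a list of index triples, `source` the reference vertex index (the text
    uses the top vertex of the body)."""
    def dist(a, b):
        return sum((p - q) ** 2 for p, q in zip(vertices[a], vertices[b])) ** 0.5

    graph = defaultdict(list)
    for i, j, k in triangles:                # each triangle contributes 3 edges
        for a, b in ((i, j), (j, k), (k, i)):
            graph[a].append((b, dist(a, b)))
            graph[b].append((a, dist(a, b)))

    d = {source: 0.0}
    pq = [(0.0, source)]
    while pq:                                # standard Dijkstra with a heap
        du, u = heapq.heappop(pq)
        if du > d.get(u, float("inf")):
            continue
        for v, w in graph[u]:
            nd = du + w
            if nd < d.get(v, float("inf")):
                d[v] = nd
                heapq.heappush(pq, (nd, v))
    return d

verts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 0)]   # unit square, 2 triangles
tris = [(0, 1, 2), (1, 3, 2)]
d = geodesic_distances(verts, tris, 0)
print(d)
```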
Step S143, the data processing module extracts the Reeb graph data of the human body target according to the geodesic distances.
Specifically, the data processing module constructs a Morse function from the geodesic distances and extracts the Reeb graph data of the human body target from the constructed Morse function. The extracted Reeb graph data consist of a series of arcs; each arc comprises a start point, an end point and a series of nodes along the arc, and each node has three-dimensional coordinates and a weight, the weight being the number of points of the original input triangular three-dimensional point cloud grid data merged into that node.
And step S144, the data processing module extracts the three-dimensional skeleton joint data of the human body target according to the Reeb graph data.
Specifically, all key nodes of the Reeb graph are extracted. Preferably, nodes whose degree is not 2 are taken as key nodes, generating the first joint data; the degree of a node is the number of other nodes connected to it. The curvature at the points on the Reeb graph arcs is then calculated, and when the curvature is larger than a third threshold the node is considered a joint point, generating the second joint data; the third threshold is preferably -0.9. The first joint data and the second joint data are combined to generate the three-dimensional skeleton joint data of the human body target.
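The degree-based key-node rule can be sketched as follows; the curvature test on arc interiors is omitted for brevity, so this covers only the first-joint-data half of the step, and the stick-figure arcs are hypothetical.

```python
from collections import defaultdict

def skeleton_joints(arcs):
    """A Reeb-graph node whose degree differs from 2 (an end point or a
    branching point) is taken as a key joint.  `arcs` is a list of
    (node_a, node_b) edges of the Reeb graph."""
    degree = defaultdict(int)
    for a, b in arcs:
        degree[a] += 1
        degree[b] += 1
    return sorted(n for n, d in degree.items() if d != 2)

# tiny hypothetical figure: chain 0-1-2-5 with two arms 3, 4 off node 1
arcs = [(0, 1), (1, 2), (2, 5), (1, 3), (1, 4)]
joints = skeleton_joints(arcs)
print(joints)   # node 2 (degree 2, mid-chain) is not a joint
```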
And S145, the data processing module sends the three-dimensional skeleton joint data to the data output module.
Specifically, the data output module receives, in real time, the three-dimensional skeleton joint data acquired in each frame.
And step S146, the data output module generates a visualized three-dimensional skeleton map in real time according to the three-dimensional skeleton joint data.
Specifically, the data output module connects the discrete three-dimensional skeleton joint data into joints and generates a visualized three-dimensional skeleton map, so that the rehabilitation training patient or the rehabilitation therapist can observe the rehabilitation training posture online in real time, allowing the patient to correct the posture in time or the therapist to correct the movements in time.
And S150, calculating a projection angle value of the three-dimensional skeleton joint data on a three-dimensional coordinate plane by the data processing module.
Specifically, before calculating the projection angle value of the three-dimensional skeleton joint data on the three-dimensional coordinate plane, the method further comprises:
the data processing module carries out segmented extraction of three-dimensional skeleton joint data according to the selected training mode:
when the training mode is upper body training, extracting upper body three-dimensional skeleton joint data of the three-dimensional skeleton joint data;
when the training mode is lower body training, extracting lower body three-dimensional skeleton joint data of the three-dimensional skeleton joint data;
and when the training mode is whole-body training, extracting whole-body three-dimensional skeleton joint data of the three-dimensional skeleton joint data.
And if the extracted data is the upper body three-dimensional skeleton joint data, calculating the projection angle value of the upper body three-dimensional skeleton joint data on the three-dimensional coordinate plane.
And if the extracted data is the lower body three-dimensional skeleton joint data, calculating the projection angle value of the lower body three-dimensional skeleton joint data on the three-dimensional coordinate plane.
And if the extracted data is the whole body three-dimensional skeleton joint data, calculating the projection angle value of the whole body three-dimensional skeleton joint data on the three-dimensional coordinate plane.
By extracting the three-dimensional skeleton joint data in segments, the computational load of the processor can be reduced by about half, effectively improving the efficiency of the system.
Preferably, when distinguishing the upper body skeleton joint data from the lower body skeleton joint data, all the three-dimensional skeleton joint data are labeled: joints above the labeled hip joints are classified as upper body skeleton joint data, and the labeled hip joints and the joints below them are classified as lower body skeleton joint data.
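The segmented extraction by training mode described above can be sketched as follows. The joint record layout (an `id` and a height coordinate `y`) and the mode names are illustrative assumptions:

```python
def split_by_training_mode(joints, mode, hip_ids):
    """Return the subset of joints matching the selected training mode,
    splitting upper/lower body at the labeled hip joints."""
    if mode == "whole_body":
        return joints
    hip_y = min(j["y"] for j in joints if j["id"] in hip_ids)
    if mode == "upper_body":
        return [j for j in joints if j["y"] > hip_y]       # above the hips
    if mode == "lower_body":
        return [j for j in joints if j["y"] <= hip_y]      # hips and below
    raise ValueError(f"unknown training mode: {mode}")
```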
Because there are three three-dimensional coordinate planes, each item of three-dimensional skeleton joint data has three projection angle values. In a specific embodiment, when observing the projection angle data, only the projection angle curve with the largest fluctuation range and its extrema are output for comparison.
Furthermore, the rehabilitation training posture of the patient can be reconstructed from the projection angles of the same three-dimensional skeleton joint data on the three projection planes. In a specific embodiment, when the patient lifts a barbell forward with both hands, the projection angle on the plane parallel to the front and back of the body remains constant, while the projection angles on the plane dividing the left and right of the body and on the plane dividing the top and bottom of the body increase and decrease periodically.
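For illustration only, the three projection angle values of a bone segment can be computed by dropping one coordinate at a time and measuring the in-plane angle with `atan2`. The axis-to-plane correspondence in the comments is an assumption (the patent does not fix a coordinate convention):

```python
import math

def projection_angles(p, q):
    """Project the bone vector q - p onto the three coordinate planes and
    return the in-plane angle of each projection, in degrees."""
    dx, dy, dz = (q[i] - p[i] for i in range(3))
    return {
        "xy": math.degrees(math.atan2(dy, dx)),  # e.g. plane parallel to front/back
        "xz": math.degrees(math.atan2(dz, dx)),  # e.g. plane dividing top/bottom
        "yz": math.degrees(math.atan2(dz, dy)),  # e.g. plane dividing left/right
    }
```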
Step S160, the data processing module sends the projection angle value to a data output module.
Specifically, the motion of the rehabilitation training patient is periodic during rehabilitation training; therefore, the projection angle value obtained by projecting the three-dimensional skeleton joint data also changes periodically.
And S170, the data output module draws an output projection angle curve in real time according to the projection angle value.
Specifically, the data output module performs smooth curve fitting on the projection angle values received in each frame to form a projection angle curve, which is output visually.
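The patent does not specify the fitting method; as one illustrative stand-in, a centred moving average over the per-frame angle values already yields a smooth curve suitable for display:

```python
def smooth_curve(angles, window=5):
    """Centred moving average of per-frame projection angle values.
    The window shrinks near the ends so the output has the same length."""
    half = window // 2
    out = []
    for i in range(len(angles)):
        lo, hi = max(0, i - half), min(len(angles), i + half + 1)
        out.append(sum(angles[lo:hi]) / (hi - lo))
    return out
```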
Further, the data processing module extracts and stores an extreme value of the projection angle curve in a list manner; and the data processing module sends the extremum stored in the list mode to the data output module for visual output.
Because the projection angle changes periodically, different extrema exist within each period of the curve. To facilitate analysis by the rehabilitation training patient or the rehabilitation therapist and to guide targeted rehabilitation training, the data processing module extracts and records the extrema within each period formed by the projection angle values. When the preset recording period has elapsed, the data processing module sends all the recorded extrema to the data output module in the form of a list for visual output. Preferably, the list includes the name of the rehabilitation training pose, the value of each extremum, and the corresponding recording time.
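The extremum recording described above can be sketched as follows; the record field names (`pose`, `extremum`, `time`) mirror the preferred list contents but are otherwise assumptions:

```python
def record_extrema(curve, pose_name, times):
    """Collect every local maximum and minimum of a projection angle curve,
    tagging each with the pose name and the corresponding frame time."""
    records = []
    for i in range(1, len(curve) - 1):
        is_max = curve[i] > curve[i - 1] and curve[i] > curve[i + 1]
        is_min = curve[i] < curve[i - 1] and curve[i] < curve[i + 1]
        if is_max or is_min:
            records.append({"pose": pose_name,
                            "extremum": curve[i],
                            "time": times[i]})
    return records
```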
The patient or the rehabilitation therapist can quantify the progress of the rehabilitation training by comparing the extrema in the list. By comparing the extrema of a rehabilitation training session held for a specified time, the patient's incremental progress can be identified. By comparing the extrema across multiple rehabilitation training sessions, the patient's progress over a period of time can be assessed. By observing the projection angle curve, the change in the patient's overall motion posture can be followed. With this quantification and visual comparison, the rehabilitation training patient can adjust the difficulty of the rehabilitation training independently without the professional advice of a therapist, greatly reducing the labor cost of rehabilitation training.
The above is the complete process of the environment-assisted living monitoring method provided by the embodiment of the invention.
According to the environment-assisted living monitoring method and system provided by the embodiments of the invention, a TOF sensor collects three-dimensional point cloud data of a scene; the three-dimensional point cloud data undergo denoising, foreground extraction and human body target extraction to obtain human body three-dimensional skeleton joint data; the three-dimensional skeleton joint data are projected onto the three-dimensional coordinate planes to obtain projection angle values; and the training posture of the rehabilitation training patient during rehabilitation training is quantified by analyzing the projection angle curve and its extrema. The rehabilitation training patient can thus monitor his or her own rehabilitation training in real time and adjust its difficulty independently, without an experienced rehabilitation therapist. In addition, the three-dimensional skeleton is processed in segments according to the rehabilitation training mode, so that the workload of calculating the projection plane angles of the three-dimensional skeleton joint data is reduced by half, greatly improving the working efficiency of the system.
Those of skill would further appreciate that the various illustrative components and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative components and steps have been described above generally in terms of their functionality in order to clearly illustrate this interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied in hardware, a software module executed by a processor, or a combination of the two. A software module may reside in random access memory (RAM), read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The above-mentioned embodiments are intended to illustrate the objects, technical solutions and advantages of the present invention in further detail, and it should be understood that the above-mentioned embodiments are merely exemplary embodiments of the present invention, and are not intended to limit the scope of the present invention, and any modifications, equivalent substitutions, improvements and the like made within the spirit and principle of the present invention should be included in the scope of the present invention.
Claims (10)
1. An environmental assisted living monitoring method, comprising:
the method comprises the steps that a data acquisition module acquires first three-dimensional point cloud data of a scene;
the data acquisition module sends the first three-dimensional point cloud data to a data processing module;
the data processing module extracts a human body target from the first three-dimensional point cloud data to generate second three-dimensional point cloud data;
the data processing module calculates and acquires three-dimensional skeleton joint data of the human body target according to the second three-dimensional point cloud data;
the data processing module calculates the projection angle value of the three-dimensional skeleton joint data on a three-dimensional coordinate plane;
the data processing module sends the projection angle value to a data output module;
and the data output module draws an output projection angle curve in real time according to the projection angle value.
2. The environmental assisted living monitoring method of claim 1, wherein the data acquisition module is a time-of-flight (TOF) sensor.
3. The environmental assisted living monitoring method according to claim 1, wherein extracting the human body target from the first three-dimensional point cloud data to generate the second three-dimensional point cloud data specifically comprises:
the data processing module firstly carries out denoising processing on the first three-dimensional point cloud data to generate first three-dimensional point cloud depth data;
the data processing module performs foreground extraction on the first three-dimensional point cloud depth data to generate second three-dimensional point cloud depth data;
the data processing module extracts a human body target from the second three-dimensional point cloud depth data to generate third point cloud depth data;
and the data processing module recovers the third point cloud depth data to generate second three-dimensional point cloud data.
4. The environmental assisted living monitoring method according to claim 1, wherein the calculating and obtaining of the three-dimensional skeleton joint data of the human body target according to the second three-dimensional point cloud data specifically comprises:
the data processing module calculates and acquires three-dimensional point cloud grid data of the human body target according to the second three-dimensional point cloud data;
the data processing module calculates and acquires a geodesic distance according to the three-dimensional point cloud grid data;
the data processing module extracts the Reeb graph data of the human body target according to the geodesic distance;
and the data processing module extracts the three-dimensional skeleton joint data of the human body target according to the Reeb graph data.
5. The environmental assisted living monitoring method of claim 1, further comprising:
the data processing module sends the three-dimensional skeleton joint data to the data output module;
and the data output module generates a visual three-dimensional skeleton map in real time according to the three-dimensional skeleton joint data.
6. The environmental assisted living monitoring method of claim 1, further comprising:
the data processing module extracts and stores the extreme value of the projection angle curve in a list mode;
and the data processing module sends the extremum stored in the list mode to the data output module for visual output.
7. The method for environmental assisted living monitoring according to claim 1, wherein before the data acquisition module acquires the first three-dimensional point cloud data of the scene, the method further comprises:
the training mode selection module sends a data acquisition instruction to the data acquisition module;
the training mode is divided into: upper body training, lower body training and whole body training.
8. The method for environmental assisted living monitoring of claim 7, further comprising:
the data processing module carries out segmented extraction of three-dimensional skeleton joint data according to the selected training mode:
when the training mode is upper body training, extracting upper body three-dimensional skeleton joint data of the three-dimensional skeleton joint data;
when the training mode is lower body training, extracting lower body three-dimensional skeleton joint data of the three-dimensional skeleton joint data;
and when the training mode is whole-body training, extracting whole-body three-dimensional skeleton joint data of the three-dimensional skeleton joint data.
9. An environmental assisted living monitoring system, characterized in that the system comprises a data acquisition module, a data processing module and a data output module according to any one of claims 1-6.
10. The environmental assisted living monitoring system of claim 9, further comprising the training mode selection module according to claims 7 and 8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911338192.6A CN111028912A (en) | 2019-12-23 | 2019-12-23 | Environment-assisted life monitoring method and system |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111028912A true CN111028912A (en) | 2020-04-17 |
Family
ID=70212678
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20030235332A1 (en) * | 2002-06-20 | 2003-12-25 | Moustafa Mohamed Nabil | System and method for pose-angle estimation |
CN102129719A (en) * | 2011-03-17 | 2011-07-20 | 北京航空航天大学 | Virtual human dynamic model-based method for extracting human skeletons |
CN106683070A (en) * | 2015-11-04 | 2017-05-17 | 杭州海康威视数字技术股份有限公司 | Body height measurement method and body height measurement device based on depth camera |
US20190156507A1 (en) * | 2016-10-10 | 2019-05-23 | Tencent Technology (Shenzhen) Company Limited | Method and apparatus for processing point cloud data and storage medium |
CN109920208A (en) * | 2019-01-31 | 2019-06-21 | 深圳绿米联创科技有限公司 | Tumble prediction technique, device, electronic equipment and system |
CN110544302A (en) * | 2019-09-06 | 2019-12-06 | 广东工业大学 | Human body action reconstruction system and method based on multi-view vision and action training system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20200417 |