CN113240964B - Cardiopulmonary resuscitation teaching machine - Google Patents
- Publication number
- CN113240964B (application CN202110522407.0A)
- Authority
- CN
- China
- Prior art keywords
- image
- gesture
- trainer
- control device
- pressing
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G09B9/00—Simulators for teaching or training purposes
- G06T5/70
- G06T7/0002—Inspection of images, e.g. flaw detection
- G09B5/02—Electrically-operated educational appliances with visual presentation of the material to be studied, e.g. using film strip
- G09B7/02—Electrically-operated teaching apparatus or devices working with questions and answers of the type wherein the student is expected to construct an answer to the question which is presented or wherein the machine gives an answer to the question presented by a student
- G06T2207/10004—Still image; Photographic image
Abstract
The invention provides a cardiopulmonary resuscitation teaching machine comprising a trunk simulation device, a main control device, a display and a camera. The trunk simulation device comprises a trunk shell, a pressing simulation device and a sensor device; the sensor device, the display and the camera are each connected to the main control device. The sensor device acquires force data and depth data generated when the trainer presses the pressing simulation device, and the camera acquires a gesture image of the trainer during pressing. The main control device analyses the pressing action based on the force data, the depth data or the gesture image and generates corresponding action prompt information, which the display presents to the trainer. The invention thereby realises automatic correction prompts for the trainer's incorrect actions.
Description
Technical Field
The invention relates to the field of teaching, in particular to a cardiopulmonary resuscitation teaching machine.
Background
Cardiopulmonary resuscitation teaching devices currently developed worldwide fall mainly into two categories: professional hospital training devices, which must handle complex scenarios and therefore generally have high-precision, high-fidelity mechanical structures, and teaching machines with specific functions, such as fully automatic chest-compression machines, whose structures are specialised for their particular function.
Existing cardiopulmonary resuscitation teaching devices have limited data acquisition and feedback capability and cannot intuitively feed back the various incorrect operations of a trainer. For example, with some simulation dummies it is difficult for trainers to see their own performance during compression training, so a professional instructor must stand by to correct their errors. High-precision, high-fidelity professional training machines, on the other hand, are expensive and therefore generally confined to hospitals and training institutions; they are difficult for the general public to access and hence hard to use for public education.
Disclosure of Invention
In view of the above problems, the present invention aims to provide a cardiopulmonary resuscitation teaching machine.
The invention provides a cardiopulmonary resuscitation teaching machine, which comprises a trunk simulation device, a main control device, a display and a camera;
the trunk simulation device comprises a trunk shell, a pressing simulation device and a sensor device;
the sensor device, the display and the camera are respectively connected with the main control device;
the sensor device is used for acquiring force data and depth data generated when the trainer presses the pressing simulation device and transmitting the force data and the depth data to the main control device;
the camera is used for acquiring a gesture image when a trainer presses the pressing simulation device and transmitting the gesture image to the main control device;
the main control device is used for analyzing the pressing action of the trainer based on the force data, the depth data or the gesture image and obtaining corresponding action prompt information;
the display is used for displaying the action prompt information to the trainer.
Preferably, the pressing simulation device comprises a spring, a steel plate and a steel base which are arranged inside the trunk shell, and a simulated human skin which covers the outside of the trunk shell;
the steel base is arranged at the bottom of the trunk shell;
one end of the spring is connected with the steel base, and the other end of the spring is connected with the steel plate;
the steel plate is connected with the top of the trunk shell.
Preferably, the sensor device comprises a depth detection sensor and a force detection sensor;
the depth detection sensor is used for acquiring the falling depth of the steel plate when the trainer presses the pressing simulation device;
the force detection sensor is used for acquiring the pressure on the steel plate when the trainer presses the pressing simulation device.
Preferably, the sensor device further comprises an MCU;
the depth detection sensor and the force detection sensor are respectively connected with the MCU;
the depth detection sensor and the force detection sensor are respectively used for transmitting the descending depth and the pressure to the MCU;
and the MCU is used for transmitting the descending depth and the pressure to the main control device through an RS232 communication serial port.
Preferably, the cardiopulmonary resuscitation teaching machine further comprises a microphone, a speaker, a wireless communication device and a cloud server;
the microphone, the loudspeaker and the wireless communication device are respectively connected with the main control device;
the master control device is further used for sending the force data, the depth data and the gesture image to the cloud server through the wireless communication device;
the microphone is used for acquiring the question voice of the trainer and transmitting it to the main control device;
the main control device is also used for transmitting the question voice to the cloud server and receiving the answer information returned by the cloud server;
the loudspeaker is used for playing the answer information to the trainer.
Preferably, the speaker is further configured to play the action prompt message to the trainer.
Preferably, the microphone, the speaker and the wireless communication device are respectively connected with the main control device through a USB communication interface.
Compared with the prior art, the invention has the advantages that:
the invention can analyze training data such as force data, depth data, gesture images and the like generated when a trainer presses the pressing simulation device, thereby giving corresponding action prompt information to the trainer and realizing automatic correction prompt of the wrong action of the trainer. The present invention also enables automatic answers to the trainee's voice questions. Meanwhile, the training data can be transmitted to the cloud server, so that trainers can conveniently check the training results at any time and any place.
Drawings
The invention is further illustrated by means of the attached drawings, but the embodiments in the drawings do not constitute any limitation to the invention, and for a person skilled in the art, other drawings can be obtained on the basis of the following drawings without inventive effort.
Fig. 1 is a diagram of an exemplary embodiment of a machine for teaching cardiopulmonary resuscitation according to the present invention.
Fig. 2 is another exemplary embodiment of a cardiopulmonary resuscitation teaching machine of the present invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the accompanying drawings are illustrative only for the purpose of explaining the present invention and are not to be construed as limiting the present invention.
In one embodiment, as shown in fig. 1, the invention provides a cardiopulmonary resuscitation teaching machine, comprising a torso simulation device, a master control device, a display and a camera;
the trunk simulation device comprises a trunk shell, a pressing simulation device and a sensor device;
the sensor device, the display and the camera are respectively connected with the main control device;
the sensor device is used for acquiring force data and depth data generated when the trainer presses the pressing simulation device and transmitting the force data and the depth data to the main control device;
the camera is used for acquiring a gesture image when a trainer presses the pressing simulation device and transmitting the gesture image to the main control device;
the main control device is used for analyzing the pressing action of the trainer based on the force data, the depth data or the gesture image and obtaining corresponding action prompt information;
the display is used for displaying the action prompt information to the trainer.
Preferably, the pressing simulation device comprises a spring, a steel plate and a steel base which are arranged inside the trunk shell, and a simulated human skin which covers the outside of the trunk shell;
the steel base is arranged at the bottom of the trunk shell;
one end of the spring is connected with the steel base, and the other end of the spring is connected with the steel plate;
the steel plate is connected with the top of the trunk shell.
Preferably, the sensor device comprises a depth detection sensor and a force detection sensor;
the depth detection sensor is used for acquiring the falling depth of the steel plate when the trainer presses the pressing simulation device;
the force detection sensor is used for acquiring the pressure on the steel plate when the trainer presses the pressing simulation device.
Preferably, the sensor device further comprises an MCU;
the depth detection sensor and the force detection sensor are respectively connected with the MCU;
the depth detection sensor and the force detection sensor are respectively used for transmitting the descending depth and the pressure to the MCU;
and the MCU is used for transmitting the descending depth and the pressure to the main control device through an RS232 communication serial port.
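The MCU-to-host link can be sketched as a simple line-based serial protocol. The frame format below (`D:<depth_mm>,F:<force_N>`) is a hypothetical example for illustration only; the patent does not specify the framing used over the RS232 port.

```python
def parse_sensor_frame(frame: str) -> dict:
    """Parse one hypothetical RS232 line of the form 'D:42.5,F:310.0'
    into descent depth (mm) and pressing force (N)."""
    fields = dict(part.split(":") for part in frame.strip().split(","))
    return {"depth_mm": float(fields["D"]), "force_n": float(fields["F"])}

# On the host side, frames would be read with e.g. pyserial over an
# RS232 adapter (port name and baud rate are assumptions):
#   ser = serial.Serial("/dev/ttyUSB0", 115200)
#   sample = parse_sensor_frame(ser.readline().decode())
```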
Since the pressing action is cyclic, the depth of descent and the force level each include peaks and valleys.
Preferably, as shown in fig. 2, the cardiopulmonary resuscitation teaching machine further comprises a microphone, a speaker, a wireless communication device and a cloud server;
the microphone, the loudspeaker and the wireless communication device are respectively connected with the master control device;
the master control device is further used for sending the force data, the depth data and the gesture image to the cloud server through the wireless communication device;
the microphone is used for acquiring the question voice of the trainer and transmitting it to the main control device;
the main control device is also used for transmitting the question voice to the cloud server and receiving the answer information returned by the cloud server;
the loudspeaker is used for playing the answer information to the trainer.
Preferably, the cloud server obtains the answer information by:
analyzing the questioning voice to obtain word segmentation;
matching the word segmentation with a preset question library in a cloud server to obtain answers corresponding to the questions in the questioning voice;
and sending the answer serving as answer information to the main control device.
After receiving the answer information, the main control device displays it to the trainer via the display and plays it to the trainer via the loudspeaker.
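The matching of the tokenized question against the preset question library can be sketched as naive keyword overlap. The library entries and function names below are illustrative assumptions; the stated depth and rate answers follow common CPR guideline values, not text from the patent.

```python
def answer_question(question_tokens, question_library):
    """Return the answer whose stored question shares the most tokens
    with the trainer's tokenized question (naive keyword matching)."""
    best_answer, best_overlap = None, 0
    for stored_tokens, answer in question_library:
        overlap = len(set(question_tokens) & set(stored_tokens))
        if overlap > best_overlap:
            best_answer, best_overlap = answer, overlap
    return best_answer

# Hypothetical preset question library held on the cloud server.
library = [
    (["compression", "depth"], "Press 5-6 cm deep."),
    (["compression", "rate"], "Press 100-120 times per minute."),
]
```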
Preferably, the speaker is further configured to play the action prompt message to the trainer.
Preferably, the microphone, the speaker and the wireless communication device are respectively connected with the main control device through a USB communication interface.
Preferably, the cardiopulmonary resuscitation teaching machine further comprises a head simulation device, a neck simulation device, an arm simulation device, a leg simulation device and a traveling device;
the head simulation device is connected with the trunk simulation device through the neck simulation device;
the number of the arm simulation devices is 2, and the 2 arm simulation devices are respectively arranged at the left side and the right side of the trunk simulation device;
the number of the leg simulation devices is 2, and the 2 leg simulation devices are respectively connected with the trunk simulation device;
the traveling device is connected with the leg simulating device.
Preferably, the traveling device comprises universal caster wheels, and the caster wheels are connected with the leg simulation device.
Preferably, the speakers are two-channel speakers, and the two-channel speakers are disposed on both sides of the head simulation apparatus.
Preferably, the display comprises a first display and a second display, the first display being disposed on a surface of the head simulator;
the second display is arranged on the surface of the trunk shell;
the first display and the second display are respectively connected with the main control device;
the second display is used for displaying the action prompt information to the trainer.
Preferably, the analyzing the pressing motion of the trainer based on the gesture image to obtain the corresponding motion prompt information includes:
acquiring feature information contained in the gesture image;
comparing the characteristic information with the characteristic information of a pre-stored gesture standard image, and judging the type of errors existing in the pressing action of the trainer;
and acquiring prestored corresponding action prompt information according to the error type.
The error types include pressing with one hand, improper overlap of the two hands, and the like.
Preferably, the acquiring feature information included in the gesture image includes:
performing graying processing on the gesture image to obtain a gesture grayscale image;
performing first optimization processing on the gesture gray level image to obtain a first optimized image;
performing image enhancement processing on the first optimized image to obtain a detail enhanced image;
performing second optimization processing on the detail enhanced image to obtain a second optimized image;
and obtaining the characteristic information contained in the second optimized image by using an LBP characteristic extraction algorithm.
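The final step names an LBP feature extraction algorithm. A minimal 8-neighbour LBP sketch is given below; this is the basic variant, not necessarily the exact one used by the invention.

```python
import numpy as np

def lbp_8neighbor(gray: np.ndarray) -> np.ndarray:
    """Basic 8-neighbour Local Binary Pattern: each interior pixel gets an
    8-bit code, one bit per neighbour whose value is >= the centre pixel."""
    g = gray.astype(np.int32)
    c = g[1:-1, 1:-1]
    # neighbour offsets, clockwise from top-left
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(offsets):
        nb = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        code |= ((nb >= c).astype(np.int32) << bit)
    return code

# The feature vector is then typically a histogram of the codes:
# hist = np.bincount(lbp_8neighbor(img).ravel(), minlength=256)
```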
In the prior art, denoising is generally required before feature extraction, and may involve a single pass or multiple passes. However, no enhancement step is placed between adjacent denoising passes, so the image becomes increasingly smooth as the number of passes grows and ever more detail is lost. The invention therefore inserts an image enhancement step between the two optimization processes: after enhancement, detail such as edge information in the first optimized image is strengthened, which largely offsets the damage the subsequent optimization would do to that detail. Noise in the gesture image is thus removed effectively while its detail information is effectively retained.
Preferably, performing the first optimization processing on the gesture grayscale image to obtain the first optimized image comprises carrying out the optimization in batches:
Store the non-edge pixel points contained in the gesture grayscale image in a set totlU.
First batch of optimization processing:
Store the pixel points of totlU whose gray value falls in the highest gray interval, determined by gma, gmi and cstcf, in the first optimization processing set optcu_1; here gma and gmi denote the maximum and minimum gray values in the gesture grayscale image, and cstcf is a preset constant parameter.
For each pixel point optc_1 in optcu_1, judge by a preset judgment algorithm whether it is a noise point; if so, denoise the gesture grayscale image with a preset noise reduction algorithm, obtaining the first batch optimized image optimg_1.
Delete the pixel points contained in optcu_1 from totlU, obtaining the set of unprocessed pixel points totlU_1.
Optimization processing of the nth batch, where n ≥ 2:
Denote the set of unprocessed pixel points remaining after the (n-1)th batch as totlU_{n-1}.
Store the pixel points of totlU_{n-1} whose gray value falls in the next-highest gray interval in the nth batch optimization processing set optcu_n.
For each pixel point optc_n in optcu_n, judge by the preset judgment algorithm whether it is a noise point; if so, denoise the (n-1)th batch optimized image optimg_{n-1} with the preset noise reduction algorithm, obtaining the nth batch optimized image optimg_n.
Delete the pixel points contained in optcu_n from totlU_{n-1}, obtaining the set of unprocessed pixel points totlU_n.
Here n ∈ [2, N], where N is the total number of batches.
Take the Nth batch optimized image optimg_N as the first optimized image.
Carrying out the first optimization only on non-edge pixel points avoids the loss of important detail that optimizing edge pixels would cause. The pixel points in totlU are optimized in batches, and each batch operates on the optimized image optimg_{n-1} produced by the previous batch. Because a higher gray value means a higher probability of being a noise point, the high-gray pixel points are optimized first; subsequent batches then build on optimg_{n-1}, so the correct optimization of the high-gray pixels propagates batch by batch into the later optimization, i.e. the pixel points of optimg_{n-1} influence the optimization of optc_n, achieving retention and diffusion of detail information. This improves the accuracy of the optimization of the gesture grayscale image while avoiding detail loss as far as possible. A noise point is generally an extreme point with a comparatively large pixel value, but an extreme point is not necessarily a noise point; the invention therefore first performs noise-point judgment on the pixel points in optcu_n and only then applies noise reduction.
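The batch scheduling can be sketched as follows. Partitioning the gray range into fixed-width bands of width cstcf and processing the highest band first is an assumed reading of the text, since the exact selection expression is not reproduced here; the sketch also operates on gray values rather than pixel coordinates for brevity.

```python
def schedule_batches(gray_values, gma, gmi, cstcf):
    """Partition pixel gray values into batches, highest band first.
    Band width cstcf is an assumed interpretation of the patent's
    preset constant parameter."""
    batches = []
    hi = gma
    while hi >= gmi:
        lo = max(hi - cstcf + 1, gmi)          # lower edge of this band
        batch = [v for v in gray_values if lo <= v <= hi]
        if batch:
            batches.append(batch)
        hi = lo - 1                            # move down to the next band
    return batches
```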
Preferably, judging whether a pixel point is a noise point by the preset judgment algorithm comprises:
For a pixel point optc_1 in optcu_1, judge whether it is a noise point by:
scniopix(optc_1) = w_1 × judma(optc_1) + w_2 × judnum(optc_1) + w_3 × dif(optc_1)
where scniopix(optc_1) is the noise point index of optc_1; w_1, w_2 and w_3 are preset weight coefficients; judma(optc_1) is the maximum-value judgment function, equal to qs_1 if optc_1 is the pixel point with the largest pixel value in the c × c window centred on it and -1 otherwise; judnum(optc_1) is the count judgment function, equal to qs_2 if the number of other pixel points in the window with a pixel value equal to that of optc_1 is 0 and -1 otherwise; dif(optc_1) is the difference between optc_1 and the mean pixel value of the window; and qs_1 and qs_2 are preset first and second constant coefficients.
If scniopix(optc_1) is greater than the first batch judgment threshold thre_1, optc_1 is a noise point.
For a pixel point optc_n in optcu_n, judge whether it is a noise point by:
scniopix(optc_n) = w_1 × judma(optc_n) + w_2 × judnum(optc_n) + w_3 × dif(optc_n)
where the functions are defined as above with optc_n in place of optc_1.
If scniopix(optc_n) is greater than the nth batch judgment threshold thre_n, optc_n is a noise point, where
thre_n = thre_1 + n × qs_3
and qs_3 is a preset third constant coefficient.
The noise-point judgment considers not only whether the pixel point is the maximum within the window, but also how many pixels in the window tie with that maximum and how far the maximum pixel point lies from the window mean, which improves the accuracy of noise point detection.
Preferably, performing the second optimization processing on the detail-enhanced image to obtain the second optimized image comprises:
Perform a one-level wavelet decomposition of the detail-enhanced image to obtain the wavelet high-frequency coefficients and the wavelet low-frequency coefficients, and denote the total number of high-frequency coefficients by M.
Optimize the mth high-frequency coefficient ltbs_m, m ∈ [1, M], as follows:
if |ltbs_m| ≤ ht_1, ltbs_m is suppressed towards zero;
if ht_1 < |ltbs_m| < ht_2, ltbs_m is shrunk by a sign-preserving function built from the judgment function bv(ltbs_m), a control coefficient and the change-rate parameter sc;
if ht_2 ≤ |ltbs_m|, then zhltbs_m = ltbs_m, i.e. the coefficient is kept unchanged.
Here zhltbs_m is the optimized mth wavelet high-frequency coefficient; ht_1 and ht_2 are preset first and second selection parameters; bv(ltbs_m) is -1 if ltbs_m is less than 0, 0 if it equals 0, and 1 if it is greater than 0; the control coefficient and the change-rate parameter sc control the rate of change between zhltbs_m and ltbs_m.
Reconstruct the image from zhltbs_m and the wavelet low-frequency coefficients to obtain the second optimized image.
During the second optimization, the first and second selection parameters automatically select a suitable optimization function for each wavelet high-frequency coefficient, making the processing more targeted: the high-frequency coefficients are optimized accurately, the quality of the second optimized image improves, and more detail information is retained while image noise is effectively suppressed. Setting a control coefficient and a change-rate parameter in the optimization function also avoids over-optimization, further improving the accuracy of the second optimization processing.
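The piecewise rule for one coefficient can be sketched as below. The mid-band shrink is an assumed smooth, sign-preserving interpolation between the two thresholds, since the exact mid-band expression is not reproduced in this text.

```python
def threshold_coeff(ltbs, ht1, ht2, sc=1.0):
    """Piecewise optimisation of one wavelet high-frequency coefficient:
    small coefficients (likely noise) are zeroed, large ones kept, and
    the middle band uses an assumed sign-preserving smooth shrink whose
    steepness is controlled by the change-rate parameter sc."""
    a = abs(ltbs)
    if a <= ht1:
        return 0.0                 # suppress likely-noise coefficients
    if a >= ht2:
        return float(ltbs)         # keep strong detail unchanged
    sign = 1.0 if ltbs > 0 else -1.0
    # interpolate from 0 at |ltbs| = ht1 up to ht2 at |ltbs| = ht2
    t = (a - ht1) / (ht2 - ht1)
    return sign * ht2 * (t ** sc)
```

The image would then be reconstructed from the processed high-frequency coefficients together with the untouched low-frequency coefficients.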
Preferably, the preset noise reduction algorithm includes a non-local mean noise reduction algorithm.
Preferably, the main control device controls the camera to acquire the gesture image as follows:
judging whether an occluding object is present in front of the camera;
if an occluding object is present in front of the camera, judging through a skin colour detection model whether it is a palm;
if no occluding object is present in front of the camera, entering the next acquisition cycle;
if the occluding object is a palm, photographing it to obtain the gesture image of the trainer pressing the pressing simulation device;
if the occluding object is not a palm, entering the next acquisition cycle.
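The palm check above can be sketched with a simple skin-colour rule. The YCrCb Cr/Cb ranges (133-173, 77-127) and the skin-fraction threshold are common literature defaults, assumed here; the patent does not specify its skin colour detection model.

```python
import numpy as np

def looks_like_palm(cr, cb, skin_fraction=0.4):
    """Decide whether an occluding object is a palm: the fraction of
    pixels falling in a typical YCrCb skin range must exceed a threshold.
    cr, cb: the Cr and Cb channel arrays of the camera frame."""
    skin = (cr >= 133) & (cr <= 173) & (cb >= 77) & (cb <= 127)
    return skin.mean() >= skin_fraction

def acquisition_step(occluded, cr, cb, capture):
    """One cycle of the acquisition loop described above."""
    if occluded and looks_like_palm(cr, cb):
        return capture()   # shoot the gesture image
    return None            # wait for the next acquisition cycle
```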
For example, when it is determined that the type of error in the pressing motion of the trainer is one-hand pressing, the trainer is prompted through the second display and the speaker to stretch out both hands to perform the pressing motion.
Preferably, analysing the pressing action of the trainer based on the force data and obtaining corresponding action prompt information comprises:
acquiring the peaks and troughs of the force data;
calculating the force of the trainer's pressing action from those peaks and troughs;
judging whether the force is greater than a preset force threshold;
if not, generating action prompt information prompting the trainer to increase the pressing force.
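A minimal sketch of the force analysis: find peaks and troughs of the cyclic signal, take the amplitude as peak minus trough, and compare it with a threshold. The threshold value and the exact amplitude definition are assumed examples, not values from the patent.

```python
def analyse_force(samples, force_threshold=300.0):
    """Find peaks and troughs of a cyclic force signal and check whether
    the compression amplitude (peak minus trough) reaches the threshold.
    Returns a prompt string, or None if no prompt is needed."""
    peaks = [samples[i] for i in range(1, len(samples) - 1)
             if samples[i - 1] < samples[i] > samples[i + 1]]
    troughs = [samples[i] for i in range(1, len(samples) - 1)
               if samples[i - 1] > samples[i] < samples[i + 1]]
    if not peaks:
        return None
    amplitude = max(peaks) - (min(troughs) if troughs else min(samples))
    if amplitude < force_threshold:
        return "Increase the pressing force"
    return None
```

The depth analysis described next follows the same peak/trough pattern with a depth threshold.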
Preferably, the analyzing the pressing action of the trainer based on the depth data and obtaining the corresponding action prompt information comprises:
acquiring peaks and troughs of the depth data;
calculating a depth of a compression action of the trainer based on peaks and troughs of the depth data;
judging whether the depth is greater than a preset depth threshold;
if not, generating action prompt information prompting the trainer to increase the pressing depth.
Preferably, the main control device is further configured to obtain the pressing frequency of the pressing action from the time interval between the trough times of the depth data or force data generated by two adjacent pressing actions;
if the frequency is greater than the upper bound of a preset frequency threshold interval, generating action prompt information prompting the trainer to reduce the pressing frequency;
and if the frequency is less than the lower bound of the preset frequency threshold interval, generating action prompt information prompting the trainer to increase the pressing frequency.
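The frequency check above can be sketched as follows. The 100-120 compressions-per-minute interval is an assumed stand-in based on common CPR guidance; the patent itself only refers to a preset frequency threshold interval.

```python
def compression_rate(trough_times_s):
    """Compressions per minute, from the timestamps (in seconds) of the
    troughs produced by successive pressing actions."""
    if len(trough_times_s) < 2:
        return 0.0
    intervals = [b - a for a, b in zip(trough_times_s, trough_times_s[1:])]
    return 60.0 / (sum(intervals) / len(intervals))

def rate_prompt(rate_per_min, low=100.0, high=120.0):
    """Map a rate to a prompt; [low, high] is an assumed stand-in for the
    patent's preset frequency threshold interval."""
    if rate_per_min > high:
        return "reduce the pressing frequency"
    if rate_per_min < low:
        return "increase the pressing frequency"
    return None
```

For example, troughs at 0.5 s spacing correspond to 120 compressions per minute, which sits at the top of the assumed interval and produces no prompt.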
The invention analyzes the training data generated when a trainer presses the pressing simulation device, such as force data, depth data and gesture images, and gives the trainer corresponding action prompt information, thereby automatically prompting correction of wrong actions. The invention also enables automatic answers to the trainer's voice questions. Meanwhile, the training data can be transmitted to the cloud server, so that the trainer can conveniently check the training results at any time and any place.
The invention realizes real-time gesture recognition, depth, force and frequency detection of the pressing action, and comprehensive multi-sensor analysis, thereby realizing efficient teaching.
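The gesture feature step relies on an LBP feature extraction algorithm (see claim 1). A minimal pure-Python sketch of the basic 8-neighbour, radius-1 LBP variant follows; the choice of that particular variant is an assumption, since the patent does not specify which LBP form is used.

```python
def lbp_codes(gray):
    """Basic 8-neighbour, radius-1 LBP code for each interior pixel.

    gray: 2-D list of grayscale values. Each neighbour >= centre sets one
    bit of the 8-bit code, clockwise from the top-left neighbour."""
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = len(gray), len(gray[0])
    codes = []
    for i in range(1, h - 1):
        row = []
        for j in range(1, w - 1):
            center = gray[i][j]
            code = 0
            for bit, (di, dj) in enumerate(offsets):
                if gray[i + di][j + dj] >= center:
                    code |= 1 << bit
            row.append(code)
        codes.append(row)
    return codes

def lbp_histogram(gray):
    """Normalized 256-bin histogram of LBP codes, usable as the feature
    vector that is compared against the pre-stored gesture standard image."""
    hist = [0.0] * 256
    flat = [c for row in lbp_codes(gray) for c in row]
    for c in flat:
        hist[c] += 1.0
    total = len(flat)
    return [h / total for h in hist]
```

Comparing two such histograms (e.g. by chi-square or histogram intersection) would correspond to the feature-comparison step in claim 1, though the patent does not name a specific distance measure.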
While embodiments of the invention have been shown and described, it will be understood by those skilled in the art that: various changes, modifications, substitutions and alterations can be made to the embodiments without departing from the principles and spirit of the invention, the scope of which is defined by the claims and their equivalents.
Claims (7)
1. A cardiopulmonary resuscitation teaching machine, characterized by comprising a trunk simulation device, a main control device, a display and a camera;
the trunk simulation device comprises a trunk shell, a pressing simulation device and a sensor device;
the sensor device, the display and the camera are respectively connected with the main control device;
the sensor device is used for acquiring force data and depth data generated when a trainer presses the pressing simulation device and transmitting the force data and the depth data to the main control device;
the camera is used for acquiring a gesture image when a trainer presses the pressing simulation device and transmitting the gesture image to the main control device;
the main control device is used for analyzing the pressing action of the trainer based on the force data, the depth data or the gesture image and obtaining corresponding action prompt information;
the display is used for displaying the action prompt information to the trainer;
wherein the analyzing the pressing action of the trainer based on the gesture image and obtaining corresponding action prompt information comprises:
acquiring feature information contained in the gesture image;
comparing the characteristic information with the characteristic information of a pre-stored gesture standard image, and judging the type of errors existing in the pressing action of the trainer;
acquiring prestored corresponding action prompt information according to the type of the error;
the acquiring the feature information contained in the gesture image comprises:
performing graying processing on the gesture image to obtain a gesture grayscale image;
performing first optimization processing on the gesture grayscale image to obtain a first optimized image;
performing image enhancement processing on the first optimized image to obtain a detail-enhanced image;
performing second optimization processing on the detail-enhanced image to obtain a second optimized image;
obtaining the feature information contained in the second optimized image by using an LBP feature extraction algorithm;
the performing first optimization processing on the gesture grayscale image to obtain a first optimized image comprises:
performing the first optimization processing on the gesture grayscale image in a batch optimization processing mode:
storing the non-edge pixel points contained in the gesture grayscale image into a set totlU;
the first batch of optimization processing:
storing the pixel points in totlU whose gray value falls in a preset interval, determined by gma and gmi (the maximum and minimum gray values in the gesture grayscale image, respectively) and the preset constant-type parameter cstcf, into a first optimization processing set optcu_1;
for each pixel point optc_1 in optcu_1, judging through a preset judgment algorithm whether it is a noise point; if so, performing noise reduction on the gesture grayscale image with a preset noise reduction algorithm to obtain a first batch of optimized images optimg_1;
deleting the pixel points contained in optcu_1 from totlU to obtain a set totlU_1 of unprocessed pixel points;
the n-th batch of optimization processing, with n ≥ 2:
recording the set of unprocessed pixel points obtained after the (n-1)-th batch of optimization processing as totlU_(n-1);
storing the pixel points in totlU_(n-1) whose gray value falls in the preset interval for the n-th batch into an n-th optimization processing set optcu_n;
for each pixel point optc_n in optcu_n, judging through the preset judgment algorithm whether it is a noise point; if so, performing noise reduction on the (n-1)-th batch of optimized images optimg_(n-1) with the preset noise reduction algorithm to obtain the n-th batch of optimized images optimg_n;
deleting the pixel points contained in optcu_n from totlU_(n-1) to obtain a set totlU_n of unprocessed pixel points;
wherein n ∈ [2, N], N being determined by a preset formula;
taking the N-th batch of optimized images optimg_N as the first optimized image.
2. The cardiopulmonary resuscitation teaching machine of claim 1, wherein the pressing simulation device comprises a spring, a steel plate, a steel base, and a simulated human skin covering the outside of the trunk shell, the spring, the steel plate, and the steel base being disposed inside the trunk shell;
the steel base is arranged at the bottom of the trunk shell;
one end of the spring is connected with the steel base, and the other end of the spring is connected with the steel plate;
the steel plate is connected with the top of the trunk shell.
3. The cardiopulmonary resuscitation teaching machine of claim 2, wherein the sensor device comprises a depth detection sensor and a force detection sensor;
the depth detection sensor is used for acquiring the falling depth of the steel plate when the trainer presses the pressing simulation device;
the force detection sensor is used for acquiring the pressure on the steel plate when the trainer presses the pressing simulation device.
4. The cardiopulmonary resuscitation teaching machine of claim 3, wherein the sensor device further comprises an MCU;
the depth detection sensor and the force detection sensor are respectively connected with the MCU;
the depth detection sensor and the force detection sensor are respectively used for transmitting the descending depth and the pressure to the MCU;
and the MCU is used for transmitting the descending depth and the pressure to the main control device through an RS232 communication serial port.
5. The cardiopulmonary resuscitation teaching machine of claim 1, further comprising a microphone, a speaker, a wireless communication device, and a cloud server;
the microphone, the speaker and the wireless communication device are respectively connected with the main control device;
the main control device is further used for sending the force data, the depth data and the gesture image to the cloud server through the wireless communication device;
the microphone is used for acquiring the question voice of the trainer and transmitting the question voice to the main control device;
the main control device is further used for transmitting the question voice to the cloud server and receiving answer information returned from the cloud server;
the speaker is used for playing the answer information to the trainer.
6. The machine of claim 5, wherein the speaker is further configured to play the action prompt message to the trainer.
7. The cardiopulmonary resuscitation teaching machine of claim 5, wherein the microphone, the speaker and the wireless communication device are respectively connected to the main control device through USB communication interfaces.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110522407.0A CN113240964B (en) | 2021-05-13 | 2021-05-13 | Cardiopulmonary resuscitation teaching machine |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113240964A CN113240964A (en) | 2021-08-10 |
CN113240964B true CN113240964B (en) | 2023-03-31 |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113888944B (en) * | 2021-11-01 | 2023-03-31 | 郑州大学第一附属医院 | Cardiopulmonary resuscitation simulation training system and method |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101252884A (en) * | 2005-09-05 | 2008-08-27 | 柯尼卡美能达医疗印刷器材株式会社 | Image processing method and image processing device |
CN106780453A (en) * | 2016-12-07 | 2017-05-31 | 电子科技大学 | A kind of method realized based on depth trust network to brain tumor segmentation |
CN106846403A (en) * | 2017-01-04 | 2017-06-13 | 北京未动科技有限公司 | The method of hand positioning, device and smart machine in a kind of three dimensions |
CN107948520A (en) * | 2017-11-30 | 2018-04-20 | 广东欧珀移动通信有限公司 | Image processing method and device |
CN108028687A (en) * | 2015-09-01 | 2018-05-11 | 高通股份有限公司 | Optimize multiple-input and multiple-output operation |
Family Cites Families (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2011011633A2 (en) * | 2009-07-22 | 2011-01-27 | Atreo Medical, Inc. | Optical techniques for the measurement of chest compression depth and other parameters during cpr |
CN101964108B (en) * | 2010-09-10 | 2013-01-23 | 中国农业大学 | Real-time on-line system-based field leaf image edge extraction method and system |
CN102982511B (en) * | 2012-09-17 | 2015-09-09 | 中国人民解放军理工大学气象学院 | A kind of image intelligent optimized treatment method |
TWI508034B (en) * | 2014-01-08 | 2015-11-11 | Ind Tech Res Inst | Cpr teaching system and method |
JP2015180045A (en) * | 2014-02-26 | 2015-10-08 | キヤノン株式会社 | image processing apparatus, image processing method and program |
US9754377B2 (en) * | 2014-08-15 | 2017-09-05 | Illinois Institute Of Technology | Multi-resolution depth estimation using modified census transform for advanced driver assistance systems |
CN205881286U (en) * | 2016-03-25 | 2017-01-11 | 中山大学孙逸仙纪念医院 | Cardiopulmonary resuscitation simulates training system |
CN206441443U (en) * | 2016-11-30 | 2017-08-25 | 北京德美瑞医疗设备有限公司 | One kind visualization model for training on cardio-pulmonary resuscitation |
US20190019272A1 (en) * | 2017-07-13 | 2019-01-17 | Qualcomm Incorporated | Noise reduction for digital images |
CN209859455U (en) * | 2019-04-11 | 2019-12-27 | 西安交通大学医学院第一附属医院 | Cardiopulmonary resuscitation simulation training teaching model |
CN110782413B (en) * | 2019-10-30 | 2022-12-06 | 北京金山云网络技术有限公司 | Image processing method, device, equipment and storage medium |
CN111027395A (en) * | 2019-11-13 | 2020-04-17 | 珠海亿智电子科技有限公司 | Gesture recognition method and device, terminal equipment and computer readable storage medium |
CN111091732B (en) * | 2019-12-25 | 2022-05-27 | 塔普翊海(上海)智能科技有限公司 | Cardiopulmonary resuscitation (CPR) instructor based on AR technology and guiding method |
CN111653169A (en) * | 2020-07-20 | 2020-09-11 | 向心引力(深圳)科技有限公司 | Cardio-pulmonary resuscitation training and first-aid integrated machine and training method thereof |
CN111862758A (en) * | 2020-09-02 | 2020-10-30 | 思迈(青岛)防护科技有限公司 | Cardio-pulmonary resuscitation training and checking system and method based on artificial intelligence |
CN111899593A (en) * | 2020-09-21 | 2020-11-06 | 林碧琴 | Intelligent recognition comparison training system |
Non-Patent Citations (2)
Title |
---|
Zhang Juan. Real-time sharpening of degraded CCTV images based on contrast enhancement. China Master's Theses Full-text Database (Information Science and Technology), 2013, No. 4, full text. *
Du Xiaogang. Research on key technologies of medical image registration in image-guided radiotherapy. China Doctoral Dissertations Full-text Database (Information Science and Technology), 2019, No. 1, full text. *
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||