CN114140749A - Chest postoperative respiratory recovery monitoring devices - Google Patents
- Publication number
- CN114140749A (publication); CN202111480446.5A (application)
- Authority
- CN
- China
- Prior art keywords
- patient
- image
- images
- risk
- cameras
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/08—Detecting, measuring or recording devices for evaluating the respiratory organs
- A61B5/0816—Measuring devices for examining respiratory frequency
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H30/00—ICT specially adapted for the handling or processing of medical images
- G16H30/20—ICT specially adapted for the handling or processing of medical images for handling medical images, e.g. DICOM, HL7 or PACS
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H50/00—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
- G16H50/30—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for calculating health indices; for individual health risk assessment
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H80/00—ICT specially adapted for facilitating communication between medical practitioners or patients, e.g. for collaborative diagnosis, therapy or health monitoring
Landscapes
- Health & Medical Sciences (AREA)
- Engineering & Computer Science (AREA)
- Medical Informatics (AREA)
- Public Health (AREA)
- General Health & Medical Sciences (AREA)
- Biomedical Technology (AREA)
- Life Sciences & Earth Sciences (AREA)
- Pathology (AREA)
- Epidemiology (AREA)
- Primary Health Care (AREA)
- Pulmonology (AREA)
- Physiology (AREA)
- Databases & Information Systems (AREA)
- Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Biophysics (AREA)
- Heart & Thoracic Surgery (AREA)
- Molecular Biology (AREA)
- Surgery (AREA)
- Animal Behavior & Ethology (AREA)
- Veterinary Medicine (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Radiology & Medical Imaging (AREA)
- Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
Abstract
The invention relates to a device for monitoring postoperative respiratory recovery of the chest. It acquires images of the patient's breathing state through visual sensors, preprocesses the images with multi-scale templates, and describes the breathing state with several high-dimensional characteristic variables, so that image features at multiple scales can be recognized and the robustness of risk recognition improved. The device is particularly suited to accurately monitoring the breathing state of a bedridden patient after open-chest surgery.
Description
Technical Field
The invention belongs to the field of medical instruments, and particularly relates to a device for monitoring respiratory recovery after chest surgery.
Background
With the rapid development of artificial intelligence, intelligent algorithms play an increasingly important role in medical diagnosis. Thoracotomy is currently an important means of treating many conditions, such as progressive intrathoracic hemorrhage, tracheal and bronchial injury, esophageal rupture, combined thoraco-abdominal injury, injury to the heart and great vessels, and severe lung laceration. Although open-chest surgery can effectively relieve symptoms, the operation itself is highly traumatic and easily increases the risk of postoperative complications such as pulmonary edema and retained sputum. Without timely and effective intervention, the patient's respiratory function and physical performance are greatly affected. Open-chest surgery is a common clinical treatment, widely used for heart disease, lung disease, and other conditions; beyond treating the underlying disease, however, the trauma of the operation can substantially affect the patient's respiratory function. Monitoring the patient's breathing after the operation is therefore an important clinical task: by monitoring the breathing state, the treatment and recovery of the patient can be evaluated, and abnormalities can be reported automatically so that medical measures are taken in time. After surgery, however, the patient is usually confined to bed, and clothing differs greatly from daily life.
In particular, some patients must be observed in an ICU, where various tubes such as tracheal and gastric tubes are inserted; these also interfere with judgment. Moreover, postoperative patients breathe weakly and their bodies move only slightly. All of this limits accurate monitoring of the patient's breathing.
Medical monitoring based on visual sensors has the advantages of being contactless, non-invasive, and easy to use, so the prior art includes techniques for detecting breathing with image processing. However, these methods generally need to identify contour information, so the processing algorithms are unusually complex and insufficiently accurate. For example, complex processing is usually required, including denoising and enhancing each image frame, extracting contour features, and calculating the breathing frequency from how those features change over time. This is not only time consuming but also susceptible to interference from the surrounding environment and from clothing. There is also a method (e.g., application No. 202010486719) that combines radar with vision; although it can improve accuracy to some extent, it too is easily disturbed by clothing and the like. On this basis, recognition with neural networks has been proposed, but such learning models are complex and sometimes even require dimensionality reduction of the data to reduce the amount of computation, which raises the technical problem of limited recognition accuracy. After open-chest surgery in particular, the patient breathes weakly, the ICU stay is long, the body posture is stiff, the various tubes cause greater interference, and the correlation between chest contour change and breathing movement is reduced. None of the above prior art solves these problems.
Therefore, a device dedicated to monitoring respiration after thoracotomy is needed, one that can make full use of the available image information and forecast the patient's respiratory risk efficiently, quickly, and accurately.
Disclosure of Invention
The application provides a device for monitoring postoperative respiratory recovery of the chest. It monitors the patient's preoperative breathing state through visual sensors, collects normal-state data, and trains a model on those data; after chest surgery, the same equipment monitors the patient, collects postoperative breathing-state data, evaluates them with the previously learned model, and outputs a risk assessment estimate. In the preprocessing stage the images are filtered with multi-scale templates, so that image features at multiple scales can be identified and the robustness of risk identification improved.
A chest postoperative respiratory recovery monitoring device comprises two cameras and a server;
wherein the two cameras are fixed on a rod parallel to the direction of the patient's upper body and are mounted one after the other along the body direction; they are used to acquire images including the patient's face and chest;
a processor in each camera preprocesses the acquired images, wherein the preprocessing comprises filtering each image with three templates A, B, C of different sizes to obtain filtered images I_t^A, I_t^B, I_t^C for the first camera and J_t^A, J_t^B, J_t^C for the second camera;
The following operations are carried out in the server:
apparent-feature filtering is performed on the images I_t^A, I_t^B, I_t^C to obtain the multi-scale apparent features α, β, γ; parallax-feature filtering is performed on the images I_t^A, I_t^B, I_t^C and J_t^A, J_t^B, J_t^C to obtain the multi-scale parallax features μ, π, ρ;
the 6 high-dimensional characteristic variables Y_s are associated with the multi-scale apparent features α, β, γ and the multi-scale parallax features μ, π, ρ through the excitation function σ(x);
establishing a learning model, and learning and determining the model coefficients from actually collected image samples, with the following cost function for the learning process:
where ẑ is the estimated value of the risk assessment variable and z its real value, z = 0 or z = 1, corresponding respectively to no respiratory risk being monitored and a respiratory risk being present in the currently input time-sequence images; θ is a control parameter; and N is the total number of different time-sequence image sets acquired.
The following operations are also performed in the server: processing several images of the patient acquired after the patient's operation, and calculating a risk assessment result z.
When the value of z is greater than 0.6, the patient is considered to be at risk of respiratory abnormality, and the server reports this to the alarm device.
The two cameras can fully capture the patient's face and upper chest, and the parallax of the patient's face between the two cameras is greater than 1/20 of the imaging range.
The optical axes of the cameras are kept parallel, and the lens parameters and imaging-sensor parameters of the two cameras are kept identical; the two cameras capture images synchronously at a fixed frame rate F, preferably F = 10 frames per second.
When the risk of abnormal breathing of the patient exists at present, the server writes the data into the electronic medical record.
The number of frames T in each acquired image sequence satisfies T ≥ 5F.
The alarm device is a hand-held terminal or a display positioned in the ward.
The camera is connected with the server through a communication network.
The invention has the following advantages:
1. The abnormal breathing state of the patient is monitored with a unique learning model. The high-dimensional features used by the model include not only time-sequence features reflecting the patient's breathing frequency but also information such as the patient's facial features, so the breathing state can be monitored across a wider range of dimensions and the risks it reflects identified more specifically.
2. The model automatically generates high-dimensional characteristic data reflecting the respiratory state of the patient by using the images acquired by the visual sensor, wherein the high-dimensional characteristic data comprises multi-scale apparent characteristics, parallax characteristics and time sequence characteristics of the visual data, and the high-dimensional characteristic data is learned to establish the respiratory state monitoring model of the patient.
3. The unique patient respiratory state monitoring model is utilized to evaluate the risk of the respiratory state of the patient in real time from multiple dimensions of time and space, and a doctor is reminded to manually pay attention to and intervene in the patient when the risk occurs, so that the manual workload is greatly reduced; the abnormal risk of the respiratory state of the patient can be found in time, and the automation degree and the response speed of respiration recovery monitoring after the thoracic operation are improved.
4. By using the optimized preprocessing templates, image noise can be filtered better while the local continuity of the image is preserved, reducing the burden on subsequent algorithms and improving the accuracy of feature identification. A special excitation function and cost function are also designed, ensuring the model's applicability to a variety of medical environments and enabling efficient, accurate early warning.
Drawings
Fig. 1 is a schematic diagram of a front-end acquisition device deployment.
Detailed Description
S1: image acquisition of patient breathing
Camera equipment is installed directly in front of the patient to be monitored; the cameras collect time-sequenced visual images and transmit them to a background server in real time. The specific method is described below.
S1.1 Acquiring frontal images of the patient from binocular viewpoints. While the patient breathes naturally in a basically stable state, a group of two cameras is arranged frontally toward the patient so that the face and upper chest are fully captured. The two cameras are fixed on a rod parallel to the direction of the patient's upper body and mounted one after the other along the body direction (Fig. 1), spaced about 10 centimeters apart. The actual spacing can be fine-tuned to the environment and the patient's body size, provided that both cameras still capture the face and upper chest completely and the parallax of the patient's face between the two cameras exceeds 1/20 of the imaging range; this ensures image quality at a small data volume and provides a basis for subsequent processing. The optical axes of the cameras are kept parallel, and the lens and imaging-sensor parameters of the two cameras are identical. The cameras are numbered camera I and camera J. They capture images synchronously at a fixed frame rate F, and the captured images are numbered in time order. Preferably, F is 10 frames per second.
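The 1/20-of-imaging-range parallax requirement can be sanity-checked with a simple pinhole-stereo sketch. Only the ~10 cm baseline comes from the description; the focal length, depth, and image width below are illustrative assumptions:

```python
def disparity_pixels(baseline_m, focal_px, depth_m):
    """Pinhole-stereo disparity in pixels: d = f * B / Z."""
    return focal_px * baseline_m / depth_m

def placement_ok(baseline_m, focal_px, depth_m, image_width_px):
    """Check the text's rule of thumb: the facial parallax between the
    two cameras should exceed 1/20 of the imaging range (taken here as
    the image width)."""
    return disparity_pixels(baseline_m, focal_px, depth_m) > image_width_px / 20.0

# With the described ~10 cm baseline, a hypothetical 1000 px focal
# length, and a patient 1.5 m away, the rule holds at 1280 px width.
```

Since disparity shrinks linearly with depth, moving the rig closer or widening the baseline are the two levers for satisfying the rule, which is why the text allows the spacing to be fine-tuned on site.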
S1.2 After either camera collects a frame, it is preprocessed as follows; each preprocessed image yields 3 corresponding preprocessing results.
A, B, C are defined as three preset template windows:
The size of window A is 3 × 3, window B is twice the size of window A, and window C is twice the size of window B. The three windows extract image features at different scales, so that apparent features from the micro (small-range) to the macro (large-range) level are better identified and the robustness of risk identification improved. The window values follow this rule: if a coordinate lies within 1/3 of the window center in both directions, the weight is 1; within 1/3 of the center in only one direction, 0.6; on the window edge in one direction while satisfying neither of the preceding conditions, 0.1; otherwise, 0.4. The weights at the different coordinates of a window thus decrease with distance from the window center, approximating a Gaussian filter, but remain constant within each band, which further preserves the spatial characteristics of the image and improves the accuracy of subsequent model learning.
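The weighting rule above can be sketched as follows. The banding interpretation — per-axis distance measured against 1/3 of the half-width — is an assumption, since the text gives only a prose rule:

```python
import numpy as np

def make_template(size):
    """Build one preprocessing template per the text's weighting rule.

    Assumed interpretation: weight 1.0 if the cell is within 1/3 of the
    half-width of the window center on both axes, 0.6 if on exactly one
    axis, 0.1 if it lies on the window edge (and neither of the above
    applies), and 0.4 otherwise.
    """
    c = (size - 1) / 2.0
    w = np.empty((size, size))
    for i in range(size):
        for j in range(size):
            near_i = abs(i - c) <= c / 3.0
            near_j = abs(j - c) <= c / 3.0
            on_edge = i in (0, size - 1) or j in (0, size - 1)
            if near_i and near_j:
                w[i, j] = 1.0
            elif near_i or near_j:
                w[i, j] = 0.6
            elif on_edge:
                w[i, j] = 0.1
            else:
                w[i, j] = 0.4
    return w

A = make_template(3)    # 3 x 3
B = make_template(6)    # twice the size of A
C = make_template(12)   # twice the size of B
```

Under this reading, window A comes out as center 1.0, edge-midpoints 0.6, corners 0.1 — the Gaussian-like, banded profile the text describes.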
For an image I_t captured by camera I at time t, three images I_t^A, I_t^B, I_t^C are obtained after filtering with the A, B, C windows; similarly, for an image J_t captured by camera J at time t, three images J_t^A, J_t^B, J_t^C are obtained. The A-window filtering is
I_t^A(u, v) = Σ_p Σ_q A(p, q) · I_t(u + p, v + q)   (1)
where the two parameters in parentheses after an image identifier are the position coordinates of a pixel in that image: for example, I_t(u + p, v + q) is the pixel of image I_t at position (u + p, v + q), A(p, q) is the entry of window A at position (p, q), and I_t^A(u, v) is the pixel of image I_t^A at (u, v). The rest follow by analogy.
Here B(p, q), I_t(u + p, v + q), and J_t(u + p, v + q) have meanings analogous to formula (1), and Ĩ_t^B is the intermediate value obtained by filtering with window B:
Ĩ_t^B(u, v) = Σ_p Σ_q B(p, q) · I_t(u + p, v + q)   (2)
I_t^B(u, v) = max[ Ĩ_t^B(2u, 2v), Ĩ_t^B(2u + 1, 2v), Ĩ_t^B(2u, 2v + 1), Ĩ_t^B(2u + 1, 2v + 1) ]   (3)
where max[·] takes the maximum of the four elements in brackets. From these two formulas, I_t^B (and likewise J_t^B) is the intermediate result image reduced to 1/2 the scale of I_t or J_t, the reduction taking the maximum among the 4 corresponding pixels of the source image.
Similarly, C(p, q), I_t(u + p, v + q), and J_t(u + p, v + q) have meanings analogous to formula (1), and Ĩ_t^C is the intermediate value obtained by filtering with window C:
Ĩ_t^C(u, v) = Σ_p Σ_q C(p, q) · I_t(u + p, v + q)   (4)
I_t^C(u, v) = max[ Ĩ_t^C(4u + i, 4v + j) : i, j ∈ {0, 1, 2, 3} ]   (5)
where max[·] takes the maximum of the 16 elements in brackets. From these two formulas, I_t^C (and likewise J_t^C) is the intermediate result image reduced to 1/4 the scale of I_t or J_t, the reduction taking the maximum among the 16 corresponding pixels of the source image.
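The filter-then-downsample pipeline described by equations (1)–(5) can be sketched as below, assuming zero-padded correlation and non-overlapping max blocks, which matches the "maximum among the corresponding 4/16 pixels" description:

```python
import numpy as np

def filter_image(img, tmpl):
    """Correlate img with the template window (zero padding), keeping
    the image size, per equations (1), (2), and (4)."""
    th, tw = tmpl.shape
    ph, pw = th // 2, tw // 2
    padded = np.pad(img, ((ph, th - 1 - ph), (pw, tw - 1 - pw)))
    out = np.zeros_like(img, dtype=float)
    H, W = img.shape
    for p in range(th):
        for q in range(tw):
            out += tmpl[p, q] * padded[p:p + H, q:q + W]
    return out

def max_downsample(img, k):
    """Shrink img by factor k, taking the max over each k x k block,
    per equations (3) and (5)."""
    H2, W2 = img.shape[0] // k, img.shape[1] // k
    return img[:H2 * k, :W2 * k].reshape(H2, k, W2, k).max(axis=(1, 3))

def preprocess(img, A, B, C):
    """Per the text: A keeps full scale; B and C results are reduced
    to 1/2 and 1/4 scale by the max over 4 and 16 source pixels."""
    fa = filter_image(img, A)                      # full resolution
    fb = max_downsample(filter_image(img, B), 2)   # 1/2 scale
    fc = max_downsample(filter_image(img, C), 4)   # 1/4 scale
    return fa, fb, fc
```

Each camera frame therefore yields the three preprocessing results of step S1.2, at full, half, and quarter resolution.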
Compared with a classical Gaussian template, these templates were optimized over a large number of experiments; they filter image noise better while maintaining the local continuity of the image, reducing the burden on subsequent algorithms and improving the accuracy of feature identification.
S2: high-dimensional image feature generation
High-dimensional features reflecting the patient's breathing are generated from the preprocessed time-sequence image sequence of step 1, for building the respiration monitoring model and identifying risk.
The high-dimensional image features comprise multi-scale appearance features, parallax features and time sequence features of the visual image.
S2.1 To obtain the multi-scale apparent features, an image I_t collected by one of the two cameras of step 1 is first preprocessed by formulas (1)–(5) to give the three filtered images I_t^A, I_t^B, I_t^C.
Here I_t^A, I_t^B, I_t^C have the meanings given above; α, β, γ are the multi-scale feature windows; p, q are the coordinates of the two dimensions of a feature window, with value ranges as shown in formula (6); and the results represent the images after feature filtering. The multi-scale apparent features are α, β, γ as defined by the formula above.
S2.2 To obtain the multi-scale parallax features, the parallax images are computed on the basis of the above.
The image I_t collected by one of the two cameras of step 1 is preprocessed by formulas (1)–(5) to give three filtered images I_t^A, I_t^B, I_t^C; the image collected by the other camera is preprocessed to give three images J_t^A, J_t^B, J_t^C.
Here | · | denotes the absolute value, and I_t^A(u, v) and J_t^A(u, v) (and likewise for B and C) denote the values of the respective filtered images at coordinates (u, v).
Defining:
where the parallax images have the meanings described above; μ, π, ρ are multi-scale feature windows; p, q are the coordinates of the two dimensions of a feature window, with value ranges as shown in formula (8); and the results represent the parallax images after feature filtering. The multi-scale parallax features are μ, π, ρ as defined by formula (8) above.
S2.3 To obtain the multi-scale time-sequence features, take T images I_1, I_2, …, I_T collected consecutively in time order, compute the corresponding filtered images by the method of step 1, and then proceed as in steps S2.1 and S2.2. To generate data containing enough information, T ≥ 5F, where F is the image-acquisition frame rate of step 1; preferably F = 10 and T = 50, i.e., an acquisition period of 5 seconds. For clarity of presentation, let:
in the above formula, t represents a time sequence coordinate, namely the sequence of the collected image frames; moving t from the subscript to the right of the equation to the argument to the left of the equation aims to make the logical relationship between the different steps more intuitive and easy to express.
Further, let:
wherein, X1(u,v,t)、X2(u,v,t)、…、X6The meaning of (u, v, t) is as described above, Φ, Ψ, Ω are three-dimensional multi-scale feature windows, p, q, r are coordinates of three dimensions in the multi-scale feature windows, and the value range is as shown in formula (9). Y is1(u,v,t)、Y2(u,v,t)、…、Y6And (u, v, t) represents a calculation result of the time sequence of the images after being filtered by the three-dimensional multi-scale feature window. The multi-scale timing characteristics are phi, psi, omega as defined by the above formula. b1、b2、b3Is a bias parameter.
Where σ (x) is a non-linear function:
arctan x is the arctangent function; the parameter δ gives the function a discontinuous break at x = 0, which helps improve the learning effect of the model. The parameter ε is a control variable governing the convergence speed of the nonlinear function during subsequent learning; preferably, ε = 11.5 and δ = 0.009.
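One plausible reading of σ(x) — the exact formula is not reproduced in the source — adds δ as a sign-dependent offset to a scaled arctangent, which yields the described break at x = 0:

```python
import math

EPS = 11.5     # convergence-speed control (the text's preferred value)
DELTA = 0.009  # size of the jump at x = 0 (the text's preferred value)

def sigma(x):
    """Arctan-based excitation with a discontinuous break at x = 0.

    Assumed form: sigma(x) = arctan(EPS * x) + DELTA * sign(x).
    The sign-dependent DELTA term produces the discontinuity at x = 0
    that the text describes as helping the learning effect.
    """
    jump = DELTA if x > 0 else (-DELTA if x < 0 else 0.0)
    return math.atan(EPS * x) + jump
```

Under this assumption σ is odd, bounded near ±(π/2 + δ), and steep around the origin (slope scaled by ε), which is consistent with ε controlling convergence speed.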
Following the steps above, the multi-scale apparent features α, β, γ, the parallax features μ, π, ρ, and the time-sequence features Φ, Ψ, Ω of the visual images are obtained.
The multi-scale apparent features describe the static appearance of the patient in the real environment, such as facial color and expression. The parallax features describe the differences in that static appearance across viewing angles; these differences change considerably as the patient breathes and can therefore reflect the breathing state. The time-sequence features describe the breathing process itself, an important reference for the patient's breathing state. The invention creatively proposes these three kinds of features and combines them into high-dimensional features; experimental data demonstrate the method's effectiveness for monitoring the patient's breathing state and assessing risk.
S3: learning of patient respiratory monitoring models
A learning model is established, and the values of the high-dimensional image features of step 2 are learned and determined from data samples collected in reality.
The output of the patient respiratory-state monitoring model is a risk assessment variable z with value range [0, 1]. When the variable tends to 0, the patient's breathing state tends to the normal range; when it tends to 1, the breathing state tends to the abnormal range, the degree of risk is higher, and countermeasures should be taken.
Defining:
where s ∈ {1, 2, 3, 4, 5, 6} corresponds to the six outputs of formula (9) in step 2, and the value ranges of p, q, r depend on s: for s = 1 and s = 4, p and q range over the full dimensions of the acquired image; for s = 2 and s = 5, over 1/2 of those dimensions; for s = 3 and s = 6, over 1/4 of those dimensions. r ranges over [1, T], where T is the length of the acquired image sequence. Γ(s) is a weighted cumulative sum of Y_s(p, q, r), with weights defined by w(p, q, r, s); c_s is the linear bias parameter corresponding to Γ(s); and σ(x) is as defined in formula (10) of step 2.
By definition, Γ(s) contains 6 components. Let the risk assessment variable z take the value:
This completes the patient respiratory-state monitoring model: z is the weighted sum of the six components of Γ(s), with weights defined by τ(s) and linear bias parameter d.
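The Γ(s)-and-z structure can be sketched as follows. A plain logistic squashing stands in for the patent's σ (an assumption, since the exact formula is not reproduced), keeping z in [0, 1] as the model requires:

```python
import math

def _squash(x):
    """Logistic stand-in for the patent's excitation sigma (assumed)."""
    return 1.0 / (1.0 + math.exp(-x))

def gamma(Y, w, c):
    """Gamma(s): weighted cumulative sum of the flattened Y_s(p, q, r)
    values, plus the linear bias c_s, passed through the excitation."""
    total = sum(wi * yi for wi, yi in zip(w, Y)) + c
    return _squash(total)

def risk(gammas, taus, d):
    """z: weighted sum of the six Gamma(s) components with weights
    tau(s) and linear bias d, squashed into [0, 1]."""
    total = sum(t * g for t, g in zip(taus, gammas)) + d
    return _squash(total)
```

The learnable quantities are exactly those the text names: the weights w(p, q, r, s), the biases c_s, the weights τ(s), and d.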
The respiratory-state monitoring model is learned with existing methods such as back-propagation.
Following step 1, several groups of time-sequence image sequences are collected before the patient's operation as training samples, and whether the patient was at risk of breathing abnormality during the period of each sample is recorded manually. For each original captured image in a training sample, the corresponding filtered images are obtained by equations (1)–(5) and used as the input data of step 2.
With the input data obtained in steps 1 and 2, all features are initialized to 1 and the bias parameters b_1, b_2, b_3 to 0; formulas (5)–(9) then give the inputs Y_s(p, q, r), s ∈ {1, 2, 3, 4, 5, 6}, required by formula (11) of step 3.
With the input data Y_s(p, q, r) obtained in steps 2 and 3, the bias parameters c_s and d are initialized to 0, and the respiratory-state monitoring model of formulas (11) and (12) yields the estimated value ẑ of the risk assessment variable.
Since the feature values and bias parameters of the previous step are arbitrarily assigned initial values, the estimate ẑ will deviate considerably from the true value z of the risk assessment variable. Learning is the process of reducing this error. Let:
where the summation runs over the estimates ẑ of all learning samples, and the true value z of each sample is 0 or 1, corresponding respectively to no breathing risk being monitored and a breathing risk being present in the currently input time-sequence images. θ is a control parameter chosen according to the real distribution of the samples, with 0 < θ < 1; here θ = 0.15 is preferred. N is the total number of samples participating in the learning process, i.e., the total number of different time-sequence image sets acquired.
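The source omits the cost formula itself, so the sketch below shows only one plausible form consistent with the surrounding description: a squared error averaged over the N samples, with θ re-weighting the two classes according to their real distribution. Both the squared-error choice and the weighting scheme are assumptions:

```python
def cost(z_hat, z, theta=0.15):
    """Hypothetical learning cost (the source does not reproduce the
    formula): class-weighted squared error averaged over N samples.
    theta down-weights the majority class (z = 0, no risk), matching
    the text's note that theta follows the real sample distribution."""
    n = len(z)
    total = 0.0
    for zh, zt in zip(z_hat, z):
        wgt = theta if zt == 0 else (1.0 - theta)
        total += wgt * (zh - zt) ** 2
    return total / n
```

Any differentiable cost of this shape can be minimized sample by sample with back-propagation, as the next step describes.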
Using back-propagation with the goal of minimizing this cost, iterate sample by sample to obtain optimized values of the multi-scale apparent features α, β, γ, the parallax features μ, π, ρ, and the time-sequence features Φ, Ψ, Ω of step 2, together with the bias parameters; learning is then complete.
S4: model-based patient respiration monitoring and risk identification
The risk probability corresponding to an input time-sequence image sequence is calculated with the patient respiration monitoring model learned in step 3.
Following the method of step 1, several images of the patient are collected at the front end after the operation and transmitted to the back-end server; the server arranges them into a time-sequence image sequence, preprocesses each image, and feeds the result to step 2.
Following the methods of steps 2 and 3, with the optimized feature and parameter values learned in step 3 and formulas (5)–(12), the risk assessment result z for the time-sequence image sequence is obtained.
When the value of z is greater than 0.6, the patient is considered to be at risk of respiratory abnormality, and the risk is reported.
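The reporting rule can be sketched directly; the alarm list here is a hypothetical stand-in for the device's alarm channel:

```python
RISK_THRESHOLD = 0.6  # the text's reporting threshold on z

def check_and_report(z, alarm):
    """Report to the alarm device when the risk estimate exceeds the
    threshold; return whether a report was made."""
    if z > RISK_THRESHOLD:
        alarm.append(("respiratory_abnormality_risk", z))
        return True
    return False
```

In the device, the same event would also be written to the electronic medical record, as the description notes.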
Experiments on 437 cases in our hospital give the risk-reporting accuracy of the method, the average response time for patients with respiratory abnormality after its adoption, and a comparison with traditional methods:
| Method | Accuracy (%) | Abnormality response time |
| --- | --- | --- |
| Conventional image processing | 54.9 | 15 seconds |
| General neural network | 88.2 | 89 seconds |
| Manual observation | 68.4 | 357 seconds |
| This application | 91.1 | 17 seconds |
The method therefore identifies breathing abnormalities better and greatly improves the response time for discovering risk, balancing accuracy against response time compared with the traditional methods, so that camera-based monitoring of postoperative respiratory recovery becomes a clinical alternative.
The above method is implemented by a respiratory-recovery monitoring device comprising two cameras and a server. Step S1 is completed by the two cameras, so each camera includes a processor capable of preprocessing the collected images; the cameras send the preprocessed images to the server through a communication network, and steps S2–S4 are all completed in the server. The device can further comprise an alarm device: when the server's processing yields patient risk information, it sends that information to the alarm device to prompt medical staff to respond. The relevant data may also be stored in the hospital's electronic information system as part of the electronic medical record. The alarm device can be a handheld terminal or a display located in the ward.
Claims (10)
1. A device for monitoring postoperative respiratory recovery of the chest, characterized in that it comprises two cameras and a server;
wherein the two cameras are fixed on a rod parallel to the direction of the patient's upper body and are mounted one after the other along the body direction; they are used to acquire images including the patient's face and chest;
the processor in each camera pre-processes the acquired image, wherein the pre-processing includes filtering the image using three different sized templates A, B, C to obtain a filtered image for the first camera、、And a filtered image of a second camera、、;
The following operations are carried out in the server:
apparent feature filtering is applied to the images I_A(1), I_B(1), I_C(1) to obtain multi-scale apparent features F_A, F_B, F_C; parallax feature filtering is applied to the image pairs (I_A(1), I_A(2)), (I_B(1), I_B(2)), (I_C(1), I_C(2)) to obtain multi-scale parallax features D_A, D_B, D_C;
six high-dimensional feature variables h_1, …, h_6 are associated with the multi-scale apparent features F_A, F_B, F_C and the multi-scale parallax features D_A, D_B, D_C through an excitation function σ(·);
a learning model is established and its coefficients are learned from actually collected image samples, the cost function in the learning process taking the form

J = (1/N) Σ_{n=1}^{N} (ẑ_n − z_n)² + λ‖w‖²

wherein ẑ is the risk assessment variable estimate and z is the risk assessment variable true value, z = 0 or z = 1, corresponding respectively to no respiratory risk being monitored and respiratory risk being present in the currently input time-series images; λ is a control parameter; w denotes the model coefficients; and N represents the total number of different time-series image sequences acquired.
2. The apparatus of claim 1, wherein: the server further performs the following operation: processing a plurality of images of the patient acquired after the operation, and calculating a risk assessment result z.
3. The apparatus of claim 1, wherein: the two cameras can completely capture the patient's face and upper chest, and the parallax of the patient's face between the two cameras is greater than 1/20 of the imaging range.
4. The apparatus of claim 3, wherein: the optical axes of the two cameras are kept parallel to each other, and the lens parameters and imaging-sensor parameters of the two cameras are identical.
5. The apparatus of claim 4, wherein: the two cameras capture images synchronously at a fixed frame rate F.
6. The apparatus of claim 5, wherein: take F =10 frames per second.
7. The apparatus of claim 1, wherein: when the patient is currently at risk of abnormal breathing, the server writes the corresponding data into the electronic medical record.
9. The apparatus of claim 2, wherein: the alarm device is a handheld terminal or a display located in the ward.
10. The apparatus of claim 1, wherein: when the value of z exceeds 0.6, the patient is considered to be currently at risk of respiratory abnormality, and the server reports the risk to the alarm device.
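The learning model of claim 1 can be sketched as a small trainable mapping from the six feature variables to the risk estimate ẑ. Everything specific here is an assumption for illustration: a sigmoid stands in for the unspecified excitation function, and the cost is taken as squared error plus an L2 term weighted by the control parameter, since the patent's exact formula is not reproduced in the text.

```python
import math

def excitation(x):
    """Sigmoid stand-in for the patent's (unspecified) excitation function."""
    return 1.0 / (1.0 + math.exp(-x))

def train(samples, lam=0.01, lr=0.5, epochs=2000):
    """Fit weights for the six high-dimensional feature variables by gradient
    descent on an assumed squared-error cost with L2 regularization (control
    parameter lam).  samples: list of (features[6], z) with z in {0, 1}."""
    w, b, n = [0.0] * 6, 0.0, len(samples)
    for _ in range(epochs):
        gw, gb = [0.0] * 6, 0.0
        for feats, z in samples:
            zhat = excitation(sum(wi * f for wi, f in zip(w, feats)) + b)
            # Gradient of (zhat - z)^2 / 2 through the sigmoid.
            err = (zhat - z) * zhat * (1 - zhat)
            for i in range(6):
                gw[i] += err * feats[i]
            gb += err
        w = [wi - lr * (g / n + lam * wi) for wi, g in zip(w, gw)]
        b -= lr * gb / n
    return w, b

def predict(w, b, feats):
    """Risk estimate zhat in (0, 1) for one six-feature sample."""
    return excitation(sum(wi * f for wi, f in zip(w, feats)) + b)
```

On a separable toy set the fitted model pushes risky samples above the 0.6 reporting threshold of claim 10 and normal samples well below it; the real model would be trained on the hospital's collected time-series image features.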
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111480446.5A CN114140749A (en) | 2021-12-06 | 2021-12-06 | Chest postoperative respiratory recovery monitoring devices |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114140749A true CN114140749A (en) | 2022-03-04 |
Family
ID=80384433
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111480446.5A Pending CN114140749A (en) | 2021-12-06 | 2021-12-06 | Chest postoperative respiratory recovery monitoring devices |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114140749A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117593308A (en) * | 2024-01-19 | 2024-02-23 | 科普云医疗软件(深圳)有限公司 | Respiration monitoring and early warning method for critically ill respiratory patient |
CN117593308B (en) * | 2024-01-19 | 2024-04-26 | 科普云医疗软件(深圳)有限公司 | Respiration monitoring and early warning method for critically ill respiratory patient |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Tasli et al. | Remote PPG based vital sign measurement using adaptive facial regions | |
Cheng et al. | Sparse dissimilarity-constrained coding for glaucoma screening | |
EP3676797B1 (en) | Speckle contrast analysis using machine learning for visualizing flow | |
Datcu et al. | Noncontact automatic heart rate analysis in visible spectrum by specific face regions | |
JP2016521411A (en) | Head and eye tracking | |
CN109009052A (en) | The embedded heart rate measurement system and its measurement method of view-based access control model | |
CN116138745B (en) | Sleep respiration monitoring method and device integrating millimeter wave radar and blood oxygen data | |
Bourbakis | Detecting abnormal patterns in WCE images | |
CN109241898B (en) | Method and system for positioning target of endoscopic video and storage medium | |
CN114140749A (en) | Chest postoperative respiratory recovery monitoring devices | |
Kyrollos et al. | Noncontact neonatal respiration rate estimation using machine vision | |
CN111062936B (en) | Quantitative index evaluation method for facial deformation diagnosis and treatment effect | |
CN115312195A (en) | Health assessment method for calculating individual psychological abnormality based on emotion data | |
Yang et al. | Graph-based depth video denoising and event detection for sleep monitoring | |
CN117274270A (en) | Digestive endoscope real-time auxiliary system and method based on artificial intelligence | |
CN115187596A (en) | Neural intelligent auxiliary recognition system for laparoscopic colorectal cancer surgery | |
CN117593308B (en) | Respiration monitoring and early warning method for critically ill respiratory patient | |
CN112716468A (en) | Non-contact heart rate measuring method and device based on three-dimensional convolution network | |
CN114052724B (en) | Orthopedics traction abnormity detection system based on artificial intelligence | |
CN116671902A (en) | Infant movement posture analysis system for assisting in diagnosis of cerebral palsy | |
CN110473180A (en) | Recognition methods, system and the storage medium of respiratory chest motion | |
CN112885435B (en) | Method, device and system for determining image target area | |
US20220087645A1 (en) | Guided lung coverage and automated detection using ultrasound devices | |
CN114663424A (en) | Endoscope video auxiliary diagnosis method, system, equipment and medium based on edge cloud cooperation | |
CN113920071A (en) | New coronavirus image identification method based on convolutional neural network algorithm |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||