CN116129525B - Respiratory protection training evaluation system and method - Google Patents


Info

Publication number
CN116129525B
Authority
CN
China
Prior art keywords: unit, test, data, dimensional modeling, dimensional
Prior art date
Legal status: Active
Application number
CN202310055814.4A
Other languages
Chinese (zh)
Other versions
CN116129525A (en)
Inventor
胡晓春 (Hu Xiaochun)
Current Assignee
Insititute Of Nbc Defence
Original Assignee
Insititute Of Nbc Defence
Priority date
Filing date
Publication date
Application filed by Insititute Of Nbc Defence
Priority to CN202310055814.4A
Publication of CN116129525A
Application granted
Publication of CN116129525B


Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/08 Detecting, measuring or recording devices for evaluating the respiratory organs
    • A61B5/0816 Measuring devices for examining respiratory frequency
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/751 Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 Movements or behaviour, e.g. gesture recognition


Abstract

The invention discloses a respiratory protection training evaluation system and method, and relates to the technical field of image processing. The system comprises an image acquisition device, a wireless base station and a comprehensive analysis management unit. The image acquisition device is used for acquiring two or more sequence images of a test person on a test field and transmitting the images to the comprehensive analysis management unit through the base station. The comprehensive analysis management unit comprises a display device, a first three-dimensional modeling generating device, a size acquisition device, a second three-dimensional modeling correction device and a human body parameter determining device, wherein the human body parameter determining device is used for determining the respiratory frequency of the test person according to the number of expansion and contraction changes of the second three-dimensional modeling per unit time. The system and method provided by the invention can determine the respiratory frequency of a test person in motion through image processing.

Description

Respiratory protection training evaluation system and method
Technical Field
The invention relates to the technical field of image processing, in particular to a respiratory protection training evaluation system and method.
Background
Respiratory rate is conventionally measured by visual inspection, that is, by observing the number of chest undulations as the subject breathes. The prior art contains no report of how to measure a test person's respiratory rate while the person is running.
Disclosure of Invention
The invention provides a respiratory protection training evaluation system and a respiratory protection training evaluation method, which can remotely determine the respiratory frequency of a test person in a motion state through image processing.
To achieve the above object, one aspect of the present invention provides a respiratory protection training evaluation system, which includes: an image acquisition device, a wireless base station and a comprehensive analysis management unit, wherein the image acquisition device is used for acquiring two or more sequence images of a test person on a test field and transmitting the images to the comprehensive analysis management unit through the base station;
the comprehensive analysis management unit comprises a display device, a first three-dimensional modeling generating device, a size obtaining device, a second three-dimensional modeling correcting device and a motion parameter determining device, wherein,
the display device comprises a display screen, wherein the display screen is used for displaying a first display area and a second display area which are divided by a limit frame, the first display area is positioned outside an area surrounded by the limit frame on the display screen and is used for displaying a first three-dimensional model obtained after the image acquired by the image acquisition device is subjected to first three-dimensional model generation processing; the second display area is positioned in the area surrounded by the limit frame on the display screen and is used for displaying a second three-dimensional model obtained after the second three-dimensional model correction processing is carried out on the first three-dimensional model in the limit frame;
the first three-dimensional modeling generating device is used for generating a first three-dimensional modeling according to the first sequence image and the second sequence image of the test person obtained by the image acquisition device;
the size acquisition device is used for acquiring size data of different semantic parts of the test staff;
the second three-dimensional modeling correction device is used for obtaining the point cloud of the test person according to the size data obtained by the size acquisition device, matching the first three-dimensional modeling in the limit frame with the point cloud of the test person, and generating a corrected second three-dimensional modeling corresponding to the actual shape of the test person; and
the motion parameter determining device comprises a respiratory frequency calculating unit for determining the respiratory frequency of the test person according to the number of expansion and contraction changes of the second three-dimensional modeling per unit time.
To achieve the above object, another aspect of the present invention provides a method for performing an evaluation using the respiratory protection training evaluation system.
The respiratory protection training evaluation system and method provided by the invention have the following advantages: the breathing rate of the test person can be determined remotely by image processing.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the description of the embodiments will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic block diagram of a respiratory protection training evaluation system according to embodiment 1 of the present invention;
FIG. 2 is a schematic block diagram showing a specific example of the first three-dimensional modeling apparatus in embodiment 1 of the present invention;
fig. 3 is a schematic block diagram of a specific example of a second three-dimensional modeling correction device in embodiment 1 of the present invention.
Description of the embodiments
The following description of the embodiments of the present invention is made clearly and completely with reference to the accompanying drawings, in which some, but not all, embodiments of the invention are shown. All other embodiments obtained by those skilled in the art based on the embodiments of the invention without inventive effort fall within the scope of the invention.
In describing the present invention, it should be noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. The use of the terms "comprises" and/or "comprising," when used in this specification, are intended to specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. The term "and/or" includes any and all combinations of one or more of the associated listed items. The terms "connected," "coupled," and "connected" are to be construed broadly, and may be, for example, directly connected, indirectly connected through an intervening medium, or may be in communication between two elements; the connection may be wireless or wired.
In addition, the technical features of the different embodiments of the present invention described below may be combined with each other as long as they do not collide with each other.
Embodiment 1
This embodiment provides a respiratory protection training evaluation system. As shown in fig. 1, the system includes an image acquisition device 2, a wireless base station 1 and a comprehensive analysis management unit 3. The image acquisition device is carried by an unmanned aerial vehicle and is used for acquiring two or more sequence images of a test person on a test field and sending the images to the comprehensive analysis management unit 3 through the base station 1.
Preferably, the image acquisition device 2 includes two or more cameras with adjustable poses, each camera being one or more of an RGB camera, a 3D camera, an RGBD camera, a 3D sensor and the like.
The comprehensive analysis management unit 3 comprises a display device, a first three-dimensional modeling generating device, a size acquisition device, a second three-dimensional modeling correction device and a respiratory frequency determining device. The display device comprises a display screen used for displaying a first display area and a second display area divided by a limit frame: the first display area lies outside the area enclosed by the limit frame on the display screen and displays the first three-dimensional modeling obtained by performing the first three-dimensional modeling generation processing on the images acquired by the image acquisition device; the second display area lies inside the area enclosed by the limit frame and displays the second three-dimensional modeling obtained after the second three-dimensional modeling correction processing is performed on the first three-dimensional modeling within the limit frame. Preferably, the size of the limit frame displayed on the display screen can be adjusted according to an instruction, received by the display device, indicating the parameter for adjusting the limit frame size, so that the size of the displayed limit frame changes and the ranges of the first and second three-dimensional modeling displayed outside and inside the limit frame are adjusted accordingly. Preferably, the instruction indicating the limit frame size parameter can be provided by a touch screen, a keyboard, a movable operating lever and/or the like connected with the display device;
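As an illustration of how the limit frame partitions the model between the two display areas, the following sketch routes model vertices inside an axis-aligned frame to the second display area and the rest to the first. The frame representation, vertex format and function names are illustrative assumptions, not part of the patent.

```python
# Hypothetical sketch: splitting the first three-dimensional model's vertices
# by an axis-aligned limit frame. Vertices inside the frame would be shown in
# the second display area; the rest in the first display area.

def split_by_frame(vertices, frame_min, frame_max):
    """Return (inside, outside) vertex lists for the given frame bounds."""
    inside, outside = [], []
    for v in vertices:
        if all(lo <= c <= hi for c, lo, hi in zip(v, frame_min, frame_max)):
            inside.append(v)
        else:
            outside.append(v)
    return inside, outside

verts = [(0.5, 0.5, 0.5), (2.0, 0.0, 0.0)]
inside, outside = split_by_frame(verts, (0, 0, 0), (1, 1, 1))
print(inside, outside)  # [(0.5, 0.5, 0.5)] [(2.0, 0.0, 0.0)]
```

Resizing the frame (as the patent's adjustable limit frame allows) simply changes `frame_min`/`frame_max` and re-partitions the vertices.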
the first three-dimensional modeling generating device is used for generating a first three-dimensional modeling according to the first sequence image and the second sequence image of the test person obtained by the image acquisition device, wherein the first sequence image and the second sequence image come from different cameras;
the size acquisition device is used for acquiring size data of characteristic parts of different semantics of the test person, wherein the different semantics mark different functional parts of the test person, and the size data of the characteristic parts are one or more of chest circumference, abdominal circumference, mouth size and the like;
the second three-dimensional modeling correction device is used for obtaining a point cloud of the test person according to the size data of the test person obtained by the size acquisition device, matching the first three-dimensional modeling in the limit frame with the point cloud of the test person, and generating a corrected second three-dimensional modeling corresponding to the actual shape of the test person;
the motion parameter determining device comprises a respiratory frequency calculating unit, wherein the respiratory frequency calculating unit is used for determining the respiratory frequency of the test person according to the number of times of the expansion and contraction change of the second three-dimensional modeling in unit time.
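The respiratory frequency calculation above can be sketched in code. This is a minimal illustration, assuming the corrected second three-dimensional model yields a per-frame chest "volume" signal whose local maxima mark expansions; the signal, frame rate and peak rule are assumptions, not the patent's implementation.

```python
import math

# Hypothetical sketch: respiratory frequency from the number of expansion
# peaks of a per-frame chest-volume signal per unit time.

def breaths_per_minute(volumes, fps):
    """Count expansion peaks (local maxima) and scale to one minute."""
    peaks = 0
    for i in range(1, len(volumes) - 1):
        if volumes[i - 1] < volumes[i] >= volumes[i + 1]:
            peaks += 1
    duration_min = len(volumes) / fps / 60.0
    return peaks / duration_min

# Synthetic trace: 0.3 Hz breathing sampled at 10 frames/s for 60 s.
volumes = [math.sin(2 * math.pi * 0.3 * (t / 10)) for t in range(600)]
print(breaths_per_minute(volumes, fps=10))  # 18.0 breaths per minute
```

A real signal would first be smoothed, since noise in the reconstructed model would otherwise create spurious local maxima.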
In embodiment 1, the first three-dimensional modeling generating device, the size acquisition device, the second three-dimensional modeling correction device and the respiratory frequency determining device may each be implemented as program code of a computer program executable by a processor and stored in a storage medium; the processor calls the program code in the storage medium and displays the results of execution on the display device.
According to the respiratory protection training evaluation system provided by the invention, the respiratory frequency of a test person is determined through image processing according to the number of expansion changes of the different semantic characteristic parts of the test person, so that the respiratory frequency can be determined while the test person is in motion. The generated first three-dimensional modeling is divided into two parts by the limit frame and displayed respectively in the first and second display areas; the first three-dimensional modeling is estimated from the images obtained by the image acquisition device, and the part within the limit frame is corrected based on actual size data so that it better conforms to the actual size information of the test person. On the one hand, the correction processing eliminates the error between the estimated and actual three-dimensional modeling values and improves the reliability of the stored three-dimensional modeling big data; on the other hand, because the correction processing is performed only on the first three-dimensional modeling within the limit frame, the data processing amount is reduced, so that the processing complexity and processing time are reduced while the accuracy of the generated three-dimensional modeling is improved. Through the adjustable size of the limit frame, the range of three-dimensional modeling to be corrected can be freely selected according to the user's needs and displayed in a refined, accurate manner, improving human-machine interaction.
Fig. 2 is a schematic block diagram of a specific example of a first three-dimensional modeling generating device in embodiment 1 of the present invention, which, as shown in fig. 2, includes:
the original image acquisition unit is used for acquiring a first sequence image and a second sequence image of the test person, wherein the first and second sequence images are two images of the same scene of the test person acquired at the same moment by different cameras from different visual angles;
the size segmentation unit is used for segmenting each image of the first sequence into a plurality of first unit graphs such that the average pixel difference value of each first unit graph is smaller than or equal to a first threshold value; the average pixel difference value D of a first unit graph is calculated as:

D = (1/n) * Σ_{i=1}^{n} |p_i - p̄|

wherein D is the average pixel difference value of the first unit graph, n is the total number of pixel points contained in the first unit graph, p_i is the pixel value of the i-th pixel point in the first unit graph, and p̄ is the average pixel value of the pixel points in the first unit graph;
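A short sketch of this measure, assuming from the variable definitions that it is the mean absolute deviation of pixel values from their average (the exact formula in the original record is garbled, so this reconstruction is an assumption):

```python
# Sketch of the average-pixel-difference measure used by the size
# segmentation unit: mean absolute deviation of pixel values.

def average_pixel_difference(unit_graph):
    """Mean absolute deviation of the pixel values in one unit graph."""
    n = len(unit_graph)
    mean = sum(unit_graph) / n
    return sum(abs(p - mean) for p in unit_graph) / n

# A nearly uniform unit graph scores low (below a first threshold)...
print(average_pixel_difference([100, 101, 99, 100]))  # 0.5
# ...while a high-contrast region scores high and must be split further.
print(average_pixel_difference([10, 200, 15, 190]))   # 91.25
```

Segmenting until every unit graph falls under the threshold yields near-uniform patches, which makes the subsequent patch matching more reliable.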
the parallax image obtaining unit is used for determining, for each first unit graph of the first sequence image, the corresponding second unit graph in the second sequence image, and obtaining the translation amounts between all mutually corresponding first and second unit graphs to obtain a parallax image;
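The translation search between corresponding unit graphs can be illustrated with one-dimensional block matching; real stereo matching operates on 2-D patches along epipolar lines, so the data and function names here are illustrative assumptions.

```python
# Illustrative 1-D block matching: find the horizontal translation of a
# first unit graph within a row of the second image by minimising the sum
# of absolute differences (SAD). The resulting shift is the disparity.

def best_translation(unit_graph, row, max_shift):
    """Return the shift with the lowest SAD cost."""
    w = len(unit_graph)
    best, best_cost = 0, float("inf")
    for shift in range(0, min(max_shift, len(row) - w) + 1):
        cost = sum(abs(a - b) for a, b in zip(unit_graph, row[shift:shift + w]))
        if cost < best_cost:
            best, best_cost = shift, cost
    return best

row2 = [0, 0, 0, 10, 50, 10, 0, 0]   # corresponding row of the second image
unit = [10, 50, 10]                  # unit graph taken from the first image at x = 0
print(best_translation(unit, row2, max_shift=5))  # disparity of 3 pixels
```

Collecting such shifts for every unit graph produces the parallax (disparity) image that feeds the semantic segmentation step.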
the semantic segmentation unit is used for inputting the parallax image into a pre-trained neural network model for semantic segmentation to obtain a semantic segmentation image, which distinguishes the characteristic parts of different semantics of the test person; for example, the different semantic characteristic parts include the chest, the abdomen and the mouth; the neural network is trained on parallax images in which the characteristic semantics of reference persons have been marked in advance;
the first three-dimensional modeling reconstruction unit is used for reconstructing the semantic segmentation map to generate a first three-dimensional modeling, and the first three-dimensional modeling is used for describing the characteristics of different semantic characteristic parts of the test staff.
By dividing the first image into a plurality of first unit graphs and then obtaining the parallax image on the basis of the unit graphs, the accuracy of the obtained parallax image is improved, and the accuracy of the generated three-dimensional modeling is improved in turn. Because the first three-dimensional modeling is reconstructed from the semantically segmented parallax image, the amount of data processed is reduced, the processing complexity is lowered, and the processing time is greatly shortened.
Fig. 3 is a schematic block diagram of a specific example of a second three-dimensional modeling correction device in embodiment 1 of the present invention, which, as shown in fig. 3, includes:
the to-be-corrected test person feature obtaining unit is used for obtaining the features of one or more different semantic parts of the test person within the limit frame according to the positional relation between the limit frame and the first three-dimensional modeling describing the characteristic parts of different semantics of the test person;
the position feature point cloud obtaining unit is used for obtaining the point cloud of the one or more different semantic part features according to the size data, obtained by the size acquisition device, of the one or more different semantic characteristic parts corresponding to the features within the limit frame;
the second three-dimensional modeling reconstruction unit 33 is used for matching the one or more different semantic part features of the test person within the limit frame with the corresponding point clouds, and generating a corrected second three-dimensional modeling corresponding to the actual shape of the test person;
a scaling unit for scaling the second three-dimensional model according to the size of the bounding box, so that the matching degree of the second incision edge at the bounding box of the second three-dimensional model and the first incision edge at the bounding box of the first three-dimensional model is larger than or equal to a second threshold value; preferably, the second threshold value can be set according to actual requirements;
and the stitching unit is used for stitching the first incision edge and the second incision edge to obtain a total three-dimensional modeling. Preferably, a plurality of vertexes are provided on the first incision edge and the second incision edge; the corresponding vertexes are connected in sequence, the resulting triangles of the stitched portion are added to the list of the triangular surface skin, and the stitched total three-dimensional modeling is obtained after grid smoothing.
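The vertex-connection step can be sketched as follows: corresponding vertices on the two incision edges are paired in sequence, and each quad between neighbouring pairs is split into two triangles appended to the triangle list. Vertex indexing and the winding order are illustrative assumptions.

```python
# Sketch of the stitching step: connect corresponding vertices of the first
# and second incision edges in sequence, emitting two triangles per quad.

def stitch_edges(first_edge, second_edge):
    """first_edge/second_edge: equal-length, ordered lists of vertex indices."""
    triangles = []
    for i in range(len(first_edge) - 1):
        a, b = first_edge[i], first_edge[i + 1]
        c, d = second_edge[i], second_edge[i + 1]
        triangles.append((a, c, b))   # split the quad a-c-d-b...
        triangles.append((b, c, d))   # ...into two triangles
    return triangles

print(stitch_edges([0, 1, 2], [10, 11, 12]))
# [(0, 10, 1), (1, 10, 11), (1, 11, 2), (2, 11, 12)]
```

Grid smoothing, as the patent notes, would then be applied over the stitched band so the seam is not visible in the total model.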
According to the respiratory protection training evaluation system provided by the invention, the authenticity and the credibility of three-dimensional modeling data generated by simulation are improved by correcting the characteristic three-dimensional modeling of the test personnel in the limit frame according to the actual size. By scaling the second three-dimensional model in the bounding box, the matching degree of the edges of the first incision and the second incision is improved, so that the stitching precision is improved, and the smoothness and the integrity of the obtained total three-dimensional model are improved.
In embodiment 1, the comprehensive analysis management unit further includes a test person state determination unit comprising a CNN neural network and a self-competitive neural network. The CNN neural network extracts a face sequence image from the first sequence image or the second sequence image of the test person; the self-competitive neural network learns face images of different mental states into its two-dimensional neurons in advance. When a test person takes part in a test, the self-competitive neural network clusters the current face image of the test person against the two-dimensional neurons to determine the current mental state of the test person.
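The clustering step of the self-competitive (self-organising) network reduces, at inference time, to finding the best-matching neuron for the current face features. The sketch below shows only that nearest-neuron assignment; the feature vectors, state labels and distance metric are illustrative assumptions, and the training of the neuron weights is omitted.

```python
# Minimal sketch of the self-competitive classification step: face-feature
# vectors for known mental states serve as neuron weights, and the current
# face is assigned to the state of the nearest (best-matching) neuron.

def nearest_state(neurons, features):
    """neurons: {state: weight vector}; returns the best-matching state."""
    def dist(weights):
        return sum((w - f) ** 2 for w, f in zip(weights, features))
    return min(neurons, key=lambda state: dist(neurons[state]))

neurons = {"calm": [0.1, 0.2, 0.1], "stressed": [0.9, 0.8, 0.7]}
print(nearest_state(neurons, [0.85, 0.75, 0.6]))  # stressed
```

In a full self-organising map the neurons form a 2-D grid and neighbouring neurons are updated together during training, which is what lets similar mental states cluster in adjacent grid cells.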
In embodiment 1, a test person wears a data acquisition device and a simulation tank. The data acquisition device is used for acquiring the body surface temperature data and heart rate data of the test person in real time during the test and transmitting them to the simulation tank. The simulation tank at least comprises a respiratory flow sensor, a respiratory pressure sensor, a positioning sensor, a near-field sensor, a transceiver unit and a processing device, wherein the respiratory flow sensor is used for collecting the respiratory flow of the test person; the respiratory pressure sensor is used for collecting the respiratory pressure of the test person; the positioning sensor is used for collecting the position of the test person; and the near-field sensor is used for receiving the body surface temperature data and heart rate data acquired by the worn data acquisition device during the test. The processing device processes the respiratory flow, respiratory pressure and position of the test person, packages them together with the body surface temperature data and heart rate data into a data frame, and sends the data frame to the transceiver unit; the transceiver unit processes the data frame and sends it to the wireless base station over a wireless channel, and the wireless base station forwards the modulated signal to the comprehensive analysis management unit.
In embodiment 1, the wireless base station includes a power amplifier, a cavity filter, a radio frequency board card, a main control board card, and the like, so as to ensure the requirements of a data wireless transmission distance and a data transmission bandwidth.
The power amplifier mainly realizes linear amplification of signals and comprises a driver-stage amplifier, a balanced power amplifier, a coupler and a radio frequency switch. The driver stage adopts a push-pull structure, amplifies the transmitting signal and sends it to the rear-stage power amplifier. The rear-stage power amplifier adopts a balanced structure, avoiding the linearity deterioration caused by load pulling between the two power amplifier stages. The signal is amplified by the power amplifier, passes through the coupler, then through the switch and the cavity filter, and is radiated into space by the antenna. When receiving, the radio frequency switch is switched to the receiving channel, and the signal is transmitted to the radio frequency board card through the switch and a first-stage LNA. The reserved coupling path is used for the predistortion feedback path of the power amplifier.
The main functions of the cavity filter are: in the transmitting direction, the radio frequency signal sent by the radio frequency board card is amplified without distortion to the level specified at the system air interface, filtered of out-of-band spurious by the cavity filter, and sent to the antenna; in the receiving direction, the radio frequency signal is received from the antenna on the frequency point specified by the system, filtered of out-of-band spurious by the cavity filter, amplified by a low-noise amplifier and then transmitted to the radio frequency board card.
The radio frequency board card mainly completes the up/down conversion of TD-LTE signals. The transmitting channel receives baseband data from the main control board card through SerDes, converts it to analog intermediate frequency through the DAC, and sends it to the radio frequency channel, which up-converts it to the air-interface frequency point and sends it to the power amplifier. The receiving channel down-converts the received radio frequency signal to analog intermediate frequency; the analog intermediate frequency is quantized by AD sampling, then decimation-filtered by the DDC unit in the FPGA, and the decimated baseband data is transmitted to the main control unit through SerDes.
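The decimation filtering performed by the DDC can be illustrated schematically: low-pass filter, then keep one sample per block. A real DDC uses a numerically controlled oscillator and CIC/FIR filter chains in the FPGA; the moving-average filter and decimation factor below are simplifying assumptions for illustration only.

```python
# Illustrative sketch of decimation filtering: average each block of
# `factor` samples (a crude low-pass), then keep one value per block.

def decimate(samples, factor):
    """Block-average low-pass filter followed by keeping every factor-th sample."""
    out = []
    for i in range(0, len(samples) - factor + 1, factor):
        block = samples[i:i + factor]
        out.append(sum(block) / factor)
    return out

print(decimate([1, 1, 1, 1, 2, 2, 2, 2], factor=4))  # [1.0, 2.0]
```

Filtering before discarding samples is what prevents aliasing when the sample rate is reduced to the baseband rate.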
The main control board card mainly comprises a baseband processing module and a main control processing module, wherein the baseband processing module mainly completes data service, analysis of control signaling such as network access and networking, realizes interaction of air interface data and signaling, and is responsible for wireless resource management; the main control processing module is mainly used for managing and maintaining local equipment and also is responsible for completing functions such as gateway and routing processing.
In embodiment 1, the inlet valve of the simulation tank is connected to a gas mask and records the flow, pressure and positioning data of the gas stream generated during the test. After the simulation tank is started, its indicator lamp stays on to indicate that the tank is started but no positioning information has been received, and flashes to indicate that the tank is started and positioning information has been received. A wireless transmission antenna is arranged inside the simulation tank; after start-up, the acquired data is automatically transmitted to the wireless base station wirelessly in real time.
In embodiment 1, the comprehensive analysis management unit first draws the track of the positioning data on a two-dimensional map of the test site, displays the movement speed of the test person, and counts the movement mileage, speed extrema and movement time; it then performs statistical analysis on the body surface temperature data and heart rate data of the test person and displays the data peaks and interval ranges; finally, it plots the respiratory flow data and respiratory pressure data as smooth, periodically varying curves over time, displays the periodic characteristic analysis, and calculates the respiratory vital capacity data of the test person.
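The vital-capacity-style calculation from the flow curve amounts to integrating flow over a breath. The sketch below uses the trapezoidal rule; the sampling interval, flow values and the assumption that volume is the integral of flow over one inhalation are illustrative, not taken from the patent.

```python
# Sketch: volume estimate obtained by integrating the respiratory flow
# signal (litres/second) over one inhalation with the trapezoidal rule.

def inhaled_volume(flows, dt):
    """Trapezoidal integral of flow samples taken every dt seconds."""
    return sum((flows[i] + flows[i + 1]) / 2 * dt for i in range(len(flows) - 1))

# Flow ramping up and back down over 2 s, sampled every 0.5 s.
flows = [0.0, 1.0, 2.0, 1.0, 0.0]
print(inhaled_volume(flows, dt=0.5))  # 2.0 litres
```

Applied over the full periodic flow curve, the same integral per breathing cycle yields the per-breath volume statistics the analysis step reports.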
Examples
Embodiment 2 provides a method of performing an assessment using the respiratory protection training evaluation system of embodiment 1.
It is apparent that the above embodiments are given by way of illustration only and do not limit the scope of the invention. Other variations or modifications will be apparent to those of ordinary skill in the art in light of the above teachings; it is neither necessary nor possible to enumerate every embodiment here. Any obvious variation or modification so derived remains within the scope of the invention.

Claims (9)

1. A respiratory protection training evaluation system, comprising an image acquisition device, a wireless base station and a comprehensive analysis management unit, wherein the image acquisition device is configured to acquire two or more sequence images of a test person on a test site and to transmit the images to the comprehensive analysis management unit through the base station;
the comprehensive analysis management unit comprises a display device, a first three-dimensional model generation device, a size acquisition device, a second three-dimensional model correction device and a human body parameter determining device, wherein
the display device comprises a display screen divided by a bounding box into a first display area and a second display area, the first display area lying outside the region enclosed by the bounding box and displaying a first three-dimensional model obtained by applying first three-dimensional model generation processing to the images acquired by the image acquisition device, and the second display area lying inside the region enclosed by the bounding box and displaying a second three-dimensional model obtained by applying second three-dimensional model correction processing to the part of the first three-dimensional model within the bounding box;
the first three-dimensional modeling generating device is used for generating a first three-dimensional modeling according to the first sequence image and the second sequence image of the test person obtained by the image acquisition device;
the size acquisition device is used for acquiring size data of different semantic feature parts of the test staff;
a second three-dimensional modeling correction device for obtaining the point cloud of the test person according to the size data obtained by the size obtaining device, matching the first three-dimensional modeling in the limit frame with the point cloud of the test person, generating a corrected sequence second three-dimensional modeling corresponding to the actual shape of the test person, and
the human body parameter determining device comprises a respiratory frequency calculating unit which is used for determining the respiratory frequency of a test person according to the number of times of the expansion and contraction change of the second three-dimensional modeling in unit time,
the first three-dimensional model generation device comprising:
an original image acquisition unit for acquiring the first sequence image and the second sequence image of the test person;
a size division unit for dividing each image of the first sequence image into a plurality of first unit maps such that the average pixel difference of each first unit map is less than or equal to a first threshold;
a disparity map obtaining unit for determining, for each first unit map of an image of the first sequence image, the corresponding second unit map in an image of the second sequence image, and obtaining the translation between every pair of corresponding first and second unit maps to produce a disparity map;
a semantic segmentation unit for inputting the disparity map into a pre-trained neural network model for semantic segmentation, yielding a semantic segmentation map that distinguishes the features of different body parts of the test person; and
a first three-dimensional model reconstruction unit for reconstructing the first three-dimensional model from the semantic segmentation map, the first three-dimensional model describing the different semantic feature parts of the test person.
2. The respiratory protection training evaluation system of claim 1, wherein the different semantic feature parts are the chest, the abdomen and the mouth.
3. The system of claim 2, wherein the size of the bounding box displayed on the display screen is adjustable according to an instruction, received by the display device, to adjust the bounding box size parameter.
4. The system of claim 3, wherein the first sequence image and the second sequence image are two sequences of images of the same scene containing the test person, obtained by different cameras at different viewing angles.
5. The system of claim 4, wherein the second three-dimensional model correction device comprises:
a to-be-corrected part feature obtaining unit for obtaining one or more different semantic part features within the bounding box according to the positional relationship between the bounding box and the first three-dimensional model describing the different semantic part features;
a three-dimensional point cloud obtaining unit for obtaining, from the size data of the one or more different semantic feature parts provided by the size acquisition device, three-dimensional point clouds corresponding to the one or more different semantic feature parts within the bounding box;
a second three-dimensional model reconstruction unit for matching the one or more different semantic part features within the bounding box to the corresponding three-dimensional point clouds to generate a corrected second three-dimensional model corresponding to the actual shape of those parts;
a scaling unit for scaling the second three-dimensional model according to the size of the bounding box such that the degree of matching between the second cut edge of the second three-dimensional model at the bounding box and the first cut edge of the first three-dimensional model at the bounding box is greater than or equal to a second threshold; and
a stitching unit for stitching the first cut edge and the second cut edge together to obtain an overall three-dimensional model.
6. The system of claim 5, wherein the comprehensive analysis management unit further comprises a test person state determination unit comprising a CNN neural network and a self-competitive neural network, the CNN neural network extracting a face sequence image from the first sequence image or the second sequence image of the test person; the self-competitive neural network having learned, in advance, face images of different mental states into its two-dimensional neurons; and, during a test, the self-competitive neural network determining the mental state of the test person from the test person's face image and the clustering of the two-dimensional neurons.
7. The system of any one of claims 1-6, wherein the test person wears a data acquisition device and a simulated canister, the data acquisition device being configured to acquire body surface temperature data and heart rate data in real time during the test and transmit them to the simulated canister; the simulated canister comprising at least a respiratory flow sensor for collecting the respiratory flow of the test person, a respiratory pressure sensor for collecting the respiratory pressure of the test person, a positioning sensor for collecting the position of the test person, a near-field sensor for receiving the body surface temperature data and heart rate data acquired by the worn data acquisition device during the test, a transceiver unit, and a processing device; the processing device processing the respiratory flow, respiratory pressure and position of the test person, packing them together with the body surface temperature data and heart rate data into a data frame, and sending the data frame to the transceiver unit, which processes it and forwards it to the comprehensive analysis management unit through the wireless base station.
8. The system of claim 7, wherein the comprehensive analysis management unit first plots the trajectory of the positioning data on a two-dimensional map of the test site, displays the movement speed of the test person, and computes the movement mileage, speed extrema and movement time; then statistically analyzes the body surface temperature data and heart rate data of the test person, displaying peak values and interval ranges; plots the respiratory flow data and respiratory pressure data as smooth, periodically varying curves over time, displaying a periodic-feature analysis; and calculates the vital capacity of the test person.
9. A method of assessment using the respiratory protection training evaluation system of any one of claims 1-8.
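The respiratory-frequency determination recited in claim 1 counts expansion and contraction cycles of the second three-dimensional model per unit time. A minimal sketch of such a counter over a per-frame volume series is given below; the smoothing window, frame rate and synthetic volume trace are illustrative assumptions, not details from the patent:

```python
import math

def respiratory_frequency(volumes, fs):
    """Count expansion/contraction cycles in a time series of model
    volumes (one value per frame, fs frames per second) and return
    breaths per minute.  A cycle is counted at each transition from
    contraction to expansion (a local minimum of the smoothed trace)."""
    k = 5  # half-width of a moving-average window to suppress noise
    smooth = [sum(volumes[max(0, i - k):i + k + 1]) /
              len(volumes[max(0, i - k):i + k + 1])
              for i in range(len(volumes))]
    minima = sum(
        1 for i in range(1, len(smooth) - 1)
        if smooth[i - 1] > smooth[i] <= smooth[i + 1]
    )
    duration_min = len(volumes) / fs / 60.0
    return minima / duration_min if duration_min > 0 else 0.0

fs = 10  # frames per second of the reconstructed model sequence
# Synthetic chest-volume trace: 5 L baseline, 0.3 Hz breathing (18/min), 60 s.
volumes = [5.0 + 0.4 * math.sin(2 * math.pi * 0.3 * i / fs)
           for i in range(fs * 60)]
print(round(respiratory_frequency(volumes, fs), 1))  # → 18.0
```

Counting local minima rather than zero crossings avoids choosing a baseline volume, which varies from person to person.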
CN202310055814.4A 2023-01-24 2023-01-24 Respiratory protection training evaluation system and method Active CN116129525B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310055814.4A CN116129525B (en) 2023-01-24 2023-01-24 Respiratory protection training evaluation system and method

Publications (2)

Publication Number Publication Date
CN116129525A CN116129525A (en) 2023-05-16
CN116129525B true CN116129525B (en) 2023-11-14

Family

ID=86300576

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310055814.4A Active CN116129525B (en) 2023-01-24 2023-01-24 Respiratory protection training evaluation system and method

Country Status (1)

Country Link
CN (1) CN116129525B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2017217298A (en) * 2016-06-09 2017-12-14 青木 広宙 Non-contact respiration measurement device and non-contact respiration measurement method
CN107886477A (en) * 2017-09-20 2018-04-06 武汉环宇智行科技有限公司 Unmanned neutral body vision merges antidote with low line beam laser radar
CN110111346A (en) * 2019-05-14 2019-08-09 西安电子科技大学 Remote sensing images semantic segmentation method based on parallax information
CN114596279A (en) * 2022-03-08 2022-06-07 江苏省人民医院(南京医科大学第一附属医院) Non-contact respiration detection method based on computer vision
CN114847931A (en) * 2022-03-25 2022-08-05 深圳市华屹医疗科技有限公司 Human motion tracking method, device and computer-readable storage medium
CN114973411A (en) * 2022-05-31 2022-08-30 华中师范大学 Self-adaptive evaluation method, system, equipment and storage medium for attitude motion
CN114947771A (en) * 2022-05-16 2022-08-30 三星电子(中国)研发中心 Human body characteristic data acquisition method and device

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8792969B2 (en) * 2012-11-19 2014-07-29 Xerox Corporation Respiratory function estimation from a 2D monocular video
US11850026B2 (en) * 2020-06-24 2023-12-26 The Governing Council Of The University Of Toronto Remote portable vital signs monitoring

Non-Patent Citations (7)

* Cited by examiner, † Cited by third party
Title
Heartbeat Detection in Seismocardiograms with Semantic Segmentation; Konrad M. Duraj et al.; IEEE; pp. 662-665 *
Object segmentation using stereo images; An, P et al.; IEEE; pp. 534-538 *
Research on Automatic Facial Expression Recognition Methods; Xin Wei; China Master's Theses Full-text Database (Information Science and Technology Series), No. 03; pp. I138-533 *
Escalator Passenger Dangerous-Behavior Recognition and Early-Warning System Based on Binocular Depth Images; Ouyang Huiqing; Shu Wenhua; Li Xing; Li Yang; China Elevator, No. 14; pp. 36-39, 42 *
Research on Respiration/Heartbeat Detection and Gesture Recognition Based on Millimeter-Wave Radar; Yan Huateng; China Master's Theses Full-text Database (Information Science and Technology Series), No. 04; pp. I136-833 *
Research on Non-Contact Respiration Detection Algorithms Based on Depth Images; Chen Yongkang; Hou Zhenjie; Chen Chen; Liang Jiuzhen; Su Haiming; Computer Measurement & Control, No. 07; pp. 218-222 *
Gao Xiaorong et al.; Sensor Technology, 3rd ed.; Chengdu: Southwest Jiaotong University Press, 2021; pp. 295-296 *


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant