WO2023102880A1 - Method and system for processing tracheal intubation images and method for evaluating the effectiveness of tracheal intubation - Google Patents

Method and system for processing tracheal intubation images and method for evaluating the effectiveness of tracheal intubation

Info

Publication number
WO2023102880A1
WO2023102880A1 (PCT/CN2021/137004)
Authority
WO
WIPO (PCT)
Prior art keywords
intubation
time
stage
structural
image
Prior art date
Application number
PCT/CN2021/137004
Other languages
English (en)
French (fr)
Inventor
曾稼志
吴俞桦
黄琨义
Original Assignee
曾稼志
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 曾稼志 filed Critical 曾稼志
Priority to PCT/CN2021/137004 priority Critical patent/WO2023102880A1/zh
Publication of WO2023102880A1 publication Critical patent/WO2023102880A1/zh

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 34/00 Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 30/00 ICT specially adapted for the handling or processing of medical images
    • G16H 30/20 ICT specially adapted for the handling or processing of medical images for handling medical images, e.g. DICOM, HL7 or PACS

Definitions

  • The invention relates to a processing method and system, and in particular to a method and system for processing tracheal intubation images and a method for evaluating the effectiveness of tracheal intubation.
  • Tracheal intubation is a common, high-risk, highly technical medical procedure. It must be completed in a very short time; if it cannot be completed within a few minutes, major organ damage may result. Limited by current intubation tools and techniques, intubation difficulty is today evaluated only by the operator's subjective judgment and the clinical outcome (time spent, number of attempts, etc.), or by ad hoc image recognition. This produces considerable variation in assessments of difficult intubation, makes results hard to integrate and compare across studies, and leaves no objective way to measure the clinical teaching and training outcomes and the qualifications that every physician performing this skill needs.
  • Accordingly, the present invention provides a method and system for processing tracheal intubation images that can segment the whole procedure in real time, serialize the stage intubation times, and integrate and analyze them, and on that basis establishes a method for evaluating the effectiveness of tracheal intubation. This addresses the current reliance on subjective judgment and manual interpretation, which makes real-time operational assistance and training feedback difficult.
  • A method for processing tracheal intubation images includes: establishing a database including a plurality of structural objects, wherein the database stores at least one first intubation process image and the structural objects are defined from that image; performing object recognition on a second intubation process image according to the defined structural objects, so as to obtain, in the second intubation process image, a plurality of target objects identical to the structural objects;
  • and confirming the time points at which the identified target objects appear in the second intubation process image, so as to obtain the stage intubation times and the intubation time sequence of the target objects; wherein the time points at which the target objects appear in the second intubation process image define n time points, the time difference between any two target objects defines a stage intubation time, and multiple stage intubation times are cut and stitched in chronological order to establish the intubation time sequence of the target objects in the second intubation process image.
  • The first intubation process images with an intubation difficulty scale of zero are analyzed and modeled to define the structural objects.
  • the structural targets are selected from the group consisting of lips, epiglottis, pharynx, glottis, endotracheal tube, and endotracheal tube black line.
  • The processing method further includes: confirming the time points at which the structural objects appear in the first intubation process image, so as to obtain the stage intubation times and the intubation time sequence of the structural objects; wherein the time points at which the structural objects appear in the first intubation process image define n time points, the time difference between any two structural objects defines a stage intubation time, and multiple stage intubation times are cut and stitched in chronological order to establish the intubation time sequence of the structural objects of the first intubation process image; and drawing, according to the stage intubation times and intubation time sequence of the structural objects, the intubation time series diagram and the intubation ability time series analysis diagram of the first intubation process image.
  • The processing method further includes: drawing, according to the stage intubation times and the intubation time sequence of the target objects, the intubation time series diagram and the intubation ability time series analysis diagram of the second intubation process image.
  • a system for processing images of endotracheal intubation includes a database and an electronic device.
  • the database stores at least one first intubation process image, and a plurality of structural objects are defined by the at least one first intubation process image.
  • the electronic device is electrically connected to the database.
  • the electronic device includes one or more processing units and a storage unit.
  • the one or more processing units are electrically connected to the storage unit.
  • the storage unit stores one or more program instructions.
  • When the one or more program instructions are executed by the one or more processing units, the one or more processing units perform: object recognition on the second intubation process image according to the defined structural objects, so as to obtain, in the second intubation process image, a plurality of target objects identical to the structural objects; and confirming the time points at which the identified target objects appear in the second intubation process image, so as to obtain the stage intubation times and the intubation time sequence of the target objects; wherein the time points at which the target objects appear in the second intubation process image define n time points, the time difference between any two target objects defines a stage intubation time, and multiple stage intubation times are cut and stitched in chronological order to establish the intubation time sequence of the target objects in the second intubation process image.
  • the database is located in a storage unit or a cloud device.
  • The one or more processing units further perform: confirming the time points at which the structural objects appear in the first intubation process image, so as to obtain the stage intubation times and the intubation time sequence of the structural objects; wherein the time points at which the structural objects appear in the first intubation process image define n time points, the time difference between any two structural objects defines a stage intubation time, and multiple stage intubation times are cut and stitched in chronological order to establish the intubation time sequence of the structural objects of the first intubation process image; and drawing, accordingly, the intubation time series diagram and the intubation ability time series analysis diagram of the first intubation process image.
  • The one or more processing units further perform: drawing, according to the stage intubation times and the intubation time sequence of the target objects, the intubation time series diagram and the intubation ability time series analysis diagram of the second intubation process image.
  • A method for evaluating the effectiveness of tracheal intubation includes the above processing method, and evaluating the effectiveness of tracheal intubation in the second intubation process image according to the stage intubation times and intubation time sequences of the target objects and the structural objects.
  • The evaluation of tracheal intubation effectiveness includes: comparing the stage intubation times, intubation time series diagrams, and intubation ability time series analysis diagrams of the first and second intubation process images, so as to evaluate the intubation performance at each stage of the second intubation process image.
  • In summary, the method for processing tracheal intubation images of the present invention includes the steps of: establishing a database including a plurality of structural objects, wherein the database stores at least one first intubation process image and the structural objects are defined from that image; performing object recognition on a second intubation process image according to the defined structural objects, so as to obtain, in the second intubation process image, a plurality of target objects identical to the structural objects; and confirming the time points at which the identified target objects appear in the second intubation process image, so as to obtain the stage intubation times and the intubation time sequence of the target objects; wherein the time points define n time points, the time difference between any two target objects defines a stage intubation time, and multiple stage intubation times are cut and stitched in chronological order to establish the intubation time sequence of the target objects in the second intubation process image.
  • Thereby, the proposed method and system can provide the stage intubation times and the intubation time sequence over the whole course of a tracheal intubation image, enabling real-time segmentation of intubation stages and serialized processing and analysis of the integrated times. On this basis an evaluation of intubation effectiveness can be established, addressing the current difficulty of providing real-time operational assistance and training feedback under subjective judgment and manual interpretation.
  • FIG. 1A is a schematic functional block diagram of a system for processing endotracheal intubation images according to an embodiment of the present invention.
  • FIG. 1B is a schematic flow diagram of a processing method for an endotracheal intubation image according to an embodiment of the present invention.
  • FIG. 2 is a schematic diagram of another process step of the method for processing tracheal intubation images of the present invention.
  • FIG. 3A and FIG. 3B are the intubation time series diagram and the intubation ability time series analysis diagram of the first intubation process image according to the embodiment of the present invention, respectively.
  • FIG. 4A and FIG. 4B are the intubation time series diagram and the intubation ability time series analysis diagram of the second intubation process image according to the embodiment of the present invention, respectively.
  • FIG. 5 is a schematic flow chart of a method for evaluating the effectiveness of an endotracheal intubation according to an embodiment of the present invention.
  • The tracheal intubation cases analyzed herein were all collected in accordance with the principles approved by the human research ethics committee (IRB NCKUH B-ER-107-088).
  • the processing system presented herein may also be referred to as an analysis system, and the processing method may also be referred to as an analysis method.
  • The terms first intubation process image and second intubation process image are used herein only for distinction; both are whole-course images recorded during the tracheal intubation procedure.
  • FIG. 1A is a functional block diagram of a system for processing endotracheal intubation images according to an embodiment of the present invention
  • FIG. 1B is a schematic flow chart of a method for processing endotracheal intubation images according to an embodiment of the present invention.
  • the endotracheal intubation image processing system 1 of this embodiment includes a database 11 and an electronic device 12 .
  • The database 11 can store at least one first intubation process image, and a plurality of structural objects can be defined from it. Specifically, in order to carry out the analysis and processing needed to establish evaluation models and standards for intubation effectiveness, a database 11 including multiple structural objects must be established, and these structural objects are defined from at least one first intubation process image; preferably, they are defined from a plurality of first intubation process images.
  • For patients scheduled for general anesthesia with tracheal intubation who have signed the consent form, the whole intubation procedure is recorded into the database 11. The data used for analysis are selected from the database 11 and are cases of successful intubation performed by attending anesthesiologists. All patients were intubated with the Trachway Blade, although the method is not limited to this device. After intubation, an Intubation Difficulty Scale (IDS) analysis and an intubation time assessment are performed, so as to establish image recognition of the basic structures and features of successful intubation and to integrate, analyze, and automate the variation of the procedure intervals.
  • The first intubation process images whose intubation difficulty scale is zero are analyzed and used to build the model that defines the structural objects.
  • The structural objects are, for example but not limited to: lip (Lip), epiglottis (Epiglottis), pharynx (Laryngopharynx), glottis (Glottis), endotracheal tube (Endotracheal Tube), and the black marking line of the endotracheal tube (Endotracheal Tube marked black line).
  • The structural objects defined from the intubation images can differ between applications, and their number can also be greater than or less than 6.
  • The user can define a different number and set of structural objects from the intubation process images according to the needs of the effectiveness evaluation.
  • For example, 33 whole-course intubation cases may be selected from the database 11 of first intubation process images and screened by several senior specialists, who decompose the intubation procedure into time-sequenced stages and mark and confirm the defined structural objects in the images. This yields, for instance, lip: 27 images, epiglottis: 173 images, pharynx: 366 images, glottis: 377 images, endotracheal tube: 24 images, and endotracheal tube black marking line: 345 images, i.e. images of 6 structural objects appearing in time sequence. These images can be used to train an object recognition model based on YOLOv3 (Real-Time Object Detection), which is then used for target recognition in subsequent intubation images.
  • The first intubation process video can be cut into single frames at 30 fps, and object detection can then be performed on each frame. Any two detected objects define a stage intubation time, so the intubation video can be clipped into an intubation time sequence, from which the timing of each structural object in the intubation procedure can be analyzed.
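  The frame-to-time bookkeeping described above can be sketched as follows. This is only a minimal illustration: the patent does not specify an implementation, and the function names, the detector interface, and the mock detection data here are all assumptions. A 30 fps recording maps frame indices directly to timestamps, and the first frame in which the detector reports a structure gives that structure's time point:

```python
def frame_timestamp(frame_index: int, fps: float = 30.0) -> float:
    """Map a frame index of a 30 fps recording to seconds from the start."""
    return frame_index / fps

def first_appearance_times(detections_per_frame, fps: float = 30.0):
    """Given per-frame label sets (e.g. from a YOLOv3-style detector),
    return each label's first-appearance time point in seconds."""
    time_points = {}
    for index, labels in enumerate(detections_per_frame):
        for label in labels:
            if label not in time_points:  # keep only the first appearance
                time_points[label] = frame_timestamp(index, fps)
    return time_points

# Mock detector output: each entry is the set of structures found in one frame.
frames = [set(), {"lip"}, {"lip"}, {"lip", "epiglottis"}, {"epiglottis", "pharynx"}]
print(first_appearance_times(frames))
# lip first appears in frame 1 (1/30 s), epiglottis in frame 3 (0.1 s)
```

  In a real pipeline the per-frame label sets would come from running the trained detector on each extracted frame; only the timing arithmetic is shown here.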
  • In other words, a database 11 of images of the target structures and key steps of the intubation procedure is established first, and artificial intelligence (AI) can learn from the stored images to construct an AI recognition system that automatically recognizes the structures described above.
  • The whole-course intubation images are then deconstructed into stages according to the structural objects identified over time, so as to identify the time point and operational meaning of each structural object in the image sequence, and thereby obtain the stage intubation times and the intubation time sequence produced by cutting and stitching those stage times.
  • Specifically, the intubation process image can define n structural objects (n ≥ 2) in time sequence; the n structural objects correspond to n different time points in the image, and the time difference between any two structural objects, adjacent or not, can be defined as a stage intubation time.
  • Each intubation stage has its own duration and operational significance; cutting and stitching the stage intubation times in chronological order yields the intubation time sequence of the whole procedure.
  • Taking the structural objects in order of appearance, i.e., the first through the sixth structural object, at least 5 intubation stages with corresponding stage intubation times (for example, t1 to t5) can be deconstructed.
  • The stage intubation time t1 corresponds to the interval between the first and second structural objects; t2 to the interval between the second and third; t3 to the interval between the third and fourth; t4 to the interval between the fourth and fifth; and t5 to the interval between the fifth and sixth. There are 5 intubation stages in total, and the intubation time sequence of the 6 structural objects is obtained by cutting and stitching these five stage intubation times (t1, t2, t3, t4, t5) in order.
  • An intubation time sequence of the six structural objects can also be obtained from non-adjacent stages, for example the intervals between the first and third structural objects, between the third and fourth, and between the fourth and sixth.
  • From the stage intubation times corresponding to the 28 intubation stages, cutting and stitching some of the stage intubation times in chronological order yields an intubation time sequence of the whole intubation procedure, and so on.
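  The combinatorics of the stages can be sketched as follows; this is an illustration only, not the patent's implementation, and the numeric time values are made up. With n time points, any two of them define a stage intubation time, giving n(n-1)/2 possible stages, i.e. 15 for six time points:

```python
from itertools import combinations

def stage_intubation_times(time_points):
    """All stage intubation times: the time difference between any two of the
    n time points (adjacent or not), keyed by the pair of indices."""
    return {
        (i, j): time_points[j] - time_points[i]
        for i, j in combinations(range(len(time_points)), 2)
    }

# Hypothetical time points t1..t6 (seconds) for the six structural objects.
t = [0.0, 2.5, 5.0, 9.0, 14.0, 21.0]
stages = stage_intubation_times(t)
print(len(stages))     # 15 stages for 6 time points
print(stages[(0, 1)])  # adjacent stage, e.g. lip -> epiglottis: 2.5
print(stages[(3, 5)])  # non-adjacent stage, e.g. glottis -> black line: 12.0
```

  Note that six time points yield at most 15 stages, consistent with the count given later for the second intubation process image; a larger number of marked time points (e.g. including disappearance events) would yield correspondingly more stages.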
  • As the image labeling tool, the LabelImg annotation software is used to label the structural objects. Expert meetings are held, based on the structure and features of the objects, to select the structural objects, give each one a defining principle, and test, verify, and correct the definitions, thereby defining the 6 structural objects above. These 6 structural objects are then used to train the artificial intelligence; after repeated verification and correction, subsequent objects in other intubation images can be recognized automatically by the AI system with near-perfect accuracy, enabling automated effectiveness evaluation of tracheal intubation images.
  • the electronic device 12 is electrically connected to the database 11 .
  • The electronic device 12 can be a computer, a server, a mobile phone, or a tablet, without limitation.
  • The electrical connection between the electronic device 12 and the database 11 can be wireless or wired; a wireless connection is made, for example, through a Wi-Fi module, a Bluetooth module, or a mobile network (3G, 4G, or 5G).
  • Thereby, the data stored in the database 11 can be received, stored, and processed.
  • the electronic device 12 may include one or more processing units 121 and a storage unit 122 , and the one or more processing units 121 are electrically connected to the storage unit 122 .
  • FIG. 1A shows one processing unit 121 and one storage unit 122 as an example. The aforementioned database 11 can be located in the storage unit 122 or in a cloud device; alternatively, the database 11 can be located in an independent computer-readable storage medium (such as, but not limited to, an SSD, a USB drive, or any type of memory) or in a memory chip.
  • If the database 11 is located in a cloud device, the electronic device 12 must download the stored data from the cloud device to the storage unit 122 before the processing unit 121 can process and analyze it; if the database 11 is located in the storage unit 122, no download step is required.
  • If the database 11 is located in an independent computer-readable storage medium, the stored content can be read by the processing unit 121 once the medium is connected to the electronic device.
  • the processing unit 121 can access data stored in the storage unit 122 , and can include core control components of the electronic device 12 , such as at least one central processing unit (CPU) and memory, or other control hardware, software, or firmware.
  • The storage unit 122 can be a non-transitory computer-readable storage medium and can include, for example, at least one memory, memory card, memory chip, optical disc, video tape, computer tape, or any combination thereof.
  • The aforementioned memory may include read-only memory (ROM), flash memory, a field-programmable gate array (FPGA), a solid-state drive (SSD), other forms of memory, or a combination thereof.
  • The storage unit 122 can store at least one application, and the application can include one or more program instructions 1221. After the database 11 is established, when the one or more program instructions 1221 stored in the storage unit 122 are executed by the one or more processing units 121, the one or more processing units 121 at least perform: object recognition on the second intubation process image according to the defined structural objects, so as to obtain, in the second intubation process image, a plurality of target objects identical to the structural objects (step S02 of FIG. 1B); and confirming the time points at which the identified target objects appear in the second intubation process image, so as to obtain the stage intubation times and intubation time sequence of the target objects; wherein the time points define n time points, the time difference between any two target objects defines a stage intubation time (the second intubation process image yields in total a number of stage intubation times equal to or greater than n-1), and multiple stage intubation times are cut and stitched in chronological order to establish the intubation time sequence of the target objects in the second intubation process image (step S03 of FIG. 1B).
  • The one or more processing units 121 can further perform: drawing, according to the stage intubation times and intubation time sequence of the target objects, the intubation time series diagram and the intubation ability time series analysis diagram of the second intubation process image (step S04 of FIG. 2).
  • The one or more processing units 121 may further perform: confirming the time points at which the structural objects appear in the first intubation process image, so as to obtain the stage intubation times and intubation time sequence of the structural objects; wherein the time points define n time points, the time difference between any two structural objects defines a stage intubation time (the first intubation process image yields in total a number of stage intubation times equal to or greater than n-1), and multiple stage intubation times are cut and stitched in chronological order to establish the intubation time sequence of the structural objects in the first intubation process image (step S05 of FIG. 2); and drawing, according to the stage intubation times and intubation time sequence of the structural objects, the intubation time series diagram and the intubation ability time series analysis diagram of the first intubation process image (step S06 of FIG. 2).
  • The method for processing tracheal intubation images of the present invention may include steps S01 to S03.
  • Step S01 is: establishing a database 11 including a plurality of structural objects, wherein the database 11 stores at least one first intubation process image, and these structural objects are defined by the at least one first intubation process image.
  • In other words, at least one structural object of the first intubation process image (preferably multiple) is confirmed and defined, so as to establish a reference for subsequent recognition.
  • a plurality of first intubation process images whose intubation difficulty scale is zero are analyzed and a model is established to define these structural objects.
  • The structural objects in this embodiment appear in time sequence and are, for example but not limited to, the 6 structures mentioned above: lips, epiglottis, pharynx, glottis, endotracheal tube, and black marking line of the endotracheal tube (the number is not limited to 6). The 6 structural objects, the first intubation process image, the stage intubation times, and the intubation time sequence can all be stored in the database 11.
  • Step S02 is: performing object recognition in the second intubation process image according to the defined structural objects, so as to obtain a plurality of target objects identical to the structural objects in the second intubation process image.
  • The second intubation process images and the identified target objects can also be stored in the database 11.
  • For the second intubation process image, it is first necessary to recognize the target objects of the intubation procedure in that image, and the target objects recognized in the second intubation process image must be the same as the structural objects of the first intubation process image, so that the effectiveness of each intubation stage can be evaluated on the same basis.
  • each stage can represent a different operational definition and meaning, and can be evaluated or improved independently.
  • Here, the AI system trained above can perform target recognition on subsequent intubation process images and obtain multiple target objects (six or more, identical to the structural objects; when comparing, the same number of objects and the same stages should be used).
  • The second intubation process image is, for example, an image of the tracheal intubation procedure recorded while other physicians (such as, but not limited to, PGY residents) learn to perform tracheal intubation in the anesthesiology department.
  • Step S03 is: confirming the time points at which the identified target objects appear in the second intubation process image, so as to obtain the stage intubation times and intubation time sequence of the target objects; wherein the time points at which the target objects appear in the second intubation process image define n time points, the time difference between any two target objects defines a stage intubation time, and multiple stage intubation times are cut and stitched in chronological order to establish the intubation time sequence of the target objects in the second intubation process image.
  • the second intubation process image is also a time-series image of the intubation process
  • The target objects, such as, but not limited to, the lips, epiglottis, pharynx, glottis, endotracheal tube, and black marking line of the endotracheal tube, appear sequentially at time points in the second intubation process image, as described above. The stage intubation times of these target objects can then be obtained, along with the intubation time sequence composed from those stage intubation times, for use in the subsequent evaluation.
  • The time points at which the target objects appear sequentially in the second intubation process image can define, for example, 6 time points, and the time difference between any two target objects can be defined as a stage intubation time. The second intubation process image thus yields at least 5 stage intubation times (corresponding to 5 stages) and at most 15 (corresponding to 15 stages). The stage intubation times (and intubation stages) can be cut and stitched sequentially, in order of the target objects' appearance, to create the intubation time sequence of the second intubation process image.
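  The cut-and-stitch step above can be sketched as follows; the structure names follow the patent, while the numeric times and function names are illustrative assumptions. The adjacent stage times are the successive differences of the six time points, and stitching them back in chronological order recovers the elapsed-time sequence:

```python
STRUCTURES = ["lip", "epiglottis", "pharynx", "glottis",
              "endotracheal tube", "tube black line"]

def adjacent_stage_times(time_points):
    """Stage intubation times t1..t5 between consecutive structural objects."""
    return [b - a for a, b in zip(time_points, time_points[1:])]

def stitch(stage_times):
    """Stitch stage intubation times in chronological order into a cumulative
    intubation time sequence (elapsed time at each structure)."""
    sequence, elapsed = [0.0], 0.0
    for dt in stage_times:
        elapsed += dt
        sequence.append(elapsed)
    return sequence

# Hypothetical time points (seconds) at which the six structures appear.
t = [0.0, 2.5, 5.0, 9.0, 14.0, 21.0]
stages = adjacent_stage_times(t)  # [2.5, 2.5, 4.0, 5.0, 7.0]
print(dict(zip(zip(STRUCTURES, STRUCTURES[1:]), stages)))
print(stitch(stages))             # recovers the original time points
```

  The same `stitch` step works on any chronologically ordered subset of stage times, which is how a time sequence can also be built from non-adjacent stages.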
  • For example, (t2-t1) is the stage intubation time from the lip to the epiglottis stage; (t3-t2) from the epiglottis to the throat stage; (t6-t5) from the endotracheal tube to the endotracheal tube black marking line stage; (t3-t1) from the lip to the throat stage; (t5-t2) from the epiglottis to the endotracheal tube stage; and (t6-t1) from the lip to the endotracheal tube black marking line stage.
  • For example, a stage intubation time may be 7 seconds.
  • Note that the time difference between two target objects (or two structural objects) is not limited to objects adjacent in the time sequence; the stage intubation time can also be computed for two non-adjacent target objects (or structural objects) to generate the stage intubation time of the corresponding intubation stage, which is then used for the effectiveness evaluation of that stage.
  • For example, the time difference (t6-t4) from the appearance of the (non-adjacent) glottis (time point t4) to the disappearance of the black marking line of the endotracheal tube (time point t6) yields the stage intubation time of that intubation stage, whose effectiveness can then be evaluated; the other intubation stages can be handled in the same way.
  • FIG. 2 is a schematic diagram of further process steps of the method for processing tracheal intubation images of the present invention.
  • the processing method may further include steps S04 to S06 .
  • The chronological order of step S04 and step S05 (and step S06) is not limited: step S05 (and step S06) can be performed after step S04, step S04 can be performed after step S05 (and step S06), or step S04 and step S05 (and step S06) can be performed simultaneously.
  • step S05 is: confirm the time points when these structural objects appear in the first intubation process image, so as to obtain the stage intubation time and intubation time series of these structural objects;
  • the time points at which the structural objects appear in the first intubation process image define n time points, the time difference between any two structural objects is defined as a stage intubation time, and multiple stage intubation times are clipped and stitched in chronological order to establish the intubation time series of the first intubation process image.
  • the stage intubation times and intubation time series of the structural objects are obtained in the same way as for the second intubation process image of step S03 above: for example, 6 time points corresponding to the structural objects appearing in the first intubation process image, and 5 or more stage intubation times (up to, for example, 15), which are then clipped and stitched in chronological order to create the intubation time series of the structural objects of the first intubation process image.
  • step S06 is: according to the stage intubation time and the intubation time series of these structural objects, draw the intubation time series diagram and the intubation ability time series analysis diagram of the first intubation process image.
  • three senior attending anesthesiologists can jointly mark the above six structural objects and their corresponding appearance time points, establishing the intubation time series of the individual intubation stages and of the full standard airway intubation procedure as a reference for subsequent effectiveness comparison.
  • the results refer to the intubation time series diagram of the first intubation process image shown in FIG. 3A , and the intubation ability time series analysis diagram of the first intubation process image shown in FIG. 3B .
  • step S04 is: according to the stage intubation times and the intubation time series of the target objects, draw the intubation time-series diagram and the intubation-ability time-sequence analysis diagram of the second intubation process image.
  • according to the 6 target objects recognized in the second intubation process image obtained from the tracheal intubation performed by the PGY intern and their corresponding intubation time series (step S03), the stage intubation times and intubation time-series diagram of the second (novice) intubation process image shown in Fig. 4A, and the intubation-ability time-sequence analysis diagram shown in Fig. 4B, can be drawn.
  • the abscissa is time (seconds)
  • A is the last lip image (Lip last image)
  • B is the first epiglottis image (Epiglottis 1st image)
  • C is the first laryngeal image (1st larynx image)
  • D is the last image of the glottis (Last free glottis) without a tube
  • E is the image of the last black line (Last blackline)
  • the time spent in each stage of the different tracheal intubation images can be read from Figure 3A and Figure 4A. It is worth noting that not all targets (and their corresponding time points) need to appear in the intubation time-series diagram; depending on the user's evaluation approach, one or several targets may be omitted from the diagram.
  • each stage can represent a different operational definition and meaning, and can be evaluated or improved independently, for example by evaluating the effectiveness of only one intubation stage or of every intubation stage separately; of course, in different embodiments, the adjacent pairs of the 6 (n) targets can also be calculated separately to obtain 5 intubation stages (5 stage intubation times), or the calculation can be adjusted to the user's needs to produce more than 5 stage intubation times, after which the stage intubation time between one target object and another (e.g. the stages A to B, B to C, C to D and D to E above, or merged stages such as B to D or A to D) and its effectiveness can be evaluated.
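The bookkeeping behind "at least 5, at most 15" stages for 6 objects is simple pair counting: n-1 adjacent pairs, and n(n-1)/2 pairs in total. A small sketch (my own illustration, not code from the patent):

```python
def stage_counts(n):
    """For n structural objects appearing in sequence, return the number of
    adjacent-pair stages and the total number of object pairs, each of which
    can define a stage intubation time."""
    adjacent = n - 1
    all_pairs = n * (n - 1) // 2
    return adjacent, all_pairs

print(stage_counts(6))  # (5, 15), as in the six-object embodiment
print(stage_counts(8))  # (7, 28), as in the eight-object alternative
```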
  • the present invention imposes no limitation on the effectiveness evaluation; it may even be performed for only a single intubation stage.
  • FIG. 5 is a schematic flow chart of a method for evaluating the effectiveness of an endotracheal intubation according to an embodiment of the present invention.
  • the present invention also proposes a method for evaluating the effectiveness of endotracheal intubation, which can be applied to the above-mentioned image processing system 1 and method for endotracheal intubation.
  • the processing system 1 and method for tracheal intubation images have been described in detail above, and will not be further described here.
  • the effectiveness evaluation method for tracheal intubation of the present invention can be used for automatic evaluation of endotracheal intubation, and may include the above-mentioned treatment method (or step) for endotracheal intubation and the effectiveness evaluation step.
  • the processing steps (or method) of endotracheal intubation include steps S01 to S06, which have been described in detail above and will not be further described here.
  • the effect evaluation step is: according to the intubation time series of these target objects and these structural objects, perform the evaluation of the effect of tracheal intubation on the second intubation process image.
  • the evaluation of the effectiveness of tracheal intubation in the second intubation process image includes step S07 in Figure 5: comparing the stage intubation time and intubation time series diagram of the first intubation process image and the second intubation process image And a time-series analysis chart of the intubation ability to evaluate the intubation effect at each stage of the second intubation process image.
  • the stage intubation times, intubation time-series diagram and intubation-ability time-sequence analysis diagram of the first intubation process image are still exemplified by Fig. 3A and Fig. 3B above, while those of the second intubation process image are still exemplified by Fig. 4A and Fig. 4B above.
  • stage B to C is the process and time of lifting the epiglottis and exposing the laryngeal structure (stage intubation time about 2.5 seconds)
  • stage C to D is the process and time of adjusting the blade and applying appropriate force to expose the optimal glottic structure, and of guiding the endotracheal tube to the larynx (stage intubation time about 4 seconds)
  • stage D to E is the process and time of sliding the endotracheal tube from the larynx into the trachea until it is positioned (stage intubation time about 4.5 seconds), so the total tracheal intubation time of stages A to E (the stage intubation time difference) is about 14 seconds.
  • the meanings represented by A to E are as described above.
  • the intubation time of stage A to B is about 6 seconds
  • the intubation time of stage B to C is about 19 seconds
  • the intubation time of stage C to D is about 30 seconds
  • the intubation time of stage D to E is about 32 seconds
  • the total time for tracheal intubation from stages A to E is about 87 seconds. Comparing Fig. 4A with Fig. 3A, it is evident that the PGY intern's intubation took considerably longer, particularly in stages C to D and D to E, indicating quite insufficient technical ability in stages C to E.
  • the intubation images recorded at different times can be drawn as different groups of lines; one group of lines forms a quadrilateral, and each group represents one intubation procedure. It can also be seen from Figure 4B that most of these quadrilateral lines fall in area Z (the gray area in Figure 4B, where the intubation ability is lower than expected, for example a tracheal intubation ability below 75%), indicating that the PGY intern's initial intubation technique was insufficient and needs to be strengthened.
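A comparison of the kind described here, flagging the trainee stages that fall far behind the reference series, can be sketched as follows. The stage times are the approximate values read from Figs. 3A and 4A; the 2x threshold is an assumed cutoff for illustration, not one specified by the patent:

```python
# Sketch: compare a trainee's stage intubation times against a reference
# (expert) series and flag stages that are disproportionately slow.
expert  = {"A-B": 3.0, "B-C": 2.5, "C-D": 4.0, "D-E": 4.5}   # ~14 s total
trainee = {"A-B": 6.0, "B-C": 19.0, "C-D": 30.0, "D-E": 32.0} # ~87 s total

def weak_stages(ref, test, factor=2.0):
    """Return the stages where the test time exceeds factor x the reference."""
    return [s for s in ref if test[s] > factor * ref[s]]

print(weak_stages(expert, trainee))  # stages needing reinforced training
print(sum(trainee.values()))         # total trainee time in seconds
```

With these numbers the B-C, C-D and D-E stages are flagged, matching the observation that the later stages of the novice procedure consumed most of the extra time.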
  • the image processing method for tracheal intubation of the present invention, and the effectiveness evaluation method that includes it, can map the time points at which the recognized targets appear, and the time intervals between them, to the operation time spent in each stage of the airway intubation process, using them as effectiveness evaluation indices for airway intubation and as targets for exploring stage-specific risk factors.
  • the present invention can also be used to analyze the discrimination or learning curve when performing intubation with different tools, different levels of personnel, and different difficult intubation scores.
  • the characteristic of the present invention is that past image analysis of tracheal intubation focused mainly on the larynx (Cormack Grade) and could not distinguish the influence of other parts.
  • most importantly, through the calibration of multiple anatomical structures (target objects), the present invention develops a time-series processing system and method for tracheal intubation that deconstructs the traditional concept of each attempt being only an overall success or failure, decomposes the intubation process into different stages according to target object and time, automatically recognizes the target objects to separate the stages and their corresponding time spent, and thereby establishes a stage-by-stage constructive effectiveness evaluation model for tracheal intubation.
  • the present invention can develop the processing of artificial intelligence tracheal intubation images, and provide real-time time serial analysis and objective quantitative evaluation of the whole process of images.
  • the present invention establishes an innovative automatic evaluation system for tracheal intubation effectiveness, which solves the problem of difficult real-time auxiliary operation and training feedback for subjective judgment and manual interpretation at the present stage.
  • the present system and method can also serve the evaluation and development of individual learning-process techniques for tracheal intubation, the development and verification of new intubation tools or techniques, and the scoring of lesson plans for simulation training.
  • the method for processing tracheal intubation images of the present invention includes: building a database containing a plurality of structural objects, wherein the database stores at least one first intubation process image and the structural objects are defined from that image; performing object recognition on a second intubation process image according to the defined structural objects, to obtain in the second intubation process image a plurality of target objects identical to the structural objects; and confirming the time points at which the recognized target objects appear in the second intubation process image, to obtain the stage intubation times and intubation time series of the target objects; wherein the time points at which the target objects appear in the second intubation process image define n time points, the time difference between any two target objects is defined as a stage intubation time, and multiple stage intubation times are clipped and stitched in chronological order to establish the intubation time series of the target objects of the second intubation process image.
  • the proposed processing method and system for tracheal intubation images can provide the stage intubation times and intubation time series of full-procedure tracheal intubation images, and thus provide serialized processing and analysis with real-time stage segmentation of the intubation and time integration, so as to establish an effectiveness evaluation of tracheal intubation that solves the current problems of subjective judgment and manual interpretation, which make real-time operational assistance and training feedback difficult.


Abstract

A processing system (1) and method for tracheal intubation images. The image processing method includes: first building a database (11) containing images of the target structures and key steps of the tracheal intubation process, while performing machine learning on the images in the database (11) to construct a system (1) that automatically recognizes the target structures. The time difference between any two target structures in the intubation images is defined as a stage intubation time, and multiple stage intubation times are clipped and stitched in chronological order to build an intubation time series. The above serves as a method for evaluating tracheal intubation effectiveness and establishes a stage-by-stage effectiveness evaluation model for tracheal intubation.

Description

Processing method and system for tracheal intubation images and method for evaluating the effectiveness of tracheal intubation
Technical Field
The present invention relates to a processing method and system, and in particular to a processing method and system for tracheal intubation images and a method for evaluating the effectiveness of tracheal intubation.
Background Art
Tracheal intubation is a common high-risk, high-skill medical procedure. Moreover, tracheal intubation must be completed within an extremely short time; failure to complete it within a few minutes may cause major organ damage. Limited by intubation tools and techniques, intubation difficulty is currently assessed only by the operator's subjective judgment and the clinical intubation results (time spent, number of attempts, etc.), or by recognition of specific images. This causes considerable variation in the assessment of difficult intubation, makes integrated communication in research impossible, and leaves no objective way of measuring the outcomes, and the pass or fail, of the clinical teaching and training of this skill that every physician must possess.
In recent years, image-assisted intubation tools have become diverse and widespread in clinical use. Actual clinical experience shows, however, that the previously used methods of assessing intubation difficulty and of intubating cannot be directly replicated and applied to image-assisted intubation techniques. Existing practice uses a video laryngoscope or video stylet to record video synchronously during intubation, but the recorded images are used only for subsequent viewing and learning, or for analyzing local structural differences. There is still no systematic structural and time-series analysis of intubation process images that could serve as an application for intubation difficulty analysis or as an outcome evaluation of intubation teaching and training.
Summary of the Invention
In view of the clinical operation and training needs of high-risk, high-skill tracheal intubation, the present invention provides a processing method and system for tracheal intubation images that offers serialized processing and analysis of full-procedure images with real-time stage segmentation of the intubation and time integration, and thereby establishes a method for evaluating the effectiveness of tracheal intubation, solving the current problems of subjective judgment and manual interpretation, which make real-time operational assistance and training feedback difficult.
To achieve the above object, a processing method for tracheal intubation images according to the present invention includes: building a database containing a plurality of structural objects, wherein the database stores at least one first intubation process image and the structural objects are defined from the at least one first intubation process image; performing object recognition on a second intubation process image according to the defined structural objects, to obtain in the second intubation process image a plurality of target objects identical to the structural objects; and confirming the time points at which the recognized target objects appear in the second intubation process image, to obtain the stage intubation times and the intubation time series of the target objects; wherein the time points at which the target objects appear in the second intubation process image define n time points, the time difference between any two target objects is defined as a stage intubation time, and multiple stage intubation times are clipped and stitched in chronological order to establish the intubation time series of the target objects of the second intubation process image.
In one embodiment, first intubation process images whose Intubation Difficulty Scale score is zero are analyzed and used to build the model, so as to define the structural objects.
In one embodiment, the structural objects are selected from the group consisting of the lip, the epiglottis, the laryngopharynx, the glottis, the endotracheal tube, and the black marking line of the endotracheal tube.
In one embodiment, the processing method further includes: confirming the time points at which the structural objects appear in the first intubation process image, to obtain the stage intubation times and the intubation time series of the structural objects, wherein the time points at which the structural objects appear in the first intubation process image define n time points, the time difference between any two structural objects is defined as a stage intubation time, and multiple stage intubation times are clipped and stitched in chronological order to establish the intubation time series of the structural objects of the first intubation process image; and drawing, according to the stage intubation times and the intubation time series of the structural objects, an intubation time-series diagram and an intubation-ability time-sequence analysis diagram of the first intubation process image.
In one embodiment, the processing method further includes: drawing, according to the stage intubation times and the intubation time series of the target objects, an intubation time-series diagram and an intubation-ability time-sequence analysis diagram of the second intubation process image.
To achieve the above object, a processing system for tracheal intubation images according to the present invention includes a database and an electronic device. The database stores at least one first intubation process image, and a plurality of structural objects are defined from the at least one first intubation process image. The electronic device is electrically connected to the database and includes one or more processing units and a storage unit; the one or more processing units are electrically connected to the storage unit, and the storage unit stores one or more program instructions. When the one or more program instructions are executed by the one or more processing units, the one or more processing units perform: object recognition on a second intubation process image according to the defined structural objects, to obtain in the second intubation process image a plurality of target objects identical to the structural objects; and confirmation of the time points at which the recognized target objects appear in the second intubation process image, to obtain the stage intubation times and the intubation time series of the target objects; wherein the time points at which the target objects appear in the second intubation process image define n time points, the time difference between any two target objects is defined as a stage intubation time, and multiple stage intubation times are clipped and stitched in chronological order to establish the intubation time series of the target objects of the second intubation process image.
In one embodiment, the database resides in the storage unit or in a cloud device.
In one embodiment, the one or more processing units further perform: confirming the time points at which the structural objects appear in the first intubation process image, to obtain the stage intubation times and the intubation time series of the structural objects, wherein the time points at which the structural objects appear in the first intubation process image define n time points, the time difference between any two structural objects is defined as a stage intubation time, and multiple stage intubation times are clipped and stitched in chronological order to establish the intubation time series of the structural objects of the first intubation process image; and drawing, according to the stage intubation times and the intubation time series of the structural objects, an intubation time-series diagram and an intubation-ability time-sequence analysis diagram of the first intubation process image.
In one embodiment, the one or more processing units further perform: drawing, according to the stage intubation times and the intubation time series of the target objects, an intubation time-series diagram and an intubation-ability time-sequence analysis diagram of the second intubation process image.
To achieve the above object, a method for evaluating the effectiveness of tracheal intubation according to the present invention includes the above processing method; and performing a tracheal intubation effectiveness evaluation of the second intubation process image according to the stage intubation times and the intubation time series of the target objects and the structural objects.
In one embodiment, the tracheal intubation effectiveness evaluation includes: comparing the stage intubation times, the intubation time-series diagrams, and the intubation-ability time-sequence analysis diagrams of the first intubation process image and the second intubation process image, to evaluate the intubation effectiveness of each stage of the second intubation process image.
In summary, the processing method for tracheal intubation images of the present invention includes the steps of: building a database containing a plurality of structural objects, wherein the database stores at least one first intubation process image and the structural objects are defined from the at least one first intubation process image; performing object recognition on a second intubation process image according to the defined structural objects, to obtain in the second intubation process image a plurality of target objects identical to the structural objects; and confirming the time points at which the recognized target objects appear in the second intubation process image, to obtain the stage intubation times and the intubation time series of the target objects; wherein the time points at which the target objects appear in the second intubation process image define n time points, the time difference between any two target objects is defined as a stage intubation time, and multiple stage intubation times are clipped and stitched in chronological order to establish the intubation time series of the target objects of the second intubation process image. Accordingly, in view of the clinical operation and training needs of high-risk, high-skill tracheal intubation, the proposed processing method and system for tracheal intubation images can provide the stage intubation times and the intubation time series of full-procedure tracheal intubation images, and thus provide serialized processing and analysis with real-time stage segmentation of the intubation and time integration, so as to establish an effectiveness evaluation of tracheal intubation and solve the current problems of subjective judgment and manual interpretation, which make real-time operational assistance and training feedback difficult.
Brief Description of the Drawings
FIG. 1A is a functional block diagram of a processing system for tracheal intubation images according to an embodiment of the present invention.
FIG. 1B is a schematic flow chart of a processing method for tracheal intubation images according to an embodiment of the present invention.
FIG. 2 is a schematic flow chart of further steps of the processing method for tracheal intubation images of the present invention.
FIG. 3A and FIG. 3B are, respectively, the intubation time-series diagram and the intubation-ability time-sequence analysis diagram of a first intubation process image according to an embodiment of the present invention.
FIG. 4A and FIG. 4B are, respectively, the intubation time-series diagram and the intubation-ability time-sequence analysis diagram of a second intubation process image according to an embodiment of the present invention.
FIG. 5 is a schematic flow chart of a method for evaluating the effectiveness of tracheal intubation according to an embodiment of the present invention.
Detailed Description of the Embodiments
The processing method and system for tracheal intubation images and the method for evaluating the effectiveness of tracheal intubation according to embodiments of the present invention will be described below with reference to the accompanying drawings, in which like components are denoted by like reference numerals.
All tracheal intubation analysis cases included herein were enrolled in accordance with the principles approved by the Institutional Review Board (IRB NCKUH B-ER-107-088). In this document, the processing system may also be called an analysis system, and the processing method may also be called an analysis method. Furthermore, the terms first intubation process image and second intubation process image are used only for distinction; both are full-procedure videos recorded during tracheal intubation.
FIG. 1A is a functional block diagram of a processing system for tracheal intubation images according to an embodiment of the present invention, and FIG. 1B is a schematic flow chart of a processing method for tracheal intubation images according to an embodiment of the present invention.
Referring to FIG. 1A and FIG. 1B, the processing system 1 for tracheal intubation images of this embodiment includes a database 11 and an electronic device 12.
The database 11 can store at least one first intubation process image, and a plurality of structural objects can be defined from the at least one first intubation process image. Specifically, in order to perform analysis and processing and to establish an evaluation model and standard for tracheal intubation effectiveness, a database 11 containing a plurality of structural objects must first be built, and these structural objects are defined from at least one first intubation process image; preferably, they are defined from a plurality of first intubation process images.
In this embodiment, patients were scheduled for surgery under general anesthesia with tracheal intubation, and after the informed consent form was signed, the full-procedure video recorded during intubation was entered into the database 11. The information used for analysis was selected from the database 11 and consisted of cases successfully intubated by attending anesthesiologists. All patients were intubated with the Trachway Blade (although the invention is not limited to the Trachway Blade), the entire intubation process was recorded by the camera lens preset at the front end of the blade, and after intubation an Intubation Difficulty Scale (IDS) analysis and an evaluation of the intubation time were performed, so as to establish image recognition of the basic structures or features of successful intubation while integrating and automating the analysis of the variation of the process intervals.
In this embodiment, first intubation process images in the database 11 whose Intubation Difficulty Scale score is zero (i.e., easy intubation cases) are analyzed and used to build the model, so as to define the structural objects. These structural objects are, for example but not limited to, selected from any group consisting of the lip, the epiglottis, the laryngopharynx, the glottis, the endotracheal tube, and the black marking line of the endotracheal tube (the tube carries, for example, two circular black lines). In this embodiment there are 6 structural objects in total, namely the group consisting of the lip, epiglottis, laryngopharynx, glottis, endotracheal tube, and endotracheal-tube black marking line; however, this is not limiting. In different embodiments, the structural objects defined from the tracheal intubation images may differ, and their number may be greater or less than 6; users may, according to their evaluation needs, define structural objects differing in number and structure from the above from the intubation process images.
In some embodiments, for example 33 full-procedure intubation video cases can be selected from the database 11 storing the first intubation process images and screened by several senior specialist physicians, so as to deconstruct the intubation process into different stages according to the time series and to annotate and confirm each defined structural object in the images. In total, for example, images of 6 kinds of structural objects appearing in time sequence were screened out: lip: 27 images, epiglottis: 173, laryngopharynx: 366, glottis: 377, endotracheal tube: 24, endotracheal-tube black marking line: 345. These structural-object images can be used to train an object recognition model based on YOLOv3 (Real-Time Object Detection) for subsequent object recognition in other intubation videos. In some embodiments, the file of the first intubation process image can first be split into single frames at 30 fps before object detection is performed; since any two objects can be combined into one stage intubation time, the intubation video can be clipped into an intubation time series, thereby resolving the time sequence of each structural object in the tracheal intubation procedure.
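The frame-level pipeline described here (split the video into 30 fps frames, detect objects per frame, then turn detections into appearance time points) could look roughly like the sketch below; the per-frame detections are hypothetical stand-ins for the output of the trained YOLOv3 model, and the labels and frame numbers are illustrative assumptions:

```python
# Sketch: given per-frame detections, record the first and last frame in
# which each object appears and convert frame indices to seconds
# (video split into single frames at 30 fps).
FPS = 30

def appearance_times(frame_detections):
    """frame_detections: iterable of (frame_index, set_of_detected_labels).
    Returns {label: (first_seen_seconds, last_seen_seconds)}."""
    seen = {}
    for idx, labels in frame_detections:
        for label in labels:
            first, _ = seen.get(label, (idx, idx))
            seen[label] = (first, idx)
    return {lab: (f / FPS, l / FPS) for lab, (f, l) in seen.items()}

# Hypothetical detector output for a handful of frames:
frames = [(0, {"lip"}), (90, {"epiglottis"}), (91, {"epiglottis"}),
          (165, {"laryngopharynx", "epiglottis"})]
print(appearance_times(frames)["epiglottis"])
```

The resulting first/last appearance times are exactly the time points from which the stage intubation times and the intubation time series are then computed.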
In other words, a database 11 containing images of the target structures and key steps of the tracheal intubation process is built first, and artificial intelligence (AI) learns from the images stored in the database 11 to construct an AI recognition system that can automatically recognize the above structures. Herein, the video of the entire intubation process is deconstructed into stages according to the structural objects recognized over time, thereby identifying the corresponding time point and operational meaning of each structural object in the image sequence, and obtaining the stage intubation times and the intubation time series formed by clipping and stitching those stage intubation times. For example, if the intubation process image defines n structural objects appearing in time sequence (n >= 2), and these n structural objects correspond in order to n different time points in the video, then the time-point difference between any two structural objects, adjacent or not, can define one stage intubation time.
Taking n = 6 as an example: since any two structural objects can define one stage intubation time, the entire intubation process can be deconstructed chronologically into at least 5 (6-1=5) intubation stages corresponding to 5 stage intubation times, and at most 15 (6*5/2=15) intubation stages corresponding to 15 stage intubation times. Each intubation stage has its temporal and operational meaning, and clipping and stitching the stage intubation times in time order yields the intubation time series of the whole intubation process. When n = 6, the structural objects in order of appearance (the first to the sixth structural object) can be deconstructed into at least 5 intubation stages corresponding to 5 stage intubation times (e.g., t1 to t5). Here, stage intubation time t1 corresponds to the interval between the first and second structural objects, t2 to that between the second and third, t3 to that between the third and fourth, t4 to that between the fourth and fifth, and t5 to that between the fifth and sixth structural objects, for a total of 5 intubation stages; clipping and stitching these five stage intubation times (t1, t2, t3, t4, t5) in order yields the intubation time series of the 6 structural objects. If, for example, 3 stage intubation times, namely (t1+t2), t3 and (t4+t5), are clipped and stitched in order, the intubation time series of the 6 structural objects is likewise obtained, except that these correspond respectively to the intervals between the first and third, the third and fourth, and the fourth and sixth structural objects. In different embodiments, if n = 8, the entire intubation process can be deconstructed into at least 7 (8-1=7) intubation stages corresponding to 7 stage intubation times and at most 28 (8*7/2=28) intubation stages corresponding to 28 stage intubation times, and clipping and stitching certain stage intubation times in time order yields the intubation time series of the whole process, and so on.
In this embodiment, the image annotation tool LabelImg was used as the annotation tool for the objects. Expert meetings were convened according to the structure and characteristics of the objects to select the structural objects, assign defining principles to each structural object, and perform test verification and revision, thereby defining the above 6 structural objects. These 6 structural objects were then used to train the artificial intelligence; after repeated verification and revision, the objects in subsequent intubation videos can be recognized automatically by the AI system with near-perfect accuracy, thereby achieving automated effectiveness evaluation of tracheal intubation images.
To bring the AI recognition accuracy to such a level, repeated verification and revision steps must be performed on the AI to verify the recognition accuracy of the objects. Herein, in addition to testing (verifying) the AI system with the plurality of first intubation process images from which the structural objects were defined, other intubation process images (with IDS = 0 or IDS != 0) are further used for testing (verification), and the AI recognition capability is continuously trained and revised so that the object recognition accuracy of the AI system can approach perfection.
It is also worth explaining the temporal meaning of the intubation images of each anatomical structure during tracheal intubation. Lip: the starting point of the intubation procedure; disappearance of the lip indicates that the camera lens has entered the oral cavity. Epiglottis: the camera lens has entered the oral cavity, passed the tongue, and correctly reached the base of the tongue; for beginners this means the blade can be slid to the base of the tongue, the basic movement is correct, and the blade has not deviated too far from the midline; a blade at the very center affects the range of movement of the epiglottis. Glottis: after the epiglottis is seen, the blade position, force, and angle are adjusted to obtain the best view of the open glottis. Larynx (superior view): whether the glottis can quickly be seen from above the larynx is, in the anesthesiologist's mind, an indicator of difficult intubation, but past intubation videos could not specifically mark this time. Arytenoid commissure (AC): the earliest laryngopharyngeal structure exposed during intubation; its shape differs from the esophageal opening, making it an important anatomical structure for distinguishing the esophageal and tracheal openings. Front end of the endotracheal tube: the appearance of the endotracheal tube in the field of view means the tube can now be delivered from the opening to the laryngopharynx. Middle section of the endotracheal tube: the endotracheal tube is aligned with the laryngopharynx, confirming that it can be smoothly guided to the larynx. Black marking line of the endotracheal tube: disappearance of the black marking line means the tube has reached its position and the intubation procedure is complete.
Referring again to FIG. 1A, the electronic device 12 is electrically connected to the database 11. The electronic device 12 may be, without limitation, a computer, a server, a mobile phone, or a tablet. In some embodiments, the electrical connection between the electronic device 12 and the database 11 may be wireless or wired, for example via a Wi-Fi module, a Bluetooth module, or a mobile network (3G, 4G, or 5G), thereby receiving, storing, and processing the data stored in the database 11. The electronic device 12 may include one or more processing units 121 and a storage unit 122, the one or more processing units 121 being electrically connected to the storage unit 122. FIG. 1A takes one processing unit 121 and one storage unit 122 as an example, and the aforementioned database 11 may reside in the storage unit 122 or in a cloud device; alternatively, the database 11 may reside on an independent computer-readable storage medium (for example but not limited to a solid-state drive, a USB drive, or any type of memory) or memory chip. When the database 11 resides in a cloud device, the electronic device 12 must download the data stored in the database 11 from the cloud device to the storage unit 122 before the processing unit 121 performs processing and analysis; if the database 11 resides in the storage unit 122, no download step is needed. When the database 11 resides on an independent computer-readable storage medium, its stored content can be read by the processing unit 121 once the medium is plugged into the electronic device.
The processing unit 121 can access the data stored in the storage unit 122 and may contain the core control components of the electronic device 12, for example at least one central processing unit (CPU) and memory, or other control hardware, software, or firmware. The storage unit 122 may be a non-transitory computer-readable storage medium, for example comprising at least one memory, memory card, memory chip, optical disc, video tape, or computer tape, or any combination thereof. In some embodiments, the aforementioned memory may comprise read-only memory (ROM), flash memory, a field-programmable gate array (FPGA), a solid-state disk (SSD), other forms of memory, or a combination thereof.
The storage unit 122 can store at least one application program, which may contain one or more program instructions 1221. After the above database 11 has been built, and when the one or more program instructions 1221 of the application stored in the storage unit 122 are executed by the one or more processing units 121, the one or more processing units 121 can at least perform: object recognition of a second intubation process image according to the defined structural objects, to obtain in the second intubation process image a plurality of target objects identical to the structural objects (step S02 of FIG. 1B); and confirmation of the time points at which the recognized target objects appear in the second intubation process image, to obtain the stage intubation times and the intubation time series of the target objects; wherein the time points at which the target objects appear in the second intubation process image can define n time points, the time difference between any two target objects is defined as a stage intubation time (a total of (n-1) or more stage intubation times can be obtained from the second intubation process image), and multiple stage intubation times are clipped and stitched in chronological order to establish the intubation time series of the target objects of the second intubation process image (step S03 of FIG. 1B).
In addition, the one or more processing units 121 can further perform: drawing, according to the stage intubation times and the intubation time series of the target objects, the intubation time-series diagram and the intubation-ability time-sequence analysis diagram of the second intubation process image (step S04 of FIG. 2). The one or more processing units 121 can further perform: confirming the time points at which the structural objects appear in the first intubation process image, to obtain the stage intubation times and the intubation time series of the structural objects, wherein the time points at which the structural objects appear in the first intubation process image can define n time points, the time difference between any two structural objects is defined as a stage intubation time (a total of (n-1) or more stage intubation times can be obtained from the first intubation process image), and multiple stage intubation times are clipped and stitched in chronological order to establish the intubation time series of the structural objects of the first intubation process image (step S05 of FIG. 2); and drawing, according to the stage intubation times and the intubation time series of the structural objects, the intubation time-series diagram and the intubation-ability time-sequence analysis diagram of the first intubation process image (step S06 of FIG. 2).
Steps S02 to S06 are described in detail below.
As shown in FIG. 1B, the processing method for tracheal intubation images of the present invention may include steps S01 to S03.
Step S01 is: building a database 11 containing a plurality of structural objects, wherein the database 11 stores at least one first intubation process image and the structural objects are defined from the at least one first intubation process image. As described above, the structural objects of at least one (preferably a plurality of) first intubation process image are first confirmed and defined, thereby establishing the benchmark for subsequent recognition. In this embodiment, a plurality of first intubation process images with an Intubation Difficulty Scale score of zero are analyzed and used to build the model, so as to define the structural objects. The structural objects of this embodiment, appearing in time sequence, include for example but are not limited to the above 6 target structures, namely the lip, epiglottis, laryngopharynx, glottis, endotracheal tube, and endotracheal-tube black marking line (not limited to 6); these 6 structural objects, together with the first intubation process image, the stage intubation times, and the intubation time series, can all be stored in the database 11.
Step S02 is: performing object recognition on a second intubation process image according to the defined structural objects, to obtain in the second intubation process image a plurality of target objects identical to the structural objects. The second intubation process image and the recognized target objects can also be stored in the database 11. Specifically, to evaluate the intubation effectiveness of a subsequent intubation video (i.e., the second intubation process image), object recognition of the intubation process must first be performed on the second intubation process image, and the target objects recognized in the second intubation process image must be identical to the structural objects of the first intubation process image, so that the stage-by-stage effectiveness of the intubation process can be evaluated on the same basis. Herein, each stage can represent a different operational definition and meaning and can be evaluated or improved independently. In some embodiments, the AI system trained as described above can perform object recognition on subsequent intubation process videos, thereby obtaining a plurality of target objects with the same structures as the structural objects (likewise 6 or more, but comparison must be made with the same number of objects and the same stages). In this embodiment, the second intubation process image is, for example, a tracheal intubation process video obtained by another physician (for example but not limited to a PGY intern) learning hands-on tracheal intubation in the anesthesiology department.
Step S03 is: confirming the time points at which the recognized target objects appear in the second intubation process image, to obtain the stage intubation times and the intubation time series of the target objects; wherein the time points at which the target objects appear in the second intubation process image define n time points, the time difference between any two target objects is defined as a stage intubation time, and multiple stage intubation times are clipped and stitched in chronological order to establish the intubation time series of the target objects of the second intubation process image. Since the second intubation process image is also a time-series video of the intubation process, the time points at which each target object (for example the lip, epiglottis, laryngopharynx, glottis, endotracheal tube, and endotracheal-tube black marking line, but not limited thereto) appears in sequence in the second intubation process image can be obtained, as described above; the stage intubation times of the target objects, and the intubation time series clipped and stitched from those stage intubation times, can then be obtained for subsequent evaluation.
In this embodiment, the time points at which the target objects appear in sequence in the second intubation process image can define, for example, 6 time points, and the time difference between any two target objects can be defined as one stage intubation time; the second intubation process image thus yields at least 5 stage intubation times (corresponding to 5 stages) and at most 15 (corresponding to 15 stages), and these stage intubation times (and intubation stages) can be clipped and combined in the order in which the target objects appear, to establish the intubation time series of the second intubation process image. In other words, if the 6 (n=6) target objects, namely the lip, epiglottis, laryngopharynx, glottis, endotracheal tube, and endotracheal-tube black marking line, appear in the video in sequence at time points t1, t2, ..., t6, then (t2-t1) is the stage intubation time of the lip-to-epiglottis stage, (t3-t2) is that of the epiglottis-to-laryngopharynx stage, ..., and (t6-t5) is that of the endotracheal tube to endotracheal-tube black marking line stage; moreover, (t3-t1) is the stage intubation time of the lip-to-laryngopharynx stage, (t5-t2) that of the epiglottis-to-endotracheal-tube stage, (t6-t1) that of the lip to endotracheal-tube black marking line stage, and so on, yielding 5 or more stage intubation times (at most 15 in this embodiment).
For example, if the first epiglottis image (Epiglottis 1st image) and the first glottis image (Glottis 1st image) are 7 seconds apart, the epiglottis-to-glottis stage took 7 seconds (i.e., the stage intubation time is 7 seconds), and so on. Notably, the time difference between two target objects (or two structural objects) is not limited to objects adjacent in the time series; the stage intubation time can also be calculated for two non-adjacent target objects (or structural objects) to generate the stage intubation time of the corresponding intubation stage, which is then used for the effectiveness evaluation of that stage. For example, the time difference (t6-t4) from the appearance of the (non-adjacent) glottis (time point t4) to the disappearance of the endotracheal-tube black marking line (time point t6) can be calculated, alone or together with other pairs, to obtain the stage intubation time (t6-t4) of that intubation stage, whose effectiveness is then evaluated; the other intubation stages can be handled in the same way.
Referring to FIG. 2, which is a schematic flow chart of further steps of the processing method for tracheal intubation images of the present invention: in addition to the above steps S01 to S03, the processing method may further include steps S04 to S06. Herein, the chronological order of step S04 and step S05 (and step S06) is not limited: step S05 (and step S06) can be performed after step S04; step S04 can be performed after step S05 (and step S06); or step S04 and step S05 (and step S06) can be performed simultaneously.
Steps S05 and S06 are described first, followed by step S04. Step S05 is: confirming the time points at which the structural objects appear in the first intubation process image, to obtain the stage intubation times and the intubation time series of the structural objects; wherein the time points at which the structural objects appear in the first intubation process image define n time points, the time difference between any two structural objects is defined as a stage intubation time, and multiple stage intubation times are clipped and stitched in chronological order to establish the intubation time series of the structural objects of the first intubation process image. Likewise, since the first intubation process image is also a time-series video of the intubation process, the same method used in step S03 to obtain the stage intubation times and intubation time series of the target objects of the second intubation process image can be applied to obtain, for example, the 6 time points corresponding to the appearance of each structural object in the first intubation process image and 5 or more (at most, for example, 15) stage intubation times, which are then clipped and stitched in chronological order to establish the intubation time series of the structural objects of the first intubation process image.
Step S06 is: drawing, according to the stage intubation times and the intubation time series of the structural objects, the intubation time-series diagram and the intubation-ability time-sequence analysis diagram of the first intubation process image. In this embodiment, based on the (first) intubation image files in the database 11 with IDS equal to 0, three senior attending anesthesiologists can jointly annotate the above 6 structural objects and their corresponding appearance time points, establishing the intubation time series of the individual intubation stages and of the full standard airway intubation procedure as the reference for subsequent effectiveness comparison. For the results, refer to the intubation time-series diagram of the first intubation process image shown in FIG. 3A and the intubation-ability time-sequence analysis diagram of the first intubation process image shown in FIG. 3B.
Step S04 is: drawing, according to the stage intubation times and the intubation time series of the target objects, the intubation time-series diagram and the intubation-ability time-sequence analysis diagram of the second intubation process image. Likewise, from the 6 target objects recognized in the second intubation process image obtained from a tracheal intubation performed, for example, by a PGY intern, and their corresponding intubation time series (step S03), the stage intubation times and intubation time-series diagram of the second (novice) intubation process image shown in FIG. 4A, and the intubation-ability time-sequence analysis diagram of the second (novice) intubation process image shown in FIG. 4B, can be drawn.
In FIG. 3A and FIG. 4A, the abscissa is time (seconds); A is the last lip image (Lip last image), B the first epiglottis image (Epiglottis 1st image), C the first larynx image (1st larynx image), D the last tube-free glottis image (Last free glottis), and E the last black line image (Last blackline). The time spent in each stage of the different tracheal intubation videos can be read from FIG. 3A and FIG. 4A. Notably, not all objects (and their corresponding time points) need appear in the intubation time-series diagram; depending on the user's evaluation approach, one or several objects may be omitted from the diagram.
In the embodiments of FIG. 3A and FIG. 4A, there are 6 objects but only 5 time points and 4 intubation stages are drawn (the 4 stage intubation times being stages A to B, B to C, C to D, and D to E). Each stage can represent a different operational definition and meaning and can be evaluated or improved independently, for example by evaluating the effectiveness of only one intubation stage or of all intubation stages separately. Of course, in different embodiments the adjacent pairs of the 6 (n) objects can also be calculated separately to obtain 5 intubation stages (5 stage intubation times); or the calculation can be adjusted to the user's needs to produce more than 5 stage intubation times, after which the stage intubation time between one target object and another (e.g., the stages A to B, B to C, C to D, and D to E above, or merged stages such as B to D, A to D, ...) and its effectiveness can be evaluated, or the operational meaning of every stage can be deconstructed separately with each stage evaluated on its own, or the effectiveness evaluation can even be performed for only one intubation stage; the present invention imposes no limitation.
FIG. 5 is a schematic flow chart of a method for evaluating the effectiveness of tracheal intubation according to an embodiment of the present invention.
The present invention also proposes a method for evaluating the effectiveness of tracheal intubation, applicable to the above processing system 1 and method for tracheal intubation images. The processing system 1 and method for tracheal intubation images have been described in detail above and are not described further here. The effectiveness evaluation method for tracheal intubation of the present invention can be used for automated evaluation of tracheal intubation and may include the above processing method (or steps) for tracheal intubation and an effectiveness evaluation step.
As shown in FIG. 5, the processing steps (or method) for tracheal intubation include steps S01 to S06, described in detail above and not repeated here. The effectiveness evaluation step is: performing a tracheal intubation effectiveness evaluation of the second intubation process image according to the intubation time series of the target objects and the structural objects. Herein, the tracheal intubation effectiveness evaluation of the second intubation process image includes step S07 of FIG. 5: comparing the stage intubation times, the intubation time-series diagrams, and the intubation-ability time-sequence analysis diagrams of the first intubation process image and the second intubation process image, to evaluate the intubation effectiveness of each stage of the second intubation process image. Herein, the stage intubation times, intubation time-series diagram, and intubation-ability time-sequence analysis diagram of the first intubation process image are exemplified by FIG. 3A and FIG. 3B above, and those of the second intubation process image by FIG. 4A and FIG. 4B above.
Referring again to the stage intubation times and intubation time-series diagram of the first intubation process image in FIG. 3A: stage A to B represents the process and time for the intubation blade to correctly enter the oral cavity and reach below the base of the tongue (stage intubation time about 3 seconds); stage B to C is the process and time of lifting the epiglottis and exposing the laryngeal structure (stage intubation time about 2.5 seconds); stage C to D is the process and time of adjusting the blade and applying appropriate force to expose the optimal glottic structure, and of guiding the endotracheal tube to the larynx (stage intubation time about 4 seconds); and stage D to E is the process and time of sliding the endotracheal tube from the larynx into the trachea until it is positioned (stage intubation time about 4.5 seconds). The total tracheal intubation time of stages A to E (the stage intubation time difference) is therefore about 14 seconds. Moreover, the stage intubation times and intubation-ability time-sequence analysis diagram of the first intubation process image in FIG. 3B show that the tracheal intubation ability of the senior attending anesthesiologists all falls between 75% and 100% (i.e., within the gray area shown in FIG. 3B), indicating quite good intubation technique.
In the stage intubation times and intubation time-series diagram of the second (novice) intubation process image in FIG. 4A, A to E have the meanings described above. In FIG. 4A, the stage intubation time of A to B is about 6 seconds, that of B to C about 19 seconds, that of C to D about 30 seconds, and that of D to E about 32 seconds, so the total tracheal intubation time of stages A to E is about 87 seconds. Comparing FIG. 4A with FIG. 3A, it is evident that the PGY intern's intubation process took considerably longer, particularly in stages C to D and D to E, indicating quite insufficient technical ability in the tracheal intubation of stages C to E.
In the intubation-ability time-sequence analysis diagram of the second (novice) intubation process image in FIG. 4B, intubation videos recorded at different times can be drawn as different groups of lines; one group of lines forms a quadrilateral, and each group represents one intubation procedure. FIG. 4B also shows that most of these quadrilateral lines fall in area Z (the gray area of FIG. 4B, where the intubation ability is lower than expected, for example a tracheal intubation ability below 75%), indicating that the PGY intern's initial intubation technique was insufficient and needs strengthening. Moreover, relative to FIG. 3B, the ability in the first-epiglottis (Epiglottis) to first-glottis (Glottis) stage in FIG. 4B is also insufficient, and the PGY intern's intubation proceeded intermittently (as in stages C to D and D to E of FIG. 4A), clearly showing that the technique of stages C to E was not yet proficient; the PGY intern also showed sporadic insufficiency at other time points. Through the above analysis and comparison, in addition to evaluating the PGY intern's tracheal intubation skill, the stages of insufficient ability and the areas needing reinforced training can be identified, thereby achieving the purpose of automated evaluation of tracheal intubation effectiveness.
In summary, from the above disclosure, the processing method for tracheal intubation images of the present invention, and the effectiveness evaluation method for tracheal intubation that includes it, can map the time points at which the recognized objects appear, and the time intervals between them, to the operation time spent in each stage of the airway intubation process, using them as effectiveness evaluation indices for airway intubation and as targets for exploring stage-specific risk factors. Comparison against a standardized airway intubation time sequence can be used to understand the learning outcomes of novice physicians and to provide further detailed learning feedback for each stage. The present invention can also be used to analyze the discrimination or learning curves of intubations performed with different tools, by personnel of different levels, and at different difficult-intubation scores.
The characteristic of the present invention is that past image analysis of tracheal intubation focused mainly on the larynx (Cormack Grade) and could not distinguish the influence of other parts. Most importantly, through the calibration of multiple anatomical structure features (target objects), the present invention develops a time-series processing system and method for tracheal intubation that deconstructs the traditional concept of each attempt being only an overall success or failure, decomposes the intubation process into different stages according to object and time, automatically recognizes the objects to separate the stages and their corresponding time spent, and thereby establishes a stage-by-stage constructive effectiveness evaluation model for tracheal intubation.
In view of the clinical operation and training needs of high-risk, high-skill tracheal intubation, the present invention can develop artificial-intelligence processing of tracheal intubation images, providing real-time serialized time analysis and objective quantitative evaluation of full-procedure images. Based on the above processing (analysis) method, the present invention further establishes an innovative automated evaluation system for tracheal intubation effectiveness, solving the current problems of subjective judgment and manual interpretation, which make real-time operational assistance and training feedback difficult. The present system and method can also serve the evaluation and development of individual learning-process techniques for tracheal intubation, the development and verification of new intubation tools or techniques, and the scoring of lesson plans for simulation training.
In summary, the processing method for tracheal intubation images of the present invention includes the steps of: building a database containing a plurality of structural objects, wherein the database stores at least one first intubation process image and the structural objects are defined from the at least one first intubation process image; performing object recognition on a second intubation process image according to the defined structural objects, to obtain in the second intubation process image a plurality of target objects identical to the structural objects; and confirming the time points at which the recognized objects appear in the second intubation image, to obtain the stage intubation times and the intubation time series of the objects; wherein the time points at which the target objects appear in the second intubation process image define n time points, the time difference between any two target objects is defined as a stage intubation time, and multiple stage intubation times are clipped and stitched in chronological order to establish the intubation time series of the target objects of the second intubation process image. Accordingly, in view of the clinical operation and training needs of high-risk, high-skill tracheal intubation, the proposed processing method and system for tracheal intubation images can provide the stage intubation times and the intubation time series of full-procedure tracheal intubation images, and thus provide serialized processing and analysis with real-time stage segmentation and time integration, so as to establish an effectiveness evaluation of tracheal intubation and solve the current problems of subjective judgment and manual interpretation, which make real-time operational assistance and training feedback difficult.
The above description is illustrative only and not restrictive. Any equivalent modification or change that does not depart from the spirit and scope of the present invention shall be included in the appended claims.
Reference Signs
1: processing system for tracheal intubation images
11: database
12: electronic device
121: processing unit
122: storage unit
1221: program instructions
A: last lip image
B: first epiglottis image
C: first larynx image
D: last tube-free glottis image
E: last black line image
S01, S02, S03, S04, S05, S06, S07: steps
Z: area

Claims (13)

  1. A processing method for tracheal intubation images, comprising:
    building a database containing a plurality of structural objects, wherein the database stores at least one first intubation process image and the structural objects are defined from the at least one first intubation process image;
    performing object recognition on a second intubation process image according to the defined structural objects, to obtain in the second intubation process image a plurality of target objects identical to the structural objects; and
    confirming the time points at which the recognized target objects appear in the second intubation process image, to obtain stage intubation times and an intubation time series of the target objects;
    wherein the time points at which the target objects appear in the second intubation process image define n time points, the time difference between any two of the target objects is defined as a stage intubation time, and multiple stage intubation times are clipped and stitched in chronological order to establish the intubation time series of the target objects of the second intubation process image.
  2. The processing method according to claim 1, wherein the first intubation process image whose Intubation Difficulty Scale score is zero is analyzed and used to build a model, so as to define the structural objects.
  3. The processing method according to claim 1, wherein the structural objects are selected from the group consisting of the lip, the epiglottis, the laryngopharynx, the glottis, the endotracheal tube, and the black marking line of the endotracheal tube.
  4. The processing method according to claim 1, further comprising:
    confirming the time points at which the structural objects appear in the first intubation process image, to obtain stage intubation times and an intubation time series of the structural objects, wherein the time points at which the structural objects appear in the first intubation process image define n time points, the time difference between any two of the structural objects is defined as a stage intubation time, and multiple stage intubation times are clipped and stitched in chronological order to establish the intubation time series of the structural objects of the first intubation process image; and
    drawing, according to the stage intubation times and the intubation time series of the structural objects, an intubation time-series diagram and an intubation-ability time-sequence analysis diagram of the first intubation process image.
  5. The processing method according to claim 4, further comprising:
    drawing, according to the stage intubation times and the intubation time series of the target objects, an intubation time-series diagram and an intubation-ability time-sequence analysis diagram of the second intubation process image.
  6. A processing system for tracheal intubation images, comprising:
    a database, which stores at least one first intubation process image, a plurality of structural objects being defined from the at least one first intubation process image; and
    an electronic device electrically connected to the database, the electronic device comprising one or more processing units and a storage unit, the one or more processing units being electrically connected to the storage unit, the storage unit storing one or more program instructions which, when executed by the one or more processing units, cause the one or more processing units to perform:
    object recognition on a second intubation process image according to the defined structural objects, to obtain in the second intubation process image a plurality of target objects identical to the structural objects; and
    confirmation of the time points at which the recognized target objects appear in the second intubation process image, to obtain stage intubation times and an intubation time series of the target objects;
    wherein the time points at which the target objects appear in the second intubation process image define n time points, the time difference between any two of the target objects is defined as a stage intubation time, and multiple stage intubation times are clipped and stitched in chronological order to establish the intubation time series of the target objects of the second intubation process image.
  7. The processing system according to claim 6, wherein the database resides in the storage unit or in a cloud device.
  8. The processing system according to claim 6, wherein the first intubation process image whose Intubation Difficulty Scale score is zero is analyzed and used to build a model, so as to define the structural objects.
  9. The processing system according to claim 6, wherein the structural objects are selected from the group consisting of the lip, the epiglottis, the laryngopharynx, the glottis, the endotracheal tube, and the black marking line of the endotracheal tube.
  10. The processing system according to claim 6, wherein the one or more processing units further perform:
    confirming the time points at which the structural objects appear in the first intubation process image, to obtain stage intubation times and an intubation time series of the structural objects, wherein the time points at which the structural objects appear in the first intubation process image define n time points, the time difference between any two of the structural objects is defined as a stage intubation time, and multiple stage intubation times are clipped and stitched in chronological order to establish the intubation time series of the structural objects of the first intubation process image; and
    drawing, according to the stage intubation times and the intubation time series of the structural objects, an intubation time-series diagram and an intubation-ability time-sequence analysis diagram of the first intubation process image.
  11. The processing system according to claim 6, wherein the one or more processing units further perform:
    drawing, according to the stage intubation times and the intubation time series of the target objects, an intubation time-series diagram and an intubation-ability time-sequence analysis diagram of the second intubation process image.
  12. A method for evaluating the effectiveness of tracheal intubation, comprising:
    the processing method according to claim 5; and
    performing a tracheal intubation effectiveness evaluation of the second intubation process image according to the stage intubation times and the intubation time series of the target objects and the structural objects.
  13. The evaluation method according to claim 12, wherein the tracheal intubation effectiveness evaluation comprises:
    comparing the stage intubation times, the intubation time-series diagrams, and the intubation-ability time-sequence analysis diagrams of the first intubation process image and the second intubation process image, to evaluate the intubation effectiveness of each stage of the second intubation process image.
PCT/CN2021/137004 2021-12-10 2021-12-10 Processing method and system for tracheal intubation images and method for evaluating the effectiveness of tracheal intubation WO2023102880A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2021/137004 WO2023102880A1 (zh) 2021-12-10 2021-12-10 Processing method and system for tracheal intubation images and method for evaluating the effectiveness of tracheal intubation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2021/137004 WO2023102880A1 (zh) 2021-12-10 2021-12-10 Processing method and system for tracheal intubation images and method for evaluating the effectiveness of tracheal intubation

Publications (1)

Publication Number Publication Date
WO2023102880A1 true WO2023102880A1 (zh) 2023-06-15

Family

ID=86729433

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/137004 WO2023102880A1 (zh) 2021-12-10 2021-12-10 气管插管影像的处理方法与系统以及气管插管的成效评量方法

Country Status (1)

Country Link
WO (1) WO2023102880A1 (zh)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104483321A (zh) * 2014-10-31 2015-04-01 苏州捷碧医疗科技有限公司 Machine-vision-based automatic detection system and detection method for injection needle tubes
TW201915941A (zh) * 2017-09-29 2019-04-16 樂達創意科技有限公司 Device, system and method for automatic image feature recognition
US20190298951A1 (en) * 2018-03-28 2019-10-03 Nihon Kohden Corporation Intubation apparatus
JP2020062218A (ja) * 2018-10-17 2020-04-23 学校法人日本大学 Learning device, estimation device, learning method, estimation method, and program
CN112652393A (zh) * 2020-12-31 2021-04-13 山东大学齐鲁医院 Deep-learning-based ERCP quality control method, system, storage medium and device
WO2021206518A1 (ko) * 2020-04-10 2021-10-14 (주)휴톰 Method and system for analyzing a surgical procedure after surgery
CN113573654A (zh) * 2019-02-28 2021-10-29 美国尤太克产品公司 AI system for detecting and measuring lesion size


Similar Documents

Publication Publication Date Title
US8588496B2 (en) Medical image display apparatus, medical image display method and program
CN107368859A (zh) Training method and verification method for a lesion recognition model, and lesion image recognition apparatus
CN111144191B (zh) Font recognition method and apparatus, electronic device, and storage medium
US11138726B2 Method, client, server and system for detecting tongue image, and tongue imager
CN112370018B (zh) Computer application software for predicting difficult airways and an airway management data system
KR102240485B1 (ko) Integrated video image analysis system and analysis method for analyzing an applicant's psychological state
CN109858809A (zh) Learning quality evaluation method and system based on classroom student behavior analysis
CN112151155A (zh) Artificial-intelligence-based intelligent ultrasound image training method, system and application system
CN115381429A (zh) Artificial-intelligence-based airway assessment terminal
WO2023102880A1 (zh) Processing method and system for tracheal intubation images and method for evaluating the effectiveness of tracheal intubation
TWI792761B (zh) Processing method and system for tracheal intubation images, and method for evaluating the effectiveness of tracheal intubation
CN107818707B (zh) Examination system with automatic question generation
CN115526842A (zh) Nasopharyngolaryngoscope monitoring method, apparatus, system, computer device and storage medium
CN111368929B (zh) Image annotation method
CN114121208A (zh) Quality control method for surgical records based on visualized data
CN113782146A (zh) Artificial-intelligence-based general-practice medication recommendation method, apparatus, device and medium
CN117173491B (zh) Medical image annotation method and apparatus, electronic device, and storage medium
CN112614103A (zh) Deep-learning-based bronchoscope image feature comparison and marking system and method
CN112086193A (zh) IoT-based face recognition health prediction system and method
EP4276847A1 Defining a timestamp for a target medical event
Hisey Computer-assisted workflow recognition for central venous catheterization
CN114037647A (zh) Gastroscope image processing method, system, device and readable storage medium
CN115620053B (зh) Airway type determination system and electronic device
CN115331292B (зh) Facial-image-based emotion recognition method and apparatus, and computer storage medium
CN117671774B (зh) Intelligent facial emotion recognition and analysis device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21966802

Country of ref document: EP

Kind code of ref document: A1