CN117017232A - Auxiliary diagnostic systems, media and devices combining AR and intrinsic fluorescence - Google Patents

Auxiliary diagnostic systems, media and devices combining AR and intrinsic fluorescence

Info

Publication number
CN117017232A
CN117017232A (application CN202311284397.7A)
Authority
CN
China
Prior art keywords
image
gray
intrinsic fluorescence
lesion
scale
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311284397.7A
Other languages
Chinese (zh)
Inventor
黄鹏
马军
白雨薇
祁磊
姜威
张天宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ushio Medical Technology Suzhou Co ltd
Original Assignee
Ushio Medical Technology Suzhou Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ushio Medical Technology Suzhou Co ltd filed Critical Ushio Medical Technology Suzhou Co ltd
Priority to CN202311284397.7A
Publication of CN117017232A
Legal status: Pending


Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00: Measuring for diagnostic purposes; Identification of persons
    • A61B 5/0059: Measuring for diagnostic purposes; Identification of persons using light, e.g. diagnosis by transillumination, diascopy, fluorescence
    • A61B 5/0071: Measuring for diagnostic purposes; Identification of persons using light, by measuring fluorescence emission
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/24: Classification techniques
    • G06F 18/243: Classification techniques relating to the number of classes
    • G06F 18/24323: Tree-organised classifiers
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 5/00: Computing arrangements using knowledge-based models
    • G06N 5/01: Dynamic search techniques; Heuristics; Dynamic trees; Branch-and-bound
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/0002: Inspection of images, e.g. flaw detection
    • G06T 7/0012: Biomedical image inspection
    • G: PHYSICS
    • G16: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H: HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 50/00: ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H 50/20: ICT specially adapted for medical diagnosis, medical simulation or medical data mining, for computer-aided diagnosis, e.g. based on medical expert systems
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10064: Fluorescence image
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/20: Special algorithmic details
    • G06T 2207/20081: Training; Learning
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A: TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A 90/00: Technologies having an indirect contribution to adaptation to climate change
    • Y02A 90/10: Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • Medical Informatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Public Health (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Pathology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Primary Health Care (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Databases & Information Systems (AREA)
  • Veterinary Medicine (AREA)
  • Animal Behavior & Ethology (AREA)
  • Epidemiology (AREA)
  • Computational Linguistics (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Computing Systems (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Investigating, Analyzing Materials By Fluorescence Or Luminescence (AREA)

Abstract

The application relates to the technical field of intrinsic fluorescence detection and discloses an auxiliary diagnostic system, medium and device combining AR and intrinsic fluorescence. The auxiliary diagnostic system comprises: an intrinsic fluorescence detection module, which generates excitation light to irradiate a designated part of a patient and collects the intrinsic fluorescence image generated after the designated part is irradiated; a diagnosis module, which is connected with the intrinsic fluorescence detection module, receives the intrinsic fluorescence image and identifies lesions in it to obtain diagnosis result information; and an AR module, which is connected with the diagnosis module, receives the diagnosis result information and uses AR technology to superimpose it on the real designated part of the patient for display. The application can assist an operator in detecting lesions from intrinsic fluorescence, improving diagnostic efficiency and effectiveness.

Description

Auxiliary diagnostic systems, media and devices combining AR and intrinsic fluorescence
Technical Field
The application relates to the technical field of intrinsic fluorescence detection, and in particular to an auxiliary diagnostic system, medium and device combining AR and intrinsic fluorescence.
Background
Tissue intrinsic fluorescence, also called autofluorescence (intrinsic fluorescence for short), refers to the fluorescence emitted by autofluorescent substances in human tissue (such as collagen, enzyme proteins, tryptophan and porphyrins) when the tissue is irradiated with excitation light of a certain wavelength, without any exogenous fluorescent substance. By collecting and analyzing this intrinsic fluorescence, lesions in human tissue can be identified. Intrinsic fluorescence technology originated in the 1940s, but was limited by the state of the electronics industry and by theoretical research in related disciplines; it was not applied to the clinical field until the 1980s, and commercializing intrinsic fluorescence technology for lesion detection has become a goal only in recent years.
In the prior art, intrinsic fluorescence is applied to the examination of various parts of the human body, but most existing examination methods merely collect the intrinsic fluorescence and rely on manual judgment of whether a lesion exists, so missed diagnoses and misdiagnoses occur. Moreover, during the examination the operator must observe various images and parameters while operating the instrument, which makes the operation inconvenient and the examination inefficient.
Disclosure of Invention
Therefore, the technical problem to be solved by the application is to overcome the defects in the prior art and provide an auxiliary diagnostic system, medium and device combining AR and intrinsic fluorescence that can assist an operator in detecting lesions from intrinsic fluorescence and improve diagnostic efficiency and effectiveness.
To solve the above technical problems, the present application provides an auxiliary diagnostic system combining AR and intrinsic fluorescence, comprising:
the intrinsic fluorescence detection module, which is used for generating excitation light to irradiate a designated part of a patient and collecting the intrinsic fluorescence image generated after the designated part of the patient is irradiated;
the diagnosis module, which is connected with the intrinsic fluorescence detection module, receives the intrinsic fluorescence image and identifies lesions in the intrinsic fluorescence image to obtain diagnosis result information;
and the AR module, which is connected with the diagnosis module, receives the diagnosis result information, and superimposes the diagnosis result information on the real designated part of the patient for display using AR technology.
In one embodiment of the application, the intrinsic fluorescence detection module comprises an excitation light generation module and an image acquisition unit.
The excitation light generation module is used for generating laser light that excites human tissue to emit fluorescence and for irradiating the designated part of the patient; the image acquisition unit is used for collecting, in real time, the intrinsic fluorescence image generated after the designated part of the patient is irradiated.
In one embodiment of the application, the intrinsic fluorescence image comprises a white light image and a fluorescence image, and the diagnostic module comprises:
the image processing unit is used for preprocessing the white light image and the fluorescent image and synthesizing the preprocessed white light image and the preprocessed fluorescent image into a gray-scale image;
the image analysis unit is used for identifying lesions in the gray-scale image through a machine learning model to obtain diagnosis result information;
and the image labeling unit is used for labeling the diagnosis result information at the corresponding position in the intrinsic fluorescence image according to the set labeling requirement, and generating a labeled image.
In one embodiment of the present application, synthesizing the preprocessed white light image and fluorescence image into a gray-scale image specifically comprises:
converting the white light image and the fluorescence image each into a gray-scale image through pixel value conversion, the calculation formula for a pixel point in the converted gray-scale image being:
y’=R×a1+G×a2+B×a3+b,
wherein y’ represents the pixel value of a pixel point in the converted gray-scale image; R, G and B represent the pixel values of the red, green and blue channels of the corresponding pixel point in the original white light image or fluorescence image; a1, a2 and a3 represent the weight parameters of the red, green and blue channels in the conversion; and b represents a correction coefficient;
superimposing the white light image and the fluorescence image that have been converted into gray-scale images to obtain the final gray-scale image, the superposition method being:
G=(1-γ)×Y1+γ×Y2,
wherein G represents the final gray-scale image, Y1 represents the white light image converted into a gray-scale image, Y2 represents the fluorescence image converted into a gray-scale image, and γ represents the superposition weight.
In one embodiment of the present application, identifying lesions in the gray-scale image through the machine learning model to obtain the diagnosis result information specifically comprises:
establishing a decision tree model, acquiring an existing labeled image data set of the designated part of the patient as the training set, training the decision tree model with the training set, and inputting the gray-scale image into the trained decision tree model to obtain the predicted lesion position, lesion category and lesion level.
In one embodiment of the present application, training the decision tree model with the training set and inputting the gray-scale image into the trained decision tree model to obtain the predicted lesion position, lesion category and lesion level specifically comprises:
converting the labeled existing image data set of the designated part of the patient into labeled gray-scale images, and taking the labeled gray-scale images as the training set;
randomly selecting data from the training set to form a plurality of subsets, a subset being expressed as:
Sm = {(f11, f12, …, C1), (f21, f22, …, C2), …, (fn1, fn2, …, Cn)},
wherein Sm represents the m-th subset, m ∈ {1, …, M}, M represents the total number of subsets; each row of Sm represents a labeled gray-scale image, n represents the total number of labeled gray-scale images in the subset, fni represents the i-th feature of the n-th labeled gray-scale image, and Cn represents the labeled lesion position, lesion category or lesion level of the n-th labeled gray-scale image;
taking the features in the labeled gray-scale images as the internal nodes of a decision tree, and taking the labeled lesion position, lesion category or lesion level of the labeled gray-scale images as the root node of the decision tree, to construct a decision tree corresponding to each subset;
inputting the remaining data in the training set into all the decision trees to obtain the lesion position, lesion category or lesion level predicted by each decision tree; taking the result on which the largest number of decision trees agree as the predicted value, and calculating a loss function from the predicted value for training;
and putting the gray-scale image into each trained decision tree to obtain the predicted lesion position, lesion category and lesion level.
In one embodiment of the application, when the decision tree model is trained with the training set, the gradient of the decision tree model is calculated through a back propagation algorithm, the parameters of the decision tree model are updated backwards, and a loss function is established to measure the prediction accuracy of the decision tree model.
In one embodiment of the application, the AR module comprises AR glasses and a handheld controller,
the AR glasses are used for displaying the diagnosis result information superimposed on the real designated part of the patient,
and the handheld controller is connected with the intrinsic fluorescence detection module and the AR glasses respectively, and is used for controlling the way in which the intrinsic fluorescence detection module collects the intrinsic fluorescence image and for controlling the superimposed display of the diagnosis result information.
The present application also provides a computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs the functions of the auxiliary diagnostic system combining AR and intrinsic fluorescence.
The application also provides an auxiliary diagnostic device combining AR and intrinsic fluorescence, comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the functions of the auxiliary diagnostic system combining AR and intrinsic fluorescence when executing the computer program.
Compared with the prior art, the technical scheme of the application has the following advantages:
according to the application, the pathological condition in the intrinsic fluorescent examination is identified in real time through the diagnosis module, and the image of the appointed part of the patient marked with the pathological condition is displayed to an operator in real time by combining with the AR technology, so that the operator is assisted to perform rapid and effective examination, and the operation complexity is reduced; meanwhile, the missing diagnosis and misdiagnosis caused by manual work are effectively avoided, and the examination effect is improved.
Drawings
In order that the application may be more readily understood, a more particular description of the application will be rendered by reference to specific embodiments thereof which are illustrated in the appended drawings, in which:
FIG. 1 is a schematic diagram of an auxiliary diagnostic system combining AR and intrinsic fluorescence according to an embodiment of the present application.
FIG. 2 is a schematic flow chart of an auxiliary diagnostic system combining AR and intrinsic fluorescence according to an embodiment of the present application.
Detailed Description
The present application will be further described with reference to the accompanying drawings and specific examples, which are not intended to be limiting, so that those skilled in the art will better understand the application and practice it.
Referring to FIG. 1, the present application discloses an auxiliary diagnostic system combining AR and intrinsic fluorescence, comprising an intrinsic fluorescence detection module, a diagnosis module, and an AR module. As shown in FIG. 2, the workflow of the system is as follows:
s1: the intrinsic fluorescence detection module is used for generating an intrinsic fluorescence image generated after the appointed part of the patient is irradiated by the excitation light and collecting the appointed part of the patient.
The intrinsic fluorescence detection module comprises an excitation light generation module and an image acquisition unit. The excitation light generation module generates excitation light of a specific wavelength and irradiates the designated part of the patient to excite the human tissue to emit fluorescence. The image acquisition unit collects, in real time, the intrinsic fluorescence image generated after the designated part of the patient is irradiated. In this embodiment the laser generated by the excitation light generation module can also excite the human tissue to emit white light, so the intrinsic fluorescence image comprises a white light image and a fluorescence image, where the white light image is an ordinary true-color photograph. The intrinsic fluorescence detection module in this embodiment may be a colposcope or another human-body examination device that performs its examination by intrinsic fluorescence.
S2: the diagnosis module is connected with the intrinsic fluorescence detection module, receives the intrinsic fluorescence image and recognizes lesions in the intrinsic fluorescence image to obtain diagnosis result information. The diagnosis module comprises an image processing unit, an image analysis unit and an image labeling unit.
S2-1: the image processing unit preprocesses the white light image and the fluorescent image, and synthesizes the preprocessed white light image and fluorescent image into a gray-scale image.
S2-1-1: firstly, preprocessing such as data conversion, data compression and the like is carried out on the white light image and the fluorescent image so that the image size meets the requirement.
S2-1-2: The white light image and the fluorescence image are each converted into a gray-scale image through pixel value conversion; the calculation formula for a pixel point in the converted gray-scale image is:
y’=R×a1+G×a2+B×a3+b,
wherein y’ represents the pixel value of a pixel point in the converted gray-scale image; R, G and B represent the pixel values of the red, green and blue channels of the corresponding pixel point in the original white light image or fluorescence image, each channel ranging from 0 to 255; a1, a2 and a3 represent the weight parameters of the red, green and blue channels in the conversion. In this embodiment a1 = 0.229, a2 = 0.587, a3 = 0.114 and the correction coefficient b = 0.5.
Converting to gray scale brings the white light image and the fluorescence image to a unified standard.
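As a concrete illustration, this conversion can be sketched in Python with NumPy. The weights below default to this embodiment's values (a1 = 0.229, a2 = 0.587, a3 = 0.114, b = 0.5); the function name and array handling are illustrative assumptions, not part of the disclosure.

```python
import numpy as np

def to_grayscale(img_rgb: np.ndarray,
                 a1: float = 0.229, a2: float = 0.587, a3: float = 0.114,
                 b: float = 0.5) -> np.ndarray:
    """Weighted-channel conversion y' = R*a1 + G*a2 + B*a3 + b.

    img_rgb: H x W x 3 array with R, G, B channels in 0..255.
    Note: the standard BT.601 red weight is 0.299; the embodiment's
    a1 = 0.229 is kept here as stated, but may be a transcription of 0.299.
    """
    r, g, b_ch = img_rgb[..., 0], img_rgb[..., 1], img_rgb[..., 2]
    y = r * a1 + g * a2 + b_ch * a3 + b  # b is the correction coefficient
    return np.clip(y, 0, 255).astype(np.uint8)  # keep the 0..255 gray range
```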
S2-1-3: the contrast and brightness of the two converted gray-scale images are respectively adjusted, so that the details of the two converted gray-scale images are more prominent, the definition of the images meets the requirements, and the two gray-scale images specifically comprise:
y’’=α×y’+β,
wherein y' is the pixel value in the gray-scale image after the contrast and brightness are adjusted, alpha is the contrast adjusting coefficient, beta is the brightness adjusting coefficient, and the values of alpha and beta are set according to the actual situation. The adjustment of contrast and brightness is an optional operation, which is performed according to the actual choice.
S2-1-4: the white light image and the fluorescent image which are converted into the gray-scale image are overlapped by using linear mixing algorithm operation so as to more highlight details, and a final gray-scale image is formed by combining the following steps:
G=(1-γ)×Y1+γ×Y2,
wherein G represents a final gray-scale image, Y1 represents a white light image converted into a gray-scale image, Y2 represents a fluorescent image converted into a gray-scale image, gamma represents a superposition weight, and the value range of gamma is 0-1.
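The superposition itself is a per-pixel linear blend; a sketch follows, where γ = 0.5 is an illustrative default rather than a value from the disclosure:

```python
import numpy as np

def blend_grayscale(y1: np.ndarray, y2: np.ndarray,
                    gamma: float = 0.5) -> np.ndarray:
    """Linear blend G = (1 - gamma) * Y1 + gamma * Y2, with gamma in [0, 1].

    y1: white light image converted to gray scale;
    y2: fluorescence image converted to gray scale.
    """
    g = (1.0 - gamma) * y1.astype(np.float32) + gamma * y2.astype(np.float32)
    return np.clip(g, 0, 255).astype(np.uint8)
```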
In this embodiment, the white light image and the ultraviolet light image are merged by gray-scale processing into one gray-scale image, which captures a wider range of brightness and detail, retains the details of both bright and dark regions, enhances the sharpness and features of the image, and more intuitively highlights the details of the intrinsic fluorescence reaction.
The white light image is captured under standard white-light illumination; the raw image information collected by the camera is adjusted in its RGB color (mainly brightness, contrast and saturation) to make it suitable for medical diagnosis. The image quality is evaluated quantitatively and qualitatively in terms of contrast, sharpness and color accuracy; the conventional method compares the formed image against a standard reference image and adjusts the corresponding parameters according to the analysis results.
The ultraviolet light image is obtained by irradiating the examined tissue with ultraviolet light of a specific wavelength, referred to here as the excitation light, whose wavelength falls within the UVA band. The examined tissue responds to the excitation light by emitting weak intrinsic fluorescence, and the image information is captured with a long camera exposure to obtain the fluorescent ultraviolet image, i.e., the ultraviolet light image. The RGB color of the ultraviolet image can likewise be adjusted, and its analysis (contrast, sharpness and color accuracy) is similar to that of the white light image, generally using the reference-comparison method.
The gray-scale image is obtained by merging the white light image and the ultraviolet light image, capturing a wider range of brightness and detail; it retains the details of both bright and dark regions and enhances the sharpness and features of the image. Gray-scale processing highlights the details of the intrinsic fluorescence reaction more intuitively. In this embodiment the white light image, the ultraviolet light image and the gray-scale image are in TIFF or PNG format, and the display standard is generally the sRGB mode.
S2-2: the image analysis unit identifies lesions in the gray-scale image through a machine learning model to obtain diagnosis result information. Through the learning of machine learning models of a large number of cases, the image analysis unit can automatically judge the part with the lesion in the gray-scale image, and the type of the lesion and the level of the lesion are given after the comparison according to the machine learning models.
The machine learning model used in this embodiment is a decision tree model. The method comprises the steps of obtaining an existing image dataset with labels of a designated part of a patient as a training set, training a decision tree model by using the training set, and inputting a gray-scale image into a trained machine learning model to obtain a predicted lesion position, a predicted lesion type and a predicted lesion level. The method comprises the following steps:
s2-2-1: the method comprises the steps of converting an existing image dataset with labels of a designated part of a patient into a gray-scale image with labels, and taking the gray-scale image with labels as a training set. The method of converting into the annotated gray-scale image may be the method in S2-1.
S2-2-2: Because the original data set is complex and highly random, data are randomly selected from the training set to form a plurality of subsets, a subset being expressed as:
Sm = {(f11, f12, …, C1), (f21, f22, …, C2), …, (fn1, fn2, …, Cn)},
wherein Sm represents the m-th subset, m ∈ {1, …, M}, and M represents the total number of subsets, determined from the actual situation or empirical values; each row of Sm represents a labeled gray-scale image, n represents the total number of labeled gray-scale images in the subset, fni represents the i-th feature of the n-th labeled gray-scale image, and Cn represents the labeled lesion position, lesion category or lesion level of the n-th labeled gray-scale image.
The features in the labeled gray-scale images are taken as the internal nodes of a decision tree, and the labeled lesion position, lesion category or lesion level of the labeled gray-scale images is taken as the root node, so that a decision tree is constructed for each subset. The random forest method is used when constructing the decision trees, which reduces their sensitivity to noisy data; a sketch of this construction is given below.
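A minimal sketch of this random-forest-style construction using scikit-learn's DecisionTreeClassifier; the feature matrix X (one row of features per labeled gray-scale image) and the label vector y (lesion position, category or level) are assumed to have been extracted beforehand, which the disclosure does not specify:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def build_forest(X: np.ndarray, y: np.ndarray, n_trees: int = 10, seed: int = 0):
    """Train one decision tree per randomly drawn subset Sm of the training set."""
    rng = np.random.default_rng(seed)
    trees = []
    for _ in range(n_trees):  # n_trees corresponds to M, the number of subsets
        idx = rng.choice(len(X), size=len(X), replace=True)  # bootstrap subset
        trees.append(DecisionTreeClassifier().fit(X[idx], y[idx]))
    return trees
```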
S2-2-3: The remaining data in the training set are input into all M decision trees to obtain M classification results, i.e., the lesion position, lesion category or lesion level predicted by each decision tree. Among the M classification results, the result on which the largest number of decision trees agree is taken as the predicted value, and a loss function is calculated from the predicted value for training.
The gradient of the decision tree model is calculated through a back propagation algorithm, the parameters of the decision tree model are updated backwards, and a loss function is established to measure the prediction accuracy of the decision tree model. Training ends when the loss function converges or reaches a preset threshold; the decision tree model is optimized by combining the loss function with the back propagation algorithm. After the decision tree model is trained, an additional labeled image data set can be obtained as a validation set and used to tune the hyperparameters of the decision tree model, further improving its performance and generalization ability.
A decision tree whose root node is the labeled lesion category yields, for each region of the gray-scale image, a prediction of whether a certain type of lesion occurs there; if it does, that region is a lesion position, so recognition of that lesion type is achieved. Correspondingly, a decision tree whose root node is the labeled lesion level yields the lesion level of each region of the gray-scale image. In this way, effective identification of the lesion position, lesion category and lesion level is achieved.
S2-2-4: The gray-scale image is put into each trained decision tree, and the predictions of lesion position, lesion category and lesion level are obtained by the same majority vote, as sketched below. The image data and lesion recognition results obtained in each examination are also added to the training set for training the model before the next examination.
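A sketch of the majority vote over the trained trees; the feature extraction for the incoming gray-scale image is again an assumption:

```python
import numpy as np
from collections import Counter

def predict_majority(trees, features: np.ndarray):
    """Return the label (lesion position, category or level) that the
    largest number of decision trees predict for one image."""
    votes = [t.predict(features.reshape(1, -1))[0] for t in trees]
    return Counter(votes).most_common(1)[0][0]  # most frequent prediction wins
```

In practice three such forests would be built, one per label type, matching the three root-node choices (lesion position, lesion category and lesion level) described above.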
S2-3: the image labeling unit is used for rapidly labeling the diagnosis result information, namely the positions of the lesions, the types of the lesions and the levels of the lesions identified from the gray-scale image, at the corresponding positions in the intrinsic fluorescence image according to the set labeling requirements, and then automatically generating a labeled image. In this embodiment, the image labeling function of the existing image labeling software is used to display and label the lesion position, the lesion type and the lesion level.
S2-4: Finally, the annotated image is sent to the AR module.
S3: the AR module is connected with the diagnosis module and receives diagnosis result information, and the diagnosis result information is displayed on a specified part of a real patient in a superimposed mode by using an AR technology. The diagnosis result information in this embodiment may include not only pathological diagnosis results such as a lesion level and a lesion type of a lesion position, but also other auxiliary information such as time, a department, a user name, and the like.
The AR module includes AR glasses and a handheld controller. The AR glasses display the diagnosis result information superimposed on the real designated part of the patient. The handheld controller is connected with the intrinsic fluorescence detection module and the AR glasses respectively; it controls the way the intrinsic fluorescence detection module collects the intrinsic fluorescence image (e.g., zooming in and out or adjusting the image during capture) and controls the superimposed display of the diagnosis result information (e.g., enlarging, shrinking or moving the displayed information).
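As an illustration of the superimposed display only (the disclosure does not detail the AR-glasses rendering pipeline), a flat 2-D overlay of an annotation onto a camera frame could look like this with OpenCV; the box coordinates and text are placeholders:

```python
import cv2

def overlay_annotation(frame, box, text):
    """Draw a lesion bounding box and diagnosis text onto a BGR camera frame.

    box: (x, y, w, h) of the predicted lesion region; text: e.g. the
    lesion category and level, plus auxiliary info such as time or department.
    """
    x, y, w, h = box
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 0, 255), 2)
    cv2.putText(frame, text, (x, max(y - 8, 0)), cv2.FONT_HERSHEY_SIMPLEX,
                0.6, (0, 0, 255), 2)
    return frame
```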
The application also discloses a computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs the functions of an auxiliary diagnostic system combining AR and intrinsic fluorescence.
The application also discloses a device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor performing the functions of the auxiliary diagnostic system combining AR and intrinsic fluorescence when it executes the computer program.
Compared with the prior art, the application has the advantages that:
1. According to the application, lesions during the intrinsic fluorescence examination are identified in real time by the diagnosis module, and an image of the designated part of the patient with the lesions annotated is displayed to the operator in real time using AR technology, assisting the operator in performing a rapid and effective examination and reducing operational complexity; at the same time, missed diagnoses and misdiagnoses caused by manual judgment are effectively avoided, improving the examination results.
2. AR technology is combined with the intrinsic fluorescence examination: through the overlay effect of the AR glasses, the information in the system and the real scene are integrated so that they supplement and enhance each other, and the patient's examination information or lesion information is displayed in the AR glasses by virtual annotation, effectively assisting the operator in examination and diagnosis.
3. The examination is performed by the intrinsic fluorescence technique alone, requiring no auxiliary means or auxiliary agents; the patient feels no discomfort, cross-infection is avoided, the patient's suffering is reduced, and there is no need to wait for a reaction time under continuous observation. It is a completely non-contact, static examination, and the operator diagnoses directly from objective images. The body of the device does not touch the patient's body during the examination, and the examination itself carries no risk.
4. The machine learning model enables effective identification of the lesion position, lesion category and lesion level, effectively improving the recognition results.
The application can be applied to various kinds of examination involving intrinsic fluorescence, such as colposcopy, and is more accurate, convenient and rapid on the basis of the technical advantages of the original colposcope. When the excitation light output by the system has a wavelength of 340 nm, lesions 2-3 mm below the superficial mucosal tissue that are difficult to observe with the naked eye can be detected, achieving full coverage from tissue lesions through precancerous lesions (HSIL) to early cancer detection.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It is apparent that the above examples are given by way of illustration only and are not limiting of the embodiments. Other variations and modifications of the present application will be apparent to those of ordinary skill in the art in light of the foregoing description. It is not necessary here nor is it exhaustive of all embodiments. And obvious variations or modifications thereof are contemplated as falling within the scope of the present application.

Claims (10)

1. An auxiliary diagnostic system combining AR and intrinsic fluorescence, comprising:
the intrinsic fluorescence detection module is used for generating excitation light to irradiate a designated part of a patient and collecting an intrinsic fluorescence image generated after the designated part of the patient is irradiated;
the diagnosis module is connected with the intrinsic fluorescence detection module, receives the intrinsic fluorescence image, and identifies lesions in the intrinsic fluorescence image to obtain diagnosis result information;
and the AR module is connected with the diagnosis module, receives the diagnosis result information, and superimposes the diagnosis result information on the real designated part of the patient for display using AR technology.
2. The auxiliary diagnostic system combining AR and intrinsic fluorescence according to claim 1, wherein: the intrinsic fluorescence detection module comprises an excitation light generation module and an image acquisition unit, wherein the excitation light generation module is used for generating laser for exciting human tissues to emit fluorescence and irradiating a designated part of the patient, and the image acquisition unit is used for acquiring an intrinsic fluorescence image generated after the designated part of the patient is irradiated in real time.
3. The auxiliary diagnostic system combining AR and intrinsic fluorescence according to claim 1, wherein: the intrinsic fluorescence image includes a white light image and a fluorescence image, and the diagnostic module includes:
the image processing unit is used for preprocessing the white light image and the fluorescent image and synthesizing the preprocessed white light image and the preprocessed fluorescent image into a gray-scale image;
the image analysis unit is used for identifying lesions in the gray-scale image through a machine learning model to obtain diagnosis result information;
and the image labeling unit is used for labeling the diagnosis result information at the corresponding position in the intrinsic fluorescence image according to the set labeling requirement, and generating a labeled image.
4. The auxiliary diagnostic system combining AR and intrinsic fluorescence according to claim 3, wherein: synthesizing the preprocessed white light image and fluorescence image into a gray-scale image specifically comprises:
converting the white light image and the fluorescence image each into a gray-scale image through pixel value conversion, the calculation formula for a pixel point in the converted gray-scale image being:
y’=R×a1+G×a2+B×a3+b,
wherein y’ represents the pixel value of a pixel point in the converted gray-scale image; R, G and B represent the pixel values of the red, green and blue channels of the corresponding pixel point in the original white light image or fluorescence image; a1, a2 and a3 represent the weight parameters of the red, green and blue channels in the conversion; and b represents a correction coefficient;
superimposing the white light image and the fluorescence image that have been converted into gray-scale images to obtain the final gray-scale image, the superposition method being:
G=(1-γ)×Y1+γ×Y2,
wherein G represents the final gray-scale image, Y1 represents the white light image converted into a gray-scale image, Y2 represents the fluorescence image converted into a gray-scale image, and γ represents the superposition weight.
5. The auxiliary diagnostic system combining AR and intrinsic fluorescence according to claim 3, wherein: identifying lesions in the gray-scale image through the machine learning model to obtain the diagnosis result information specifically comprises:
establishing a decision tree model, acquiring an existing labeled image data set of the designated part of the patient as the training set, training the decision tree model with the training set, and inputting the gray-scale image into the trained decision tree model to obtain the predicted lesion position, lesion category and lesion level.
6. The auxiliary diagnostic system combining AR and intrinsic fluorescence according to claim 5, wherein: training the decision tree model with the training set and inputting the gray-scale image into the trained decision tree model to obtain the predicted lesion position, lesion category and lesion level specifically comprises:
converting the labeled existing image data set of the designated part of the patient into labeled gray-scale images, and taking the labeled gray-scale images as the training set;
randomly selecting data from the training set to form a plurality of subsets, a subset being expressed as:
Sm = {(f11, f12, …, C1), (f21, f22, …, C2), …, (fn1, fn2, …, Cn)},
wherein Sm represents the m-th subset, m ∈ {1, …, M}, M represents the total number of subsets; each row of Sm represents a labeled gray-scale image, n represents the total number of labeled gray-scale images in the subset, fni represents the i-th feature of the n-th labeled gray-scale image, and Cn represents the labeled lesion position, lesion category or lesion level of the n-th labeled gray-scale image;
taking the features in the labeled gray-scale images as the internal nodes of a decision tree, and taking the labeled lesion position, lesion category or lesion level of the labeled gray-scale images as the root node of the decision tree, to construct a decision tree corresponding to each subset;
inputting the remaining data in the training set into all the decision trees to obtain the lesion position, lesion category or lesion level predicted by each decision tree; taking the result on which the largest number of decision trees agree as the predicted value, and calculating a loss function from the predicted value for training;
and putting the gray-scale image into each trained decision tree to obtain the predicted lesion position, lesion category and lesion level.
7. The auxiliary diagnostic system combining AR and intrinsic fluorescence according to claim 5, wherein: when the decision tree model is trained with the training set, the gradient of the decision tree model is calculated through a back propagation algorithm, the parameters of the decision tree model are updated backwards, and a loss function is established to measure the prediction accuracy of the decision tree model.
8. The auxiliary diagnostic system combining AR and intrinsic fluorescence according to claim 1, wherein: the AR module comprises AR glasses and a handheld controller,
the AR glasses are used for displaying the diagnosis result information superimposed on the real designated part of the patient,
and the handheld controller is connected with the intrinsic fluorescence detection module and the AR glasses respectively, and is used for controlling the way in which the intrinsic fluorescence detection module collects the intrinsic fluorescence image and for controlling the superimposed display of the diagnosis result information.
9. A computer-readable storage medium having stored thereon a computer program, characterized by: the computer program, when executed by a processor, performs the functions of the auxiliary diagnostic system combining AR and intrinsic fluorescence as claimed in any one of claims 1-8.
10. An auxiliary diagnostic device combining AR and intrinsic fluorescence, characterized by: comprising a memory, a processor and a computer program stored on the memory and executable on the processor, said processor implementing the functions of the auxiliary diagnostic system combining AR and intrinsic fluorescence as claimed in any one of claims 1-8 when said computer program is executed.
CN202311284397.7A 2023-10-07 2023-10-07 Auxiliary diagnostic systems, media and devices combining AR and intrinsic fluorescence Pending CN117017232A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311284397.7A CN117017232A (en) 2023-10-07 2023-10-07 Auxiliary diagnostic systems, media and devices combining AR and intrinsic fluorescence

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311284397.7A CN117017232A (en) 2023-10-07 2023-10-07 Auxiliary diagnostic systems, media and devices combining AR and intrinsic fluorescence

Publications (1)

Publication Number Publication Date
CN117017232A true CN117017232A (en) 2023-11-10

Family

ID=88639868

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311284397.7A Pending CN117017232A (en) 2023-10-07 2023-10-07 Auxiliary diagnostic systems, media and devices combining AR and intrinsic fluorescence

Country Status (1)

Country Link
CN (1) CN117017232A (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109497955A (en) * 2018-12-18 2019-03-22 聚品(上海)生物科技有限公司 Human body spontaneous fluorescent illumination excitation and image processing system and method
CN110363226A (en) * 2019-06-21 2019-10-22 平安科技(深圳)有限公司 Ophthalmology disease classifying identification method, device and medium based on random forest
CN110415795A (en) * 2019-08-02 2019-11-05 杭州智团信息技术有限公司 A kind of recognition methods of fluorescent staining CTC image
US20200237229A1 (en) * 2019-01-29 2020-07-30 Board Of Regents, The University Of Texas System Apparatus and method for image-guided interventions with hyperspectral imaging
CN112261906A (en) * 2018-05-14 2021-01-22 诺瓦拉德公司 Calibrating patient image data to patient's actual view using optical code affixed to patient
CN114494092A (en) * 2022-01-11 2022-05-13 卓外(上海)医疗电子科技有限公司 Visible light image and fluorescence image fusion method and system
US20220375047A1 (en) * 2021-05-17 2022-11-24 Stryker Corporation Medical imaging
WO2023287822A2 (en) * 2021-07-12 2023-01-19 The Medical College Of Wisconsin, Inc. Augmented reality-driven guidance for interventional procedures


Similar Documents

Publication Publication Date Title
Marsden et al. Intraoperative margin assessment in oral and oropharyngeal cancer using label-free fluorescence lifetime imaging and machine learning
CN111161290B (en) Image segmentation model construction method, image segmentation method and image segmentation system
US20120061590A1 (en) Selective excitation light fluorescence imaging methods and apparatus
US9330453B2 (en) Apparatus and method for determining a skin inflammation value
US11950760B2 (en) Endoscope apparatus, endoscope operation method, and program
US11948080B2 (en) Image processing method and image processing apparatus
CN107072644B (en) Image forming apparatus with a plurality of image forming units
JP7289296B2 (en) Image processing device, endoscope system, and method of operating image processing device
JP2015500722A (en) Method and apparatus for detecting and quantifying skin symptoms in a skin zone
US20210343011A1 (en) Medical image processing apparatus, endoscope system, and medical image processing method
CN104305957B (en) Wear-type molecular image navigation system
CA3134066A1 (en) Near-infrared fluorescence imaging for blood flow and perfusion visualization and related systems and computer program products
JPWO2012153568A1 (en) Medical image processing device
Marsden et al. FLImBrush: dynamic visualization of intraoperative free-hand fiber-based fluorescence lifetime imaging
US20180220893A1 (en) Region of interest tracing apparatus
CN115153397A (en) Imaging method for endoscopic camera system and endoscopic camera system
US20190239749A1 (en) Imaging apparatus
US20230363697A1 (en) Acne severity grading methods and apparatuses
CN113425440A (en) System and method for detecting caries and position thereof based on artificial intelligence
JP2020141995A (en) Endoscopic image learning apparatus, method, program, and endoscopic image recognition apparatus
CN117017232A (en) Auxiliary diagnostic systems, media and devices combining AR and intrinsic fluorescence
US20220222840A1 (en) Control device, image processing method, and storage medium
CN113693724B (en) Irradiation method, device and storage medium suitable for fluorescence image navigation operation
US11295443B2 (en) Identification apparatus, identifier training method, identification method, and recording medium
EP4189643A1 (en) Processing of multiple luminescence images globally for their mapping and/or segmentation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination