CN117152362B - Multi-path imaging method, device, equipment and storage medium for endoscope multi-spectrum

Multi-path imaging method, device, equipment and storage medium for endoscope multi-spectrum

Info

Publication number
CN117152362B
CN117152362B
Authority
CN
China
Prior art keywords
data
model
spectrum
dimensional
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202311407775.6A
Other languages
Chinese (zh)
Other versions
CN117152362A (en)
Inventor
岑立剑
廖艳春
杨建中
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Zhongan Shida Technology Co ltd
Original Assignee
Shenzhen Zhongan Shida Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Zhongan Shida Technology Co ltd
Priority to CN202311407775.6A
Publication of CN117152362A
Application granted
Publication of CN117152362B

Classifications

    • G PHYSICS
      • G06 COMPUTING; CALCULATING OR COUNTING
        • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
          • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
          • G06T 7/00 Image analysis
            • G06T 7/0002 Inspection of images, e.g. flaw detection
              • G06T 7/0012 Biomedical image inspection
          • G06T 2200/00 Indexing scheme for image data processing or generation, in general
            • G06T 2200/04 involving 3D image data
          • G06T 2207/00 Indexing scheme for image analysis or image enhancement
            • G06T 2207/10 Image acquisition modality
              • G06T 2207/10004 Still image; Photographic image
                • G06T 2207/10012 Stereo images
              • G06T 2207/10068 Endoscopic image
            • G06T 2207/20 Special algorithmic details
              • G06T 2207/20081 Training; Learning
              • G06T 2207/20084 Artificial neural networks [ANN]
              • G06T 2207/20212 Image combination
                • G06T 2207/20221 Image fusion; Image merging
            • G06T 2207/30 Subject of image; Context of image processing
              • G06T 2207/30004 Biomedical image processing
                • G06T 2207/30061 Lung
          • G06T 2210/00 Indexing scheme for image generation or computer graphics
            • G06T 2210/41 Medical
        • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
          • G06N 3/00 Computing arrangements based on biological models
            • G06N 3/02 Neural networks
              • G06N 3/04 Architecture, e.g. interconnection topology
                • G06N 3/044 Recurrent networks, e.g. Hopfield networks
                  • G06N 3/0442 characterised by memory or gating, e.g. long short-term memory [LSTM] or gated recurrent units [GRU]
                • G06N 3/0464 Convolutional networks [CNN, ConvNet]
              • G06N 3/08 Learning methods
        • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
          • G06V 10/00 Arrangements for image or video recognition or understanding
            • G06V 10/40 Extraction of image or video features
              • G06V 10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
            • G06V 10/70 using pattern recognition or machine learning
              • G06V 10/764 using classification, e.g. of video objects
              • G06V 10/82 using neural networks
    • A HUMAN NECESSITIES
      • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
        • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
          • A61B 1/00 Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
            • A61B 1/00002 Operational features of endoscopes
              • A61B 1/00004 characterised by electronic signal processing
                • A61B 1/00009 of image signals during a use of endoscope
            • A61B 1/00163 Optical arrangements
              • A61B 1/00194 adapted for three-dimensional imaging
            • A61B 1/04 combined with photographic or television appliances
          • A61B 5/00 Measuring for diagnostic purposes; Identification of persons
            • A61B 5/0033 Features or image-related aspects of imaging apparatus classified in A61B5/00, e.g. for MRI, optical tomography or impedance tomography apparatus; arrangements of imaging apparatus in a room
            • A61B 5/0059 using light, e.g. diagnosis by transillumination, diascopy, fluorescence
              • A61B 5/0075 by spectroscopy, i.e. measuring spectra, e.g. Raman spectroscopy, infrared absorption spectroscopy
            • A61B 5/68 Arrangements of detecting, measuring or recording means, e.g. sensors, in relation to patient
              • A61B 5/6846 specially adapted to be brought in contact with an internal body part, i.e. invasive
                • A61B 5/6847 mounted on an invasive device
            • A61B 5/72 Signal processing specially adapted for physiological signals or for diagnostic purposes

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • Medical Informatics (AREA)
  • Biophysics (AREA)
  • Surgery (AREA)
  • General Physics & Mathematics (AREA)
  • Animal Behavior & Ethology (AREA)
  • Pathology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Multimedia (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Optics & Photonics (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Linguistics (AREA)
  • Databases & Information Systems (AREA)
  • Signal Processing (AREA)
  • Quality & Reliability (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Spectroscopy & Molecular Physics (AREA)
  • Physiology (AREA)
  • Psychiatry (AREA)
  • Endoscopes (AREA)

Abstract

The invention relates to the technical field of image processing and discloses an endoscopic multispectral multi-path imaging method, device, equipment and storage medium for improving the quality of endoscopic multispectral multi-path imaging. The method comprises the following steps: acquiring first spectral image data for each channel through a plurality of image sensors in the endoscope; constructing a light source model for each channel and performing spectral correction to obtain a plurality of second spectral image data; performing multidimensional data fusion on the plurality of second spectral image data to generate a multidimensional data volume, and weighting the volume to obtain a fused multidimensional pixel matrix; performing data dimensionality reduction and feature extraction to obtain target reduced-dimension pixel data, and performing data reconstruction and model construction to obtain a spectral three-dimensional data model; performing image feature analysis to obtain an image feature analysis result; and performing three-dimensional imaging on the image feature analysis result and the spectral three-dimensional data model to obtain three-dimensional imaging data of the endoscope.

Description

Multi-path imaging method, device, equipment and storage medium for endoscope multi-spectrum
Technical Field
The present invention relates to the field of image processing technologies, and in particular to a multispectral multi-path imaging method, apparatus, device, and storage medium for an endoscope.
Background
In modern medical imaging, endoscopic techniques play an important role in disease diagnosis and treatment. However, conventional endoscopic imaging methods often provide only surface-level or single-modality information, making it difficult to fully understand tissue structure and lesion condition. To overcome these limitations, researchers have turned to multispectral imaging techniques to obtain richer biological tissue information.
Multispectral imaging techniques combine spectral information over different wavelength ranges to reveal the optical properties of tissue, its blood supply, and the distribution of other biomolecules. However, acquiring multispectral images alone is not sufficient to fully capture the complexity of the tissue; the multichannel spectral images must therefore be combined with other information to obtain a more comprehensive analysis. Existing solutions remain limited in endoscopic applications, which results in low imaging quality.
Disclosure of Invention
The invention provides an endoscopic multispectral multi-path imaging method, device, equipment and storage medium for improving the quality of endoscopic multispectral multi-path imaging.
The first aspect of the present invention provides an endoscopic multispectral multiplex imaging method, comprising:
Acquiring spectral images of a plurality of channels through a plurality of image sensors in an endoscope respectively to obtain first spectral image data of each channel, wherein each image sensor corresponds to one channel, and each channel corresponds to a preset wavelength range;
Acquiring spectrum distribution data of each channel, constructing a light source model of each channel according to the spectrum distribution data, and respectively carrying out spectrum correction on the first spectrum image data according to the light source model to obtain a plurality of second spectrum image data;
performing multidimensional data fusion on the plurality of second spectral image data to generate a multidimensional data volume, and performing weighting processing on the multidimensional data volume to obtain a fused multidimensional pixel matrix;
Performing data dimension reduction and feature extraction on the fusion multi-dimensional pixel matrix to obtain target dimension reduction pixel data, and performing data reconstruction and model construction on the target dimension reduction pixel data to obtain a spectrum three-dimensional data model;
Carrying out image feature analysis on the spectrum three-dimensional data model through a preset image feature analysis model to obtain an image feature analysis result;
and carrying out three-dimensional imaging on the image characteristic analysis result and the spectrum three-dimensional data model to obtain three-dimensional imaging data of the endoscope.
With reference to the first aspect, in a first implementation manner of the first aspect of the present invention, the obtaining spectral distribution data of each channel, constructing a light source model of each channel according to the spectral distribution data, and performing spectral correction on the first spectral image data according to the light source model to obtain a plurality of second spectral image data respectively, where the method includes:
Acquiring spectrum distribution data of each channel, and carrying out segmentation processing on the spectrum distribution data to obtain a plurality of spectrum segmentation data corresponding to each spectrum distribution data;
respectively performing spline fitting on the plurality of spectrum segment data through a preset cubic polynomial to obtain a plurality of spline segments;
Smoothing the spline segments to obtain a light source model of each channel;
Carrying out light intensity prediction on the channels according to the light source model to obtain light intensity prediction data of each channel;
and acquiring the real light intensity data of the first spectral image data, and performing spectral-ratio correction between the light intensity prediction data and the real light intensity data to obtain a plurality of second spectral image data.
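The light-source modelling and correction steps above can be sketched as follows. This is a minimal illustrative example, not the patented implementation: the wavelength grid, sample intensities, and image values are invented for demonstration, and SciPy's `CubicSpline` stands in for the preset cubic-polynomial spline fitting.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical spectral distribution samples for one channel (wavelength in nm -> relative intensity)
wavelengths = np.linspace(400, 700, 31)
measured = 1.0 + 0.2 * np.sin((wavelengths - 400) / 300 * np.pi)

# Piecewise cubic spline fit over the spectral segments acts as the light source model
light_source_model = CubicSpline(wavelengths, measured)

# Light intensity prediction at the wavelengths actually imaged
predicted = light_source_model(np.array([450.0, 550.0, 650.0]))

# Spectral-ratio correction: scale first spectral image data by real / predicted intensity
first_image = np.full((4, 4), 120.0)            # toy first spectral image data
real_intensity = 0.9                            # toy measured light intensity
second_image = first_image * (real_intensity / predicted[1])
```

Because the spline passes through the sampled points, prediction at a sampled wavelength returns the measured value exactly; between samples the cubic pieces provide the smoothed model.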
With reference to the first aspect, in a second implementation manner of the first aspect of the present invention, the performing multidimensional data fusion on the plurality of second spectral image data to generate a multidimensional data volume, and performing weighting processing on the multidimensional data volume to obtain a fused multidimensional pixel matrix includes:
performing image alignment and brightness equalization on the plurality of second spectral image data to obtain a plurality of standard spectral image data;
stacking the plurality of standard spectral image data according to the channel order of the plurality of channels to generate a multidimensional data volume;
and performing weight assignment for the channels to obtain a first channel weight for each channel, and performing pixel-weighted fusion on the multidimensional data volume according to the first channel weights to obtain a fused multidimensional pixel matrix.
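A minimal sketch of the stacking and weighted-fusion steps, assuming four channels that have already been aligned and brightness-equalized; the image size and channel weights are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
# Four standard (aligned, brightness-equalized) spectral images, one per channel
channels = [rng.random((8, 8)) for _ in range(4)]

# Stack along a new trailing axis in channel order -> multidimensional data volume (H, W, C)
volume = np.stack(channels, axis=-1)

# First channel weights (invented here; in practice assigned per channel importance)
weights = np.array([0.4, 0.3, 0.2, 0.1])

# Pixel-wise weighted fusion over the channel axis -> fused multidimensional pixel matrix (H, W)
fused = np.tensordot(volume, weights, axes=([-1], [0]))
```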
With reference to the first aspect, in a third implementation manner of the first aspect of the present invention, performing data dimension reduction and feature extraction on the fused multidimensional pixel matrix to obtain target dimension reduction pixel data, and performing data reconstruction and model construction on the target dimension reduction pixel data to obtain a spectrum three-dimensional data model, where the method includes:
Respectively calculating the pixel average value of each channel in the fusion multi-dimensional pixel matrix, and carrying out centering treatment on the fusion multi-dimensional pixel matrix according to the pixel average value to obtain a centering pixel matrix;
Calculating a covariance matrix corresponding to the centralized pixel matrix, and carrying out eigenvalue decomposition on the covariance matrix to obtain eigenvalues and corresponding eigenvectors;
Selecting main component characteristics according to the magnitude of the characteristic values, and performing dimension reduction mapping on the fused multi-dimensional pixel matrix according to the main component characteristics to obtain target dimension reduction pixel data;
and respectively extracting spectral features and spatial distribution features of the fused multidimensional pixel matrix to obtain spectral features and spatial distribution features, and carrying out data reconstruction and model construction on the target dimensionality reduction pixel data according to the spectral features and the spatial distribution features to obtain a spectral three-dimensional data model.
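The centering, covariance, eigendecomposition, and dimension-reduction mapping described above correspond to principal component analysis. A minimal sketch with invented data (64 pixels, 4 channels, keeping the top two principal components):

```python
import numpy as np

rng = np.random.default_rng(1)
# Fused multidimensional pixel matrix, reshaped to (pixels, channels)
pixels = rng.random((64, 4))

# Centering: subtract the per-channel pixel mean
mean = pixels.mean(axis=0)
centered = pixels - mean

# Covariance matrix across channels and its eigenvalue decomposition
cov = np.cov(centered, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)

# Select principal component features by descending eigenvalue magnitude
order = np.argsort(eigvals)[::-1]
components = eigvecs[:, order[:2]]

# Dimension-reduction mapping -> target reduced-dimension pixel data
reduced = centered @ components
```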
With reference to the first aspect, in a fourth implementation manner of the first aspect of the present invention, the extracting spectral features and spatial distribution features of the fused multidimensional pixel matrix to obtain spectral features and spatial distribution features, and performing data reconstruction and model construction on the target dimensionality reduction pixel data according to the spectral features and the spatial distribution features to obtain a spectral stereoscopic data model includes:
Spectral response feature extraction is carried out on the fusion multi-dimensional pixel matrix to obtain spectral features, and pixel spatial distribution feature extraction is carried out on the fusion multi-dimensional pixel matrix to obtain spatial distribution features;
Carrying out data reconstruction on the target dimensionality reduction pixel data to obtain an initial spectrum model, wherein each pixel point in the initial spectrum model represents data combined with spectrum information of different channels;
and carrying out model construction on the initial spectrum model according to the spectrum characteristics and the space distribution characteristics to obtain a spectrum three-dimensional data model.
With reference to the first aspect, in a fifth implementation manner of the first aspect of the present invention, the performing, by using a preset image feature analysis model, image feature analysis on the spectral stereo data model to obtain an image feature analysis result includes:
inputting the spectral three-dimensional data model into a preset image feature analysis model, wherein the image feature analysis model comprises: a convolutional neural network and a recurrent neural network;
carrying out convolution processing on the spectrum three-dimensional data model through the convolution neural network to obtain model surface characteristics;
extracting time sequence relation features of the spectrum three-dimensional data model through the cyclic neural network to obtain model time sequence features;
And generating an image characteristic analysis result of the spectrum three-dimensional data model according to the model surface characteristic and the model time sequence characteristic.
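A toy illustration of the two branches above: a plain 2D convolution stands in for the convolutional neural network's surface-feature extraction, and a simple tanh recurrence stands in for the recurrent network's time-series feature extraction. All shapes, kernels, and weights are invented; a real implementation would use trained networks:

```python
import numpy as np

def conv2d_valid(img, kernel):
    """Plain 2D 'valid' cross-correlation, the core op of the convolutional branch."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

rng = np.random.default_rng(2)
frame = rng.random((6, 6))                      # one slice of the spectral 3D data model
edge_kernel = np.array([[1.0, 0.0, -1.0]] * 3)  # crude edge detector -> "model surface features"
surface_features = conv2d_valid(frame, edge_kernel)

# Toy recurrent pass over a sequence of per-frame feature vectors -> "model time-series features"
hidden = np.zeros(4)
W = rng.random((4, 4)) * 0.1
U = rng.random((4, 4)) * 0.1
for t in range(3):
    x = rng.random(4)                           # feature vector for frame t
    hidden = np.tanh(W @ x + U @ hidden)
temporal_features = hidden
```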
With reference to the first aspect, in a sixth implementation manner of the first aspect of the present invention, the performing three-dimensional imaging on the image feature analysis result and the spectral stereo data model to obtain three-dimensional imaging data of an endoscope includes:
Decoding the image characteristic analysis result and the spectrum stereo data model to obtain image characteristic decoding data and data model decoding data;
performing three-dimensional mapping on the image feature decoding data and the data model decoding data to obtain a target three-dimensional model;
And carrying out pixel value interpolation and enhancement on the target three-dimensional model to obtain the three-dimensional imaging data of the endoscope.
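The interpolation and enhancement step can be sketched as follows, using SciPy's `zoom` for cubic voxel interpolation and a min-max contrast stretch as a stand-in for enhancement; the grid size and upsampling factor are illustrative assumptions:

```python
import numpy as np
from scipy.ndimage import zoom

rng = np.random.default_rng(3)
# Target three-dimensional model represented as a small voxel grid
model = rng.random((4, 4, 4))

# Pixel-value interpolation: upsample the voxel grid 2x per axis with cubic interpolation
upsampled = zoom(model, 2, order=3)

# Simple enhancement: min-max stretch of voxel values to [0, 1]
enhanced = (upsampled - upsampled.min()) / (upsampled.max() - upsampled.min())
```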
A second aspect of the present invention provides an endoscopic multispectral multiplex imaging device comprising:
The acquisition module is used for respectively acquiring spectral images of a plurality of channels through a plurality of image sensors in the endoscope to obtain first spectral image data of each channel, wherein each image sensor corresponds to one channel, and each channel corresponds to a preset wavelength range;
The construction module is used for acquiring the spectrum distribution data of each channel, constructing a light source model of each channel according to the spectrum distribution data, and carrying out spectrum correction on the first spectrum image data according to the light source model to obtain a plurality of second spectrum image data;
The fusion module is used for carrying out multidimensional data fusion on the plurality of second spectrum image data to generate multidimensional data stereo, and carrying out weighting treatment on the multidimensional data stereo to obtain a fused multidimensional pixel matrix;
The reconstruction module is used for carrying out data dimension reduction and feature extraction on the fusion multidimensional pixel matrix to obtain target dimension reduction pixel data, and carrying out data reconstruction and model construction on the target dimension reduction pixel data to obtain a spectrum three-dimensional data model;
The analysis module is used for carrying out image feature analysis on the spectrum three-dimensional data model through a preset image feature analysis model to obtain an image feature analysis result;
and the imaging module is used for carrying out three-dimensional imaging on the image characteristic analysis result and the spectrum three-dimensional data model to obtain three-dimensional imaging data of the endoscope.
A third aspect of the present invention provides an endoscopic multispectral multiplexed imaging apparatus comprising: a memory and at least one processor, the memory having instructions stored therein; the at least one processor invokes the instructions in the memory to cause the endoscopic multispectral multiplexed imaging device to perform the endoscopic multispectral multiplexed imaging method described above.
A fourth aspect of the present invention provides a computer readable storage medium having instructions stored therein which, when run on a computer, cause the computer to perform the endoscopic multispectral multiplexed imaging method described above.
In the technical scheme provided by the invention, first spectral image data of each channel are acquired through a plurality of image sensors in the endoscope; a light source model of each channel is constructed and spectral correction is performed to obtain a plurality of second spectral image data; multidimensional data fusion is performed on the plurality of second spectral image data to generate a multidimensional data volume, which is weighted to obtain a fused multidimensional pixel matrix; data dimensionality reduction and feature extraction are performed to obtain target reduced-dimension pixel data, and data reconstruction and model construction are performed to obtain a spectral three-dimensional data model; image feature analysis is performed to obtain an image feature analysis result; and three-dimensional imaging is performed on the image feature analysis result and the spectral three-dimensional data model to obtain three-dimensional imaging data of the endoscope. By utilizing the spectral information of multiple channels, the invention can acquire the spectral distribution of tissue under different wavelengths, thereby providing richer biological information. Light source correction reduces the influence of light source fluctuation on image quality. Multidimensional data fusion and feature extraction make the image information more representative and interpretable. By applying deep learning, namely the convolutional neural network and the recurrent neural network, more complex features can be extracted from the spectral three-dimensional data model, thereby improving the quality of endoscopic multispectral multi-path imaging.
Drawings
FIG. 1 is a schematic diagram of one embodiment of a multi-channel imaging method for endoscope multi-spectrum in an embodiment of the present invention;
FIG. 2 is a flow chart of multi-dimensional data fusion in an embodiment of the invention;
FIG. 3 is a flow chart of data dimension reduction and feature extraction in an embodiment of the invention;
FIG. 4 is a flow chart of data reconstruction and model construction in an embodiment of the invention;
FIG. 5 is a schematic diagram of one embodiment of an endoscopic multispectral multiplexed imaging device in accordance with an embodiment of the present invention;
FIG. 6 is a schematic diagram of one embodiment of an endoscopic multispectral multiplexed imaging apparatus in an embodiment of the present invention.
Detailed Description
The embodiment of the invention provides an endoscopic multispectral multipath imaging method, device, equipment and storage medium, which are used for improving the quality of endoscopic multispectral multipath imaging. The terms "first," "second," "third," "fourth" and the like in the description and in the claims and in the above drawings, if any, are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments described herein may be implemented in other sequences than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed or inherent to such process, method, article, or apparatus.
For ease of understanding, a specific flow of an embodiment of the present invention is described below with reference to fig. 1, and one embodiment of an endoscopic multispectral multipath imaging method according to an embodiment of the present invention includes:
S101, respectively acquiring spectral images of a plurality of channels through a plurality of image sensors in an endoscope to obtain first spectral image data of each channel, wherein each image sensor corresponds to one channel, and each channel corresponds to a preset wavelength range;
It will be appreciated that the execution subject of the present invention may be an endoscopic multispectral multi-path imaging device, or a terminal or a server, which is not limited herein. The embodiment of the present invention is described with a server as the execution subject by way of example.
Specifically, the server first selects an appropriate image sensor. These sensors need to have high resolution, low noise and good spectral characteristics to ensure that the acquired data is accurate and reliable. Each sensor should correspond to a particular wavelength range in order to acquire spectral data for multiple channels. For example, a visible light sensor may cover the visible light band, an infrared sensor is responsible for the infrared spectrum range, and so on. Second, the design of the optical system is required to ensure that light from the different channels is properly focused and distributed onto the respective sensors. This involves the use of appropriate optical filters or gratings to separate the spectral information at different wavelengths. This ensures that each channel receives only light within its preset wavelength range. Next, it is ensured that the image acquisitions of the different sensors are performed simultaneously. This requires an accurate time synchronization system to ensure that the images of each channel are acquired at the same point in time. This is important for subsequent data processing to avoid data mismatch problems. Once the sensor selection, optical design and simultaneous acquisition settings are in place, acquisition of spectral image data may begin. Each sensor is responsible for collecting spectral information in its corresponding wavelength range, which will be the first spectral image data for each channel. Ultimately, these first spectral image data may be used for further analysis and processing, such as spectral correction, data fusion, and feature extraction, to obtain more comprehensive spectral information and image features. For example, four different image sensors are integrated in an endoscope. Sensor 1 covers the visible range, sensor 2 covers the infrared range, sensor 3 covers the near infrared range, and sensor 4 covers the ultraviolet range. 
For each sensor, an appropriate wavelength range is selected to detect different spectral characteristics. For example, the visible light sensor may cover a range of 400 nanometers to 700 nanometers. The optical system is designed to include filters to ensure that each sensor receives only light within a specific wavelength range. Using an accurate time synchronization system, it is ensured that four sensors capture image data simultaneously. The endoscope captures images of different wavelength ranges during the detection process. These images will become the first spectral image data for each channel.
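The four-channel layout above can be sketched as a simple lookup table. A minimal sketch: apart from the 400–700 nm visible band stated above, the wavelength bounds below are illustrative assumptions, not values from the specification.

```python
# Hypothetical channel table for the four-sensor endoscope described above.
# Only the visible band (400-700 nm) comes from the text; the other bounds
# are illustrative assumptions.
CHANNELS = {
    1: {"name": "visible",       "range_nm": (400, 700)},
    2: {"name": "infrared",      "range_nm": (1000, 1400)},
    3: {"name": "near_infrared", "range_nm": (700, 1000)},
    4: {"name": "ultraviolet",   "range_nm": (200, 400)},
}

def in_band(channel_id: int, wavelength_nm: float) -> bool:
    """Check that a wavelength falls inside a channel's preset band,
    i.e. that the channel's filter should pass it."""
    lo, hi = CHANNELS[channel_id]["range_nm"]
    return lo <= wavelength_nm <= hi
```

Such a table is only the static half of the setup; the simultaneous capture across sensors additionally needs the hardware time-synchronization described above.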
S102, acquiring spectrum distribution data of each channel, constructing a light source model of each channel according to the spectrum distribution data, and respectively carrying out spectrum correction on the first spectrum image data according to the light source model to obtain a plurality of second spectrum image data;
Specifically, the server first acquires spectral distribution data, which is the basis of multispectral imaging. For each channel, its spectral distribution data needs to be collected, describing the intensity distribution of light of different wavelengths in that channel. Next, the spectral distribution data is segmented into a plurality of spectral segment data, each segment corresponding to a segment in a different wavelength range. Each spectrum segment data is then spline fitted using a preset cubic polynomial to obtain a plurality of spline segments. These spline segments approximate the curve of the spectral distribution data, helping to build a model of the light source. And then, carrying out smoothing treatment on the spline sections to reduce noise and irregularity and improve the accuracy of the model. The smoothing helps to better fit the model to the actual data. Then, the intensity of light in different wavelength ranges can be predicted from the model by predicting the intensity of light for each channel using the resulting model of the light source. Finally, the light intensity real data of the first spectrum image data are obtained and corrected with the light intensity prediction data in a spectrum proportion, so that the first spectrum image data can be corrected to match the spectrum characteristics predicted by the model, and a plurality of second spectrum image data are obtained. For example, assume that an endoscope is used for early diagnosis of lung cancer. First, an endoscope is introduced into the patient's trachea and bronchi to examine the presence of cancerous lesions. At this time, the endoscope is equipped with an image sensor of a plurality of channels, each corresponding to spectral information of a different wavelength range. Second, the spectral image data acquired by these sensors is analyzed to obtain spectral distribution data in the lung tissue. 
This includes data for channels of infrared, near infrared, and visible light, each capturing light of a different wavelength range. The spectral distribution data for each channel is then segmented into a plurality of spectral segmentation data, such as segments of oxyhemoglobin and deoxyhemoglobin. Each of the spectral segmentation data is then spline fitted using a cubic polynomial to obtain a plurality of spline segments that better approximate the characteristics of the spectral distribution. Then, the spline segments are smoothed to reduce noise and irregularities in the data, ensuring the accuracy of the illuminant model. Finally, the light intensity prediction is performed by using the light source model to determine the intensity distribution of the light in different wavelength ranges. This provides more accurate information for the detection of early lung cancer. And simultaneously, carrying out spectrum proportion correction on the actual light intensity data and the light intensity prediction data of the first spectrum image data so as to ensure that the first spectrum image data reflects the actual spectrum characteristics of lung tissues.
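The segment-wise cubic fitting and smoothing described for S102 can be sketched as follows. The segment count, the 3-point moving-average smoother, and the centring of wavelengths before fitting are implementation assumptions made for this sketch.

```python
import numpy as np

def fit_light_source_model(wavelengths, intensities, n_segments=4):
    """Piecewise-cubic light source model (sketch of S102): split the
    spectral distribution into segments, fit each segment with a cubic
    polynomial (one 'spline segment'), then smooth the stitched curve."""
    wl = np.asarray(wavelengths, dtype=float)
    it = np.asarray(intensities, dtype=float)
    pred = np.empty_like(it)
    for idx in np.array_split(np.arange(len(wl)), n_segments):
        x = wl[idx] - wl[idx].mean()   # centre x for numerical conditioning
        coeffs = np.polyfit(x, it[idx], deg=3)
        pred[idx] = np.polyval(coeffs, x)
    kernel = np.ones(3) / 3.0          # 3-point moving-average smoothing
    return np.convolve(pred, kernel, mode="same")
```

The smoothed curve plays the role of the light source model, from which the per-channel light-intensity predictions used in the correction step are read off.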
S103, carrying out multidimensional data fusion on the plurality of second spectrum image data to generate multidimensional data stereo, and carrying out weighting treatment on the multidimensional data stereo to obtain a fused multidimensional pixel matrix;
First, image alignment and luminance equalization are performed on a plurality of second spectral image data. This ensures that the images of the different channels are spatially aligned for subsequent data fusion. Luminance equalization may ensure that the image of each channel has a similar luminance level, thereby reducing the differences between the different channels. And stacking a plurality of standard spectrum image data according to the channel sequence of the channels to generate a multi-dimensional data stereo. The images of each channel will be superimposed in their order to form a multi-dimensional data volume in which the information of each channel is represented. Next, a plurality of channels are respectively assigned weights to determine a first channel weight for each channel. These weights may be determined according to the needs of the application, and generally depend on the importance of the channel. For example, for medical imaging, the visible light channel and the infrared light channel have different importance. And then, carrying out pixel weighted fusion on the multidimensional data stereo according to the first channel weight. The image pixels of each channel will be fused according to their weights to generate a fused multi-dimensional pixel matrix. This step takes into account the contribution of each channel to ensure that the composite image has the best information quality. For example, assume that a server is using an endoscope for diagnosis of gastrointestinal diseases. The endoscope is equipped with three channels of different wavelength ranges, capturing image data of visible light, infrared light and near infrared light, respectively. First, the server performs image alignment and luminance equalization on the images of these channels. Then, they are stacked together to form a three-dimensional data volume containing information of different wavelengths. Next, the server determines the importance of the channel. 
In this case, the infrared light channel is considered to have higher sensitivity for detecting a specific disease and is therefore given a higher weight in the weight distribution. Finally, the server performs pixel-weighted fusion on the multi-dimensional data stereo according to these weights. In this way, the server obtains a fused multi-dimensional pixel matrix that contains the visible, infrared and near-infrared information, combined according to the weight of each channel. This composite image can provide more comprehensive information and helps the physician make more accurate diagnostic and treatment decisions.
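Per pixel, the channel-weighted fusion of S103 reduces to a weighted sum over the channel axis. A minimal numpy sketch, assuming the multi-dimensional data stereo is stored as a (channels, height, width) array; normalising the weights so the result stays in the input intensity range is an assumption, not stated in the specification.

```python
import numpy as np

def fuse_channels(stack, weights):
    """Pixel-weighted fusion of a multi-dimensional data stereo.
    `stack` has shape (channels, H, W); `weights` holds one weight per
    channel (e.g. a higher weight for the infrared channel, as in the
    example above). Weights are normalised before the weighted sum."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return np.tensordot(w, stack, axes=1)  # result shape (H, W)
```

Usage: with two channels of constant intensity 0 and 10 and weights (1, 3), every fused pixel becomes 7.5, i.e. the infrared-style channel dominates as intended.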
S104, performing data dimension reduction and feature extraction on the fusion multi-dimensional pixel matrix to obtain target dimension reduction pixel data, and performing data reconstruction and model construction on the target dimension reduction pixel data to obtain a spectrum three-dimensional data model;
Specifically, first, the pixel average value of each channel in the fused multi-dimensional pixel matrix is calculated, which helps to understand the overall brightness level of each channel. The fused multi-dimensional pixel matrix is then centered on these pixel averages so that the variation of the data can be described better. Next, the server calculates the covariance matrix of the centered pixel matrix and performs eigenvalue decomposition on it to obtain eigenvalues and corresponding eigenvectors; this information reflects the main directions and strengths of variation in the data. According to the magnitudes of the eigenvalues, the server selects principal component features, typically those with the larger eigenvalues, since they contain the most important information in the data. The selected principal components are then used to perform a dimension-reduction mapping on the fused multi-dimensional pixel matrix, producing the target dimension-reduced pixel data: the high-dimensional data is mapped to a lower-dimensional space while important information is retained. Meanwhile, the server also extracts spectral features and spatial-distribution features. For spectral features, it computes spectral statistics for each channel, such as the average, maximum and minimum spectra, to capture the spectral differences between channels. For spatial-distribution features, it extracts relationships between pixels, such as texture features and spatial-frequency features, to capture the spatial distribution pattern of the data. Finally, based on the extracted spectral and spatial features, the server performs data reconstruction and model construction; the objective is to recombine the data into a spectrum three-dimensional data model.
The spectrum three-dimensional data model combines the advantages of multispectral imaging, including optical properties, blood supply and tissue morphology, and can be used for further analysis, diagnosis or other applications. For example, assume the server uses multispectral imaging in gastroscopy and captures images of multiple channels, including visible and infrared channels. First, the server calculates the pixel average of each channel and centers the images. Then it selects the main features by eigenvalue decomposition of the covariance matrix and maps the data, via dimension reduction, into a more compact space. At the same time, it extracts spectral features, such as average spectra and spectral differences, as well as spatial features, such as texture and spatial-frequency features. Finally, based on these features, the server reconstructs the data and constructs a spectrum three-dimensional data model that comprehensively describes the spectral information and spatial characteristics of the stomach tissue, which helps to improve the diagnostic accuracy for gastric disease.
First, for each pixel point in the fused multi-dimensional pixel matrix, the extraction of spectral features includes calculating the spectral responses of different channels, such as light intensities or reflectivities over different wavelength ranges. These features reflect the optical properties, such as color and spectral features, of each pixel. Meanwhile, the extraction of the spatial distribution feature includes analyzing the relationship between pixels, such as texture, spatial frequency, etc., to reflect the spatial distribution pattern of the data. Next, data reconstruction is performed on the target dimensionality-reduced pixel data using the extracted spectral features and spatial distribution features. The objective is to recombine the pixel data to create an initial spectral model. In this model, each pixel represents data combining spectral information of different channels while taking into account the spatial distribution relationship between pixels. And finally, constructing a spectrum three-dimensional data model according to the extracted spectrum characteristics and the spatial distribution characteristics. This model will more fully describe the data of multispectral imaging, including optical properties, blood supply, and tissue morphology. The construction of models involves machine learning methods, statistical modeling or other data modeling techniques to integrate different features into one comprehensive model. For example, assume that the server uses endoscopic multispectral imaging techniques to study liver tissue. The server collects multi-channel spectral images, including visible and infrared spectra. First, the server extracts spectral features, such as average spectra, spectral differences, and spectral peaks, from each pixel point, while extracting spatial distribution features, such as texture and spatial frequencies. 
The server then uses these features to reconstruct the target dimension-reduced pixel data, creating an initial spectral model. Finally, the server constructs a spectrum three-dimensional data model from the spectral features and the spatial-distribution features, so that the optical characteristics, blood supply and tissue morphology of the liver tissue can be understood more comprehensively.
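The per-channel spectral statistics (average, maximum, minimum spectra) and one common spatial-distribution feature can be sketched as follows. The spatial-frequency formula used here is a standard image metric chosen for illustration, not a feature prescribed by the specification.

```python
import numpy as np

def spectral_features(stack):
    """Per-channel spectral statistics over a (channels, H, W) stack,
    as in the feature-extraction step: average, maximum and minimum."""
    flat = stack.reshape(stack.shape[0], -1)
    return {"mean": flat.mean(axis=1),
            "max": flat.max(axis=1),
            "min": flat.min(axis=1)}

def spatial_frequency(image):
    """A common spatial-distribution feature: the spatial frequency of a
    2-D image, combining row-wise and column-wise pixel differences."""
    rf = np.sqrt(np.mean(np.diff(image, axis=0) ** 2))  # row frequency
    cf = np.sqrt(np.mean(np.diff(image, axis=1) ** 2))  # column frequency
    return float(np.sqrt(rf ** 2 + cf ** 2))
```

Texture descriptors (e.g. co-occurrence statistics) would slot in alongside `spatial_frequency` in the same way, one scalar or vector per image.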
S105, performing image feature analysis on the spectrum three-dimensional data model through a preset image feature analysis model to obtain an image feature analysis result;
Specifically, the server first builds an image feature analysis model in advance. This model typically includes deep-learning components such as convolutional neural networks (CNNs) and recurrent neural networks (RNNs): CNNs process the spatial features of images, while RNNs capture timing-relationship features. The spectrum three-dimensional data model is provided as input to this preset image feature analysis model; it contains the complex information of the multispectral imaging data, including optical properties, blood supply and tissue morphology. The convolutional neural network performs convolution processing on the spectrum three-dimensional data model to extract its surface features. Convolution operations capture spatial features in an image, identifying edges, textures, patterns and the like; these surface features reflect the local nature of the tissue structure. The recurrent neural network extracts timing-relationship features from the spectrum three-dimensional data model. This step helps to analyze dynamic information in the model, such as changes in blood flow or the time-series evolution of tissue; the RNN can capture temporal correlations and sequence patterns in the data. Finally, an image feature analysis result of the spectrum three-dimensional data model is generated from the model surface features extracted by the convolutional neural network and the model timing features extracted by the recurrent neural network. These results reflect the deep information of the multispectral imaging data, including the optical properties and dynamic changes of the tissue. For example, assume that a server is using endoscopic multispectral imaging techniques to examine the lung tissue of a patient and has acquired multi-channel multispectral image data, including visible and infrared spectra.
First, the server builds the preset image feature analysis model, which includes a convolutional neural network and a recurrent neural network, and then inputs the spectrum three-dimensional data model into it. The convolutional-network stage identifies spatial features of the lung tissue, such as alveolar structure and vascularity; these features help the server understand the morphology and optical properties of the tissue. The recurrent-network stage attends to timing relationships in the data, such as the respiratory movement of lung tissue or dynamic changes in blood supply; this information is critical for identifying abnormal situations. Finally, the server generates the image feature analysis result for the lung tissue from the outputs of the convolutional and recurrent networks. These results may include labeling of abnormal regions, estimation of blood-flow velocity and analysis of optical properties, helping physicians diagnose pulmonary disease more accurately.
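The two deep-learning primitives named above can be illustrated with minimal numpy versions: a 'valid' 2-D convolution of the kind a CNN layer applies to extract surface features, and one Elman-style recurrent update of the kind an RNN applies along the time axis. A real system would use a deep-learning framework; these sketches only show the core operations, and all weight shapes are illustrative.

```python
import numpy as np

def conv2d(image, kernel):
    """Minimal 'valid' 2-D convolution (without kernel flipping, i.e.
    cross-correlation, as CNN layers use it): the operation the CNN
    stage applies to pick up edges, textures and patterns."""
    kh, kw = kernel.shape
    h = image.shape[0] - kh + 1
    w = image.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def rnn_step(h_prev, x, W_h, W_x):
    """One step of a simple Elman-style recurrent unit: the update the
    RNN stage repeats along the time axis to capture timing features."""
    return np.tanh(W_h @ h_prev + W_x @ x)
```

Stacking `conv2d` outputs gives the model surface features; folding `rnn_step` over a sequence of frames gives the model timing features that S105 combines.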
S106, performing three-dimensional imaging on the image characteristic analysis result and the spectrum three-dimensional data model to obtain three-dimensional imaging data of the endoscope.
Specifically, the already obtained image feature analysis result and the spectrum three-dimensional data model are first decoded: the previously extracted feature and model information is converted into a data form that can be used for three-dimensional imaging. The image-feature decoded data contains the analyzed spatial features, and the data-model decoded data contains the optical properties of the multispectral data. Next, three-dimensional mapping is performed using the decoded image features and data model. The image features are combined with the optical properties to reconstruct the three-dimensional structure of the target object; by mapping image features onto optical properties, three-dimensional data with spatial information is obtained. On the basis of this mapping, a target three-dimensional model is generated, representing the shape and structure of the tissue or object observed by the endoscope; this model can be used for further analysis and visualization. Finally, interpolation and enhancement of pixel values are carried out on the target three-dimensional model. This step improves the quality and visual effect of the three-dimensional imaging data: interpolation can fill in potential data loss and enhance the sharpness and detail of the image. Through this step, the resulting endoscope three-dimensional imaging data becomes more useful for medical diagnosis and research. For example, assume that a server is using endoscopic multispectral imaging techniques to study a patient's digestive-tract tissue. The server has acquired multi-channel multispectral image data, including visible and infrared spectra, and has used a deep-learning model to perform feature analysis on the images, extracting the spatial features and optical characteristics of the tissues. First, the server decodes the image feature analysis result and the spectrum three-dimensional data model.
This step converts the output of the deep learning model into a data format that can be used for three-dimensional imaging. The image feature decoding data contains information of spatial features and the data model decoding data contains information of optical characteristics. Next, the server performs a three-dimensional mapping, combining the image features and the data model to reconstruct a three-dimensional structure of the digestive tract tissue. This process maps image features onto optical properties, generating three-dimensional data with spatial information. The server then generates a target three-dimensional model of the digestive tract tissue, which reflects its shape and structure. This model can be used for further analysis by a physician, for example for diagnosing tumors or other lesions. Finally, interpolation and enhancement of pixel values are carried out on the three-dimensional model so as to improve the quality of imaging data. This includes removing noise, enhancing contrast, and filling in data loss.
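The interpolation part of the final step can be illustrated with plain bilinear upsampling, which fills in pixel values between measured samples. This is one standard choice for the sketch, not necessarily the interpolation method the specification intends.

```python
import numpy as np

def bilinear_upsample(image, factor):
    """Bilinear interpolation of a 2-D image by an integer factor:
    fills in values between measured pixels, as in the interpolation
    and enhancement step applied to the target three-dimensional model."""
    h, w = image.shape
    ys = np.linspace(0, h - 1, h * factor)
    xs = np.linspace(0, w - 1, w * factor)
    y0 = np.clip(ys.astype(int), 0, h - 2)
    x0 = np.clip(xs.astype(int), 0, w - 2)
    dy = ys - y0
    dx = xs - x0
    # blend the four surrounding pixels of each output sample
    top = image[y0][:, x0] * (1 - dx) + image[y0][:, x0 + 1] * dx
    bot = image[y0 + 1][:, x0] * (1 - dx) + image[y0 + 1][:, x0 + 1] * dx
    return top * (1 - dy)[:, None] + bot * dy[:, None]
```

In a volumetric setting the same blend is applied along a third axis (trilinear interpolation); noise removal and contrast enhancement are separate passes.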
In the embodiment of the invention, the first spectral image data of each channel is acquired by a plurality of image sensors in an endoscope; a light source model of each channel is constructed and spectral correction is performed to obtain a plurality of second spectral image data; multi-dimensional data fusion is carried out on the plurality of second spectral image data to generate a multi-dimensional data stereo, which is weighted to obtain a fused multi-dimensional pixel matrix; data dimension reduction and feature extraction yield the target dimension-reduced pixel data, and data reconstruction and model construction yield the spectrum three-dimensional data model; image feature analysis then produces an image feature analysis result, and three-dimensional imaging of that result together with the spectrum three-dimensional data model produces the endoscope three-dimensional imaging data. The invention can acquire the spectral distribution of tissue at different wavelengths by utilizing the spectral information of multiple channels, thereby providing richer biological information. Light-source correction reduces the influence of light-source fluctuation on image quality. Multi-dimensional data fusion and feature extraction make the image information more representative and interpretable. By applying deep-learning techniques, namely the convolutional and recurrent neural networks, more complex features can be extracted from the spectrum three-dimensional data model, which improves the quality of endoscope multispectral multi-path imaging.
In a specific embodiment, the process of executing step S102 may specifically include the following steps:
(1) Acquiring spectrum distribution data of each channel, and carrying out segmentation processing on the spectrum distribution data to obtain a plurality of spectrum segmentation data corresponding to each spectrum distribution data;
(2) Respectively performing spline fitting on the plurality of spectrum segment data through a preset cubic polynomial to obtain a plurality of spline segments;
(3) Smoothing the spline segments to obtain a light source model of each channel;
(4) Carrying out light intensity prediction on a plurality of channels according to the light source model to obtain light intensity prediction data of each channel;
(5) And acquiring the light intensity real data of the first spectrum image data, and correcting the spectrum proportion of the light intensity predicted data and the light intensity real data to obtain a plurality of second spectrum image data.
Specifically, the server first acquires spectral images of a plurality of channels in the endoscope using a multispectral imaging device. Each channel corresponds to a predetermined wavelength range, and thus, the image of each channel contains spectral information within a specific wavelength range. These image data can be expressed as a relationship between spectral intensity and wavelength. The spectral distribution data of each channel is segmented into different segments or intervals for finer analysis of the optical properties. The segmentation process may take different approaches, such as dividing the data into equal or unequal-width wavelength bands, or segmenting based on changes in specific optical characteristics. The purpose of this step is to make the spectroscopic data easier to process and analyze. For each segmented spectral data, the data is fitted to a spline segment of the spectral curve using a preset cubic polynomial or other fitting method. Spline fitting is a mathematical technique that can be used to approximate and smooth spectral data to obtain a continuous spectral curve. These spline segments will represent the spectral characteristics of each segment. And smoothing the generated spline segment to eliminate the existing noise or abnormal value. Smoothing may be implemented using filters or other signal processing techniques to ensure stability and accuracy of the spectral data. This step helps to obtain a more reliable model of the optical properties. And constructing a light source model based on the spline segment and the spectral distribution information of each channel. The light source model describes the light source characteristics, including spectral distribution and intensity variation, in each channel in the endoscope. According to the light source model, light intensity prediction can be performed to obtain light intensity prediction data of each channel. 
These data reflect the light intensity distribution of the light source in the endoscope. Light-intensity real data of the first spectral image data are acquired, obtained through calibration or reference-standard measurement. The light-intensity prediction data is then compared with the real data to make a spectral-ratio correction; this step adjusts the light-intensity data to reflect the actual situation more accurately. Finally, a plurality of second spectral image data are obtained from the corrected data; these image data are more accurate and reliable and can be used for further analysis and application. For example, assume that a server is using endoscopic multispectral imaging techniques to study the gastric mucosa of a patient and collects multispectral image data over a plurality of channels covering the visible and infrared spectral ranges. First, the server segments the spectral distribution data of each channel, dividing it into different wavelength bands. Then a cubic polynomial fit is performed on the data of each band, generating spline segments that represent the optical characteristics within each band. Next, the server smooths the spline segments to remove noise and constructs from them a light source model describing the illumination characteristics within the gastric mucosa. According to this light source model, the server predicts the light-intensity distribution of each channel. Then the server acquires the light-intensity real data of the first spectral image data by reference-standard measurement, and by comparing the prediction data with the real data performs the spectral-ratio correction to ensure the accuracy of the data.
Finally, the server obtains a plurality of second spectral image data that have been corrected and are ready for medical diagnosis and study to more fully understand the optical characteristics and biological tissue information of the gastric mucosa.
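One plausible reading of the spectral-ratio correction in step (5) is a per-channel rescaling of the first spectral image data by the ratio of model-predicted to measured light intensity. Both the direction of the ratio and the epsilon guard below are assumptions of this sketch, not details fixed by the specification.

```python
import numpy as np

def spectral_ratio_correction(first_images, predicted, measured, eps=1e-12):
    """Sketch of the spectral-ratio correction: scale each channel of the
    first spectral image data (shape (channels, H, W)) by the ratio of
    the light-intensity prediction to the measured real intensity.
    `eps` guards against division by zero."""
    ratio = np.asarray(predicted, dtype=float) / (np.asarray(measured, dtype=float) + eps)
    return np.asarray(first_images, dtype=float) * ratio[:, None, None]
```

Under this reading, a channel whose measured intensity is twice the model prediction has its image halved, so all channels end up on the spectrum the light source model predicts.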
In a specific embodiment, as shown in fig. 2, the process of performing step S103 may specifically include the following steps:
S201, performing image alignment and brightness equalization on a plurality of second spectrum image data to obtain a plurality of standard spectrum image data;
S202, stacking a plurality of standard spectrum image data according to the channel sequence of a plurality of channels to generate a multi-dimensional data stereo;
And S203, respectively carrying out weight distribution on the channels to obtain a first channel weight of each channel, and carrying out pixel-weighted fusion on the multi-dimensional data stereo according to the first channel weights to obtain a fused multi-dimensional pixel matrix.
Specifically, the server first performs image alignment on the plurality of second spectral image data, ensuring that they are spatially aligned for subsequent processing. At the same time, brightness equalization is performed to ensure that the brightness of the image is consistent between different channels. This may be achieved by image processing techniques such as histogram equalization. The result is a plurality of standard spectral image data, which images have been normalized in alignment and brightness. Then, a plurality of standard spectral image data are stacked in the order of the channels they represent. This will produce a multi-dimensional data volume in which each channel represents a different optical property or band. Such stereo data is very useful in analyzing multi-channel information. Next, a weight is assigned to each channel. The weights reflect the importance of each channel in the overall multidimensional data. These weights may be determined based on a priori knowledge, experimental data, or automated algorithms. The assignment of weights needs to take into account the contribution of each channel in solving a particular problem or analyzing particular information. And carrying out pixel weighted fusion on the multidimensional data stereo according to the assigned weight. This means that the pixel values of the different channels are added but the pixel values of each channel are weighted according to their respective weights. In this way, each pixel in the fused multi-dimensional pixel matrix will contain information for multiple channels, and the contribution of each channel will be adjusted according to its weight. For example, assume that a server is studying multispectral imaging in the ophthalmic field to better understand the retinal condition of a patient. The server uses second spectral image data of a plurality of channels, each representing a different wavelength band or optical characteristic. 
First, the server performs image alignment on these second spectral images to ensure that they are consistent in position on the retina, and performs luminance equalization so that they are consistent in brightness; this is necessary because spectral imaging is affected by illumination conditions and instrument differences and therefore requires a standardization process. The server then stacks the standard spectral images together in the order of the channels they represent, creating a multi-dimensional data stereo in which each channel represents a different wavelength band, e.g. red, green and blue. Next, the server assigns weights to each channel based on previous studies or experimental data. For example, if the server knows that red light is important for detecting retinal vascular problems, it assigns a higher weight to the red-light channel. Finally, the server performs pixel-weighted fusion on the multi-dimensional data stereo according to the distributed weights. This step produces a fused multi-dimensional pixel matrix in which each pixel contains information from the red, green and blue channels, with the contribution of each channel determined by its corresponding weight. This fused data can be used for further retinal analysis to help doctors understand the patient's retinal health more fully.
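Brightness (luminance) equalization across channels can be as simple as rescaling every channel to a common mean intensity; histogram equalization, mentioned above, is a stronger alternative. A sketch, assuming a (channels, height, width) stack of already-aligned images and non-zero channel means:

```python
import numpy as np

def equalize_brightness(stack):
    """Luminance equalization across channels (sketch of S201): rescale
    each channel image so that all channels share the same mean
    intensity, reducing brightness differences between channels."""
    means = stack.reshape(stack.shape[0], -1).mean(axis=1)
    target = means.mean()                       # common brightness level
    return stack * (target / means)[:, None, None]
```

Multiplicative rescaling preserves each channel's internal contrast, which matters because the subsequent weighted fusion should reflect spectral content rather than acquisition-gain differences.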
In a specific embodiment, as shown in fig. 3, the process of executing step S104 may specifically include the following steps:
S301, respectively calculating the average value of pixels of each channel in the fusion multi-dimensional pixel matrix, and carrying out centering treatment on the fusion multi-dimensional pixel matrix according to the average value of the pixels to obtain a centering pixel matrix;
S302, calculating a covariance matrix corresponding to the centralized pixel matrix, and carrying out eigenvalue decomposition on the covariance matrix to obtain eigenvalues and corresponding eigenvectors;
S303, selecting main component characteristics according to the size of the characteristic values, and performing dimension reduction mapping on the fusion multi-dimensional pixel matrix according to the main component characteristics to obtain target dimension reduction pixel data;
S304, respectively extracting spectral features and spatial distribution features of the fusion multidimensional pixel matrix to obtain spectral features and spatial distribution features, and carrying out data reconstruction and model construction on target dimensionality reduction pixel data according to the spectral features and the spatial distribution features to obtain a spectral three-dimensional data model.
Specifically, the server first calculates the pixel average value of each channel in the fused multi-dimensional pixel matrix, respectively. This will provide an average pixel value for each channel. The whole fused multi-dimensional pixel matrix is then centred on these averages, i.e. each pixel value minus the average value of the corresponding channel. This step results in a centred pixel matrix which facilitates further analysis of the data. Based on the centralized pixel matrix, a covariance matrix is calculated. Covariance matrices describe the correlation and variation between different channels. And then, carrying out eigenvalue decomposition on the covariance matrix to obtain eigenvalues and corresponding eigenvectors. The eigenvalues reflect the importance of each principal component, while the eigenvectors represent the direction of the principal component. And selecting the main component characteristics according to the magnitude of the characteristic values. Typically, the first few principal components with the largest eigenvalues may be selected. These principal components contain most of the data change information. And then, performing dimension reduction mapping on the fusion multidimensional pixel matrix according to the selected principal component characteristics. The dimension-reduction map maps the multi-dimensional data to a low-dimensional space to reduce the complexity of the data. And carrying out spectral feature extraction and spatial distribution feature extraction on the target pixel data after the dimension reduction. Spectral feature extraction involves extracting information of optical characteristics or bands from each pixel. Spatial distribution feature extraction involves analyzing the spatial relationship and distribution between pixels. These feature extraction methods may choose different techniques and algorithms depending on the particular problem and application. 
Finally, data reconstruction and model construction are performed on the target dimension-reduced pixel data according to the extracted spectral and spatial distribution features. Regression, machine learning, or deep learning methods may be used to recombine the features, reconstructing the original data or building a model for further analysis, visualization, or prediction. For example, assume a server is using endoscopic multispectral imaging to study skin lesions. The server collects multi-dimensional pixel matrix data over several channels, each representing a different skin characteristic. First, the server calculates the pixel average of each channel and centers the whole matrix to make the data easier to process. It then obtains the principal component features and corresponding eigenvectors by calculating the covariance matrix and performing eigenvalue decomposition, and selects the first few principal components that contain most of the variation in the data. Next, the server applies the dimension-reduction mapping, projecting the multi-dimensional pixel matrix into a low-dimensional space. After dimension reduction, the server extracts spectral features, such as the color information of the skin, and spatial distribution features, such as the shape and distribution of the lesions; these features help characterize the nature of the skin lesions. Finally, the server reconstructs the data or constructs a skin lesion model from the extracted features to assist the physician in analysis and diagnosis. Such a model can be used to predict the nature or trend of the lesions and thus guide treatment decisions, improving the usefulness of endoscopic multispectral imaging in the diagnosis of skin lesions.
In a specific embodiment, as shown in fig. 4, the process of executing step S304 may specifically include the following steps:
S401, performing spectral response feature extraction on the fusion multi-dimensional pixel matrix to obtain spectral features, and performing pixel spatial distribution feature extraction on the fusion multi-dimensional pixel matrix to obtain spatial distribution features;
S402, performing data reconstruction on the target dimension-reduced pixel data to obtain an initial spectrum model, wherein each pixel point in the initial spectrum model represents data combining the spectral information of different channels;
S403, carrying out model construction on the initial spectrum model according to the spectrum characteristics and the space distribution characteristics to obtain a spectrum three-dimensional data model.
Specifically, the server first performs spectral response feature extraction on the fused multi-dimensional pixel matrix. The aim is to extract spectral information from the multiple channels of each pixel in order to better understand the relationships between the channels and their optical properties. Spectral response features may include color, band response, and so on. Pixel spatial distribution features are then extracted from the fused multi-dimensional pixel matrix. This step focuses on the spatial relationships and distribution between pixels, so as to analyze the shape, structure, and arrangement of the object; such features may include texture, shape, and edge information. After the spectral response features and the pixel spatial distribution features are extracted, data reconstruction is performed on the target dimension-reduced pixel data: the previously reduced data is recombined to restore the original multi-channel information as accurately as possible, preserving detailed information about the object. Finally, a model is constructed from the spectral response features, the pixel spatial distribution features, and the reconstructed multi-channel data. Various algorithms may be used, such as machine learning models, deep learning models, or statistical models, for further analysis and visualization of the data; the construction process should consider how best to combine the spectral and spatial information to describe the object or scene. For example, assume a server is using endoscopic multispectral imaging to study gastrointestinal lesions. The server collects multi-dimensional pixel matrix data over several channels, each channel representing a different wavelength range.
First, the server extracts spectral response features from the multi-dimensional pixel matrix, i.e., analyzes the wavelength ranges of the different channels to determine which channels respond to a particular biological tissue or lesion; this helps identify the optical properties of the lesion. Next, the server extracts pixel spatial distribution features to analyze the shape, size, and distribution of lesions, detecting information such as edges, texture, and density. The server then reconstructs the data, recombining the previously dimension-reduced pixel data to recover the original multi-channel information, which helps preserve detailed information about the lesions. Finally, the server constructs a model from the spectral response features, the pixel spatial distribution features, and the reconstructed multi-channel data, for example a deep learning model such as a convolutional neural network that automatically identifies and classifies different types of gastrointestinal lesions.
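The spectral response and pixel spatial distribution features described above could, for instance, be computed as per-pixel band statistics and gradient-based edge strength. A minimal NumPy sketch under those assumptions (the specific descriptors chosen here are illustrative; the patent leaves the exact techniques open):

```python
import numpy as np

def spectral_features(cube):
    """Per-pixel spectral descriptors from an (H, W, C) multi-channel cube."""
    mean_band = cube.mean(axis=2)       # overall response across channels
    peak_band = cube.argmax(axis=2)     # channel with the strongest response
    return mean_band, peak_band

def spatial_features(channel):
    """Edge strength of one (H, W) channel via finite-difference gradients."""
    gy, gx = np.gradient(channel.astype(float))
    return np.hypot(gx, gy)             # gradient magnitude, high at edges

cube = np.zeros((4, 4, 3))
cube[1:3, 1:3, 2] = 1.0                 # a bright "lesion" in channel 2
mean_band, peak_band = spectral_features(cube)
edges = spatial_features(cube[:, :, 2])
print(peak_band[1, 1], edges.max() > 0)
```

Here the peak-response channel identifies which wavelength band the simulated lesion responds to, while the gradient magnitude localizes its boundary, mirroring the two feature families used in S401.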
In a specific embodiment, the process of executing step S105 may specifically include the following steps:
(1) Inputting the spectral three-dimensional data model into a preset image feature analysis model, wherein the image feature analysis model comprises: a convolutional neural network and a recurrent neural network;
(2) Performing convolution processing on the spectral three-dimensional data model through the convolutional neural network to obtain model surface features;
(3) Extracting time-sequence relationship features from the spectral three-dimensional data model through the recurrent neural network to obtain model time-sequence features;
(4) Generating an image feature analysis result of the spectral three-dimensional data model according to the model surface features and the model time-sequence features.
Specifically, the spectral three-dimensional data model is first input into the preset image feature analysis model. The image feature analysis model is a composite neural network structure, generally comprising a convolutional neural network and a recurrent neural network, which extract different types of features. Its purpose is to map the spectral three-dimensional data model into a high-dimensional feature space in order to better understand the content of the image. Convolution processing is performed on the spectral three-dimensional data model through the convolutional neural network to obtain model surface features. Convolutional neural networks are good at capturing local features of images, which is very useful for analyzing the surface features of the spectral three-dimensional data model; the convolution operation can identify information such as edges, textures, and shapes. Time-sequence relationship features are then extracted from the spectral three-dimensional data model through the recurrent neural network. Recurrent neural networks excel at processing sequential and time-series data: here, they capture timing relationships in the data model, such as temporal variations or dynamic features of the spectral data, which help in understanding the dynamics of the model. Finally, an image feature analysis result for the spectral three-dimensional data model is generated from the model surface features and the model time-sequence features. The result may be a high-dimensional feature vector that encodes a deep understanding of the data model; these features may be used for different tasks such as classification, segmentation, and detection. For example, assume the server is researching the use of endoscopic multispectral imaging in the diagnosis of pulmonary disease.
The server has acquired a spectral three-dimensional data model of the lung tissue and wishes to use image feature analysis to extract more information. First, the spectral three-dimensional data model of the lung tissue is input into the image feature analysis model, which includes a convolutional neural network and a recurrent neural network. The convolutional neural network is responsible for capturing surface features of the lung tissue, such as tumor size, shape, and distribution; through convolution operations, the model can identify abnormal regions. The recurrent neural network extracts the time-sequence relationship features of the lung tissue, which is very important for detecting dynamic changes, such as changes in blood supply or the evolution of lesions. Finally, an image feature analysis result for the lung tissue is generated from the outputs of the convolutional and recurrent neural networks; these features may be used for tasks such as classification of lung disease, lesion segmentation, or prediction of disease progression.
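A heavily simplified NumPy sketch of the two branches described above: a convolution extracting local surface features, and a recurrent update aggregating per-frame summaries into a time-sequence feature. A real implementation would use a deep-learning framework with learned weights; the kernel, the random weights, and the frame summaries here are illustrative assumptions:

```python
import numpy as np

def conv2d(img, kernel):
    """Valid-mode sliding-window filtering (cross-correlation, as in most
    deep-learning frameworks): extracts local surface features such as edges."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i+kh, j:j+kw] * kernel)
    return out

def rnn_timing_feature(frames, w_in, w_rec):
    """Elman-style recurrence over per-frame summaries: the final hidden
    state encodes how the sequence evolved over time."""
    h = np.zeros(w_rec.shape[0])
    for x in frames:
        h = np.tanh(w_in @ x + w_rec @ h)
    return h

edge_kernel = np.array([[-1., 0., 1.]] * 3)   # simple vertical-edge detector
frame = np.tile(np.arange(6.), (6, 1))        # image with a horizontal gradient
surface = conv2d(frame, edge_kernel)          # "model surface features"

rng = np.random.default_rng(1)
summaries = [np.array([conv2d(frame * t, edge_kernel).mean()]) for t in range(3)]
w_in, w_rec = rng.normal(size=(4, 1)), rng.normal(size=(4, 4)) * 0.1
timing = rnn_timing_feature(summaries, w_in, w_rec)   # "model timing features"
print(surface.shape, timing.shape)
```

Concatenating `surface` statistics and `timing` would give the kind of combined feature vector the analysis result refers to.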
In a specific embodiment, the process of executing step S106 may specifically include the following steps:
(1) Decoding the image characteristic analysis result and the spectrum stereo data model to obtain image characteristic decoding data and data model decoding data;
(2) Performing three-dimensional mapping on the image feature decoding data and the data model decoding data to obtain a target three-dimensional model;
(3) And carrying out pixel value interpolation and enhancement on the target three-dimensional model to obtain the three-dimensional imaging data of the endoscope.
Specifically, the image feature analysis result and the spectral three-dimensional data model are first input into the corresponding decoders to obtain image feature decoded data and data model decoded data; the objective is to restore the original image features and data model for further processing. Next, the image feature decoded data and the data model decoded data are mapped into three-dimensional space. This can be accomplished using three-dimensional reconstruction algorithms and geometric principles; the spatial distribution and shape information of the data model must be considered to ensure an accurate mapping. The generated three-dimensional model may contain incomplete or non-uniform pixel value data, so pixel value interpolation and enhancement are required to fill in missing pixel values and improve image quality. This may be achieved by various image processing techniques, such as bicubic interpolation, histogram equalization, and denoising. Finally, from the three-dimensional model and the enhanced pixel value data generated in the above steps, the three-dimensional imaging data of the endoscope is obtained. These data can be displayed and analyzed on the endoscopic imaging device, providing the physician with a more comprehensive and clear endoscopic image to facilitate diagnosis and treatment of disease. For example, assume a server is using endoscopic multispectral imaging to diagnose gastric disease. The server has collected multispectral data and, through image feature analysis and data modeling, has constructed a complex spectral three-dimensional data model. First, the server inputs the image feature analysis result and the data model into the corresponding decoders to obtain image feature decoded data and data model decoded data; after decoding, the server obtains the original spectral information and data model.
The server then performs the three-dimensional mapping, mapping the two-dimensional image data and the data model into three-dimensional space while taking into account the complex structure of the stomach tissue; this ensures an accurate three-dimensional model. Next, the server performs pixel value interpolation and enhancement to fill in missing pixel values and improve image quality, helping the physician see the details of the stomach tissue more clearly. Finally, the server obtains endoscopic three-dimensional imaging data that can be displayed on the endoscopic device and used by the physician for diagnostic and therapeutic decisions. In this way, endoscopic multispectral imaging can provide more information and better visualization tools for medical diagnosis, improving early detection and treatment of disease.
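Two of the enhancement techniques named above, histogram equalization and missing-pixel filling, can be sketched in NumPy as follows. The one-pass neighbour-mean fill is an illustrative stand-in for proper interpolation (e.g. bicubic), not the patent's prescribed method:

```python
import numpy as np

def equalize_histogram(img):
    """Spread 8-bit pixel intensities over the full 0-255 range via the CDF."""
    hist, _ = np.histogram(img.flatten(), bins=256, range=(0, 256))
    cdf = hist.cumsum()
    cdf_m = np.ma.masked_equal(cdf, 0)          # ignore empty bins
    cdf_s = (cdf_m - cdf_m.min()) * 255 / (cdf_m.max() - cdf_m.min())
    lut = np.ma.filled(cdf_s, 0).astype(np.uint8)
    return lut[img]                             # remap every pixel

def fill_missing(img, mask):
    """Replace masked pixels with the mean of their valid 4-neighbours (one pass)."""
    out = img.astype(float).copy()
    h, w = img.shape
    for i, j in zip(*np.where(mask)):
        neigh = [out[y, x] for y, x in ((i-1, j), (i+1, j), (i, j-1), (i, j+1))
                 if 0 <= y < h and 0 <= x < w and not mask[y, x]]
        if neigh:
            out[i, j] = np.mean(neigh)
    return out

img = np.full((4, 4), 100, dtype=np.uint8)
img[0, 0], img[3, 3] = 20, 200                  # low-contrast image with extremes
enhanced = equalize_histogram(img)
mask = np.zeros((4, 4), bool); mask[1, 1] = True   # one missing pixel
filled = fill_missing(img, mask)
print(enhanced.min(), enhanced.max(), filled[1, 1])
```

After equalization the intensity range is stretched to the full 0 to 255 span, and the missing pixel is reconstructed from its neighbourhood, which is the effect the interpolation-and-enhancement step aims for on the target three-dimensional model.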
The endoscopic multispectral multi-path imaging method in the embodiment of the present invention is described above; the endoscopic multispectral multi-path imaging device in the embodiment of the present invention is described below. Referring to fig. 5, an embodiment of the endoscopic multispectral multi-path imaging device includes:
The acquisition module 501 is configured to acquire spectral images of a plurality of channels through a plurality of image sensors in an endoscope, so as to obtain first spectral image data of each channel, where each image sensor corresponds to one channel, and each channel corresponds to a preset wavelength range;
The construction module 502 is configured to obtain spectral distribution data of each channel, construct a light source model of each channel according to the spectral distribution data, and perform spectral correction on the first spectral image data according to the light source model to obtain a plurality of second spectral image data;
The fusion module 503 is configured to perform multidimensional data fusion on the plurality of second spectral image data, generate a multidimensional data stereo, and perform weighting processing on the multidimensional data stereo to obtain a fused multidimensional pixel matrix;
The reconstruction module 504 is configured to perform data dimension reduction and feature extraction on the fused multi-dimensional pixel matrix to obtain target dimension reduction pixel data, and perform data reconstruction and model construction on the target dimension reduction pixel data to obtain a spectrum three-dimensional data model;
The analysis module 505 is configured to perform image feature analysis on the spectrum three-dimensional data model through a preset image feature analysis model, so as to obtain an image feature analysis result;
And the imaging module 506 is used for performing three-dimensional imaging on the image characteristic analysis result and the spectrum stereo data model to obtain three-dimensional imaging data of the endoscope.
Through the cooperation of the above components, first spectral image data of each channel is acquired by a plurality of image sensors in the endoscope; a light source model of each channel is constructed and spectral correction is performed to obtain a plurality of second spectral image data; multi-dimensional data fusion is performed on the plurality of second spectral image data to generate a multi-dimensional data volume, which is weighted to obtain a fused multi-dimensional pixel matrix; data dimension reduction and feature extraction yield target dimension-reduced pixel data, and data reconstruction and model construction yield a spectral three-dimensional data model; image feature analysis then produces an image feature analysis result. By using the spectral information of multiple channels, the invention can acquire the spectral distribution of tissues at different wavelengths, providing richer biological information. Light source correction reduces the influence of light source fluctuation on image quality, and the multi-dimensional data fusion and feature extraction make the image information more representative and interpretable. By applying deep learning techniques, namely convolutional neural networks and recurrent neural networks, more complex features can be extracted from the spectral three-dimensional data model, improving the quality of endoscopic multispectral multi-path imaging.
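The stacking and pixel-weighted fusion performed by the fusion module 503 can be sketched as follows (the weight values and image sizes are illustrative assumptions):

```python
import numpy as np

def fuse_channels(images, weights):
    """Stack per-channel images into an (H, W, C) volume, then weight-fuse to (H, W)."""
    volume = np.stack(images, axis=-1)    # the multi-dimensional data volume
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                       # normalise the channel weights
    fused = (volume * w).sum(axis=-1)     # pixel-wise weighted fusion
    return volume, fused

# Three aligned, brightness-equalized channel images with different responses
imgs = [np.full((2, 2), v, dtype=float) for v in (10, 20, 30)]
volume, fused = fuse_channels(imgs, weights=[1, 2, 1])
print(volume.shape, fused[0, 0])
```

Each fused pixel is a convex combination of the corresponding pixels across channels, so channels judged more informative (here the middle one) contribute more to the fused multi-dimensional pixel matrix.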
The endoscopic multispectral multi-path imaging device in the embodiment of the present invention is described in detail above with reference to fig. 5 from the perspective of modular functional entities; it is described in detail below from the perspective of hardware processing.
Fig. 6 is a schematic structural diagram of an endoscopic multispectral multi-path imaging device 600 according to an embodiment of the present invention. The endoscopic multispectral multi-path imaging device 600 may vary considerably in configuration or performance, and may include one or more processors (central processing units, CPUs) 610 (e.g., one or more processors), a memory 620, and one or more storage media 630 (e.g., one or more mass storage devices) storing application programs 633 or data 632. The memory 620 and the storage medium 630 may be transitory or persistent storage. The program stored in the storage medium 630 may include one or more modules (not shown), each of which may comprise a series of instruction operations in the multi-path imaging device 600. Further, the processor 610 may be configured to communicate with the storage medium 630 and execute the series of instruction operations in the storage medium 630 on the endoscopic multispectral multi-path imaging device 600.
The endoscopic multispectral multi-path imaging device 600 may also include one or more power supplies 640, one or more wired or wireless network interfaces 650, one or more input/output interfaces 660, and/or one or more operating systems 631, such as Windows Server, Mac OS X, Unix, Linux, FreeBSD, and the like. It will be appreciated by those skilled in the art that the configuration shown in fig. 6 does not limit the endoscopic multispectral multi-path imaging device, which may include more or fewer components than shown, combine certain components, or arrange the components differently.
The present invention also provides an endoscopic multispectral multichannel imaging device, which includes a memory and a processor, wherein the memory stores computer readable instructions that, when executed by the processor, cause the processor to perform the steps of the endoscopic multispectral multichannel imaging method in the above embodiments.
The present invention also provides a computer readable storage medium, which may be a non-volatile computer readable storage medium, and may also be a volatile computer readable storage medium, where instructions are stored in the computer readable storage medium, which when executed on a computer, cause the computer to perform the steps of the endoscopic multispectral multichannel imaging method.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, which are not repeated herein.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied essentially or in part or all of the technical solution or in part in the form of a software product stored in a storage medium, including instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a read-only memory (ROM), a random access memory (random access memory, RAM), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
The above embodiments are only for illustrating the technical solution of the present invention, and not for limiting the same; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (8)

1. An endoscopic multispectral multipath imaging method, characterized in that the endoscopic multispectral multipath imaging method comprises the following steps:
Acquiring spectral images of a plurality of channels through a plurality of image sensors in an endoscope respectively to obtain first spectral image data of each channel, wherein each image sensor corresponds to one channel, and each channel corresponds to a preset wavelength range;
Acquiring spectrum distribution data of each channel, constructing a light source model of each channel according to the spectrum distribution data, and respectively carrying out spectrum correction on the first spectrum image data according to the light source model to obtain a plurality of second spectrum image data;
multidimensional data fusion is carried out on the plurality of second spectrum image data to generate multidimensional data stereo, and the multidimensional data stereo is weighted to obtain a fused multidimensional pixel matrix;
Performing data dimension reduction and feature extraction on the fusion multi-dimensional pixel matrix to obtain target dimension reduction pixel data, and performing data reconstruction and model construction on the target dimension reduction pixel data to obtain a spectrum three-dimensional data model; the method specifically comprises the following steps: respectively calculating the pixel average value of each channel in the fusion multi-dimensional pixel matrix, and carrying out centering treatment on the fusion multi-dimensional pixel matrix according to the pixel average value to obtain a centering pixel matrix; calculating a covariance matrix corresponding to the centralized pixel matrix, and carrying out eigenvalue decomposition on the covariance matrix to obtain eigenvalues and corresponding eigenvectors; selecting main component characteristics according to the magnitude of the characteristic values, and performing dimension reduction mapping on the fused multi-dimensional pixel matrix according to the main component characteristics to obtain target dimension reduction pixel data; spectral response feature extraction is carried out on the fusion multi-dimensional pixel matrix to obtain spectral features, and pixel spatial distribution feature extraction is carried out on the fusion multi-dimensional pixel matrix to obtain spatial distribution features; carrying out data reconstruction on the target dimensionality reduction pixel data to obtain an initial spectrum model, wherein each pixel point in the initial spectrum model represents data combined with spectrum information of different channels; according to the spectral characteristics and the spatial distribution characteristics, carrying out model construction on the initial spectral model to obtain a spectral three-dimensional data model;
Carrying out image feature analysis on the spectrum three-dimensional data model through a preset image feature analysis model to obtain an image feature analysis result;
and carrying out three-dimensional imaging on the image characteristic analysis result and the spectrum three-dimensional data model to obtain three-dimensional imaging data of the endoscope.
2. The method of multi-channel imaging of an endoscope according to claim 1, wherein the steps of obtaining spectral distribution data of each channel, constructing a light source model of each channel according to the spectral distribution data, and performing spectral correction on the first spectral image data according to the light source model to obtain a plurality of second spectral image data, respectively, include:
Acquiring spectrum distribution data of each channel, and carrying out segmentation processing on the spectrum distribution data to obtain a plurality of spectrum segmentation data corresponding to each spectrum distribution data;
respectively performing spline fitting on the plurality of spectrum segment data through a preset cubic polynomial to obtain a plurality of spline segments;
Smoothing the spline segments to obtain a light source model of each channel;
Carrying out light intensity prediction on the channels according to the light source model to obtain light intensity prediction data of each channel;
and acquiring the light intensity real data of the first spectrum image data, and correcting the light intensity prediction data and the light intensity real data in a spectrum proportion to obtain a plurality of second spectrum image data.
3. The multi-channel imaging method of endoscope multi-spectrum according to claim 1, wherein said performing multi-dimensional data fusion on said plurality of second spectrum image data to generate a multi-dimensional data volume, and performing weighting processing on said multi-dimensional data volume to obtain a fused multi-dimensional pixel matrix, comprises:
Performing image alignment and brightness equalization on the plurality of second spectrum image data to obtain a plurality of standard spectrum image data;
stacking the plurality of standard spectrum image data according to the channel sequence of the plurality of channels to generate a multi-dimensional data stereo;
And respectively carrying out weight distribution on the channels to obtain a first channel weight of each channel, and carrying out pixel weighted fusion on the multi-dimensional data three-dimensional according to the first channel weight to obtain a fused multi-dimensional pixel matrix.
4. The multi-channel imaging method of endoscope multispectral according to claim 1, wherein the performing image feature analysis on the spectral stereo data model by a preset image feature analysis model to obtain an image feature analysis result comprises:
Inputting the spectrum three-dimensional data model into a preset image feature analysis model, wherein the image feature analysis model comprises: convolutional neural networks and recurrent neural networks;
carrying out convolution processing on the spectrum three-dimensional data model through the convolution neural network to obtain model surface characteristics;
extracting time sequence relation features of the spectrum three-dimensional data model through the cyclic neural network to obtain model time sequence features;
And generating an image characteristic analysis result of the spectrum three-dimensional data model according to the model surface characteristic and the model time sequence characteristic.
5. The multi-path imaging method of endoscope multispectral according to claim 1, wherein the three-dimensional imaging of the image feature analysis result and the spectral stereo data model to obtain three-dimensional imaging data of the endoscope comprises:
Decoding the image characteristic analysis result and the spectrum stereo data model to obtain image characteristic decoding data and data model decoding data;
performing three-dimensional mapping on the image feature decoding data and the data model decoding data to obtain a target three-dimensional model;
And carrying out pixel value interpolation and enhancement on the target three-dimensional model to obtain the three-dimensional imaging data of the endoscope.
6. An endoscopic multispectral multiplexed imaging device, the endoscopic multispectral multiplexed imaging device comprising:
The acquisition module is used for respectively acquiring spectral images of a plurality of channels through a plurality of image sensors in the endoscope to obtain first spectral image data of each channel, wherein each image sensor corresponds to one channel, and each channel corresponds to a preset wavelength range;
The construction module is used for acquiring the spectrum distribution data of each channel, constructing a light source model of each channel according to the spectrum distribution data, and carrying out spectrum correction on the first spectrum image data according to the light source model to obtain a plurality of second spectrum image data;
The fusion module is used for carrying out multidimensional data fusion on the plurality of second spectrum image data to generate multidimensional data stereo, and carrying out weighting treatment on the multidimensional data stereo to obtain a fused multidimensional pixel matrix;
The reconstruction module is used for carrying out data dimension reduction and feature extraction on the fusion multidimensional pixel matrix to obtain target dimension reduction pixel data, and carrying out data reconstruction and model construction on the target dimension reduction pixel data to obtain a spectrum three-dimensional data model; the method specifically comprises the following steps: respectively calculating the pixel average value of each channel in the fusion multi-dimensional pixel matrix, and carrying out centering treatment on the fusion multi-dimensional pixel matrix according to the pixel average value to obtain a centering pixel matrix; calculating a covariance matrix corresponding to the centralized pixel matrix, and carrying out eigenvalue decomposition on the covariance matrix to obtain eigenvalues and corresponding eigenvectors; selecting main component characteristics according to the magnitude of the characteristic values, and performing dimension reduction mapping on the fused multi-dimensional pixel matrix according to the main component characteristics to obtain target dimension reduction pixel data; spectral response feature extraction is carried out on the fusion multi-dimensional pixel matrix to obtain spectral features, and pixel spatial distribution feature extraction is carried out on the fusion multi-dimensional pixel matrix to obtain spatial distribution features; carrying out data reconstruction on the target dimensionality reduction pixel data to obtain an initial spectrum model, wherein each pixel point in the initial spectrum model represents data combined with spectrum information of different channels; according to the spectral characteristics and the spatial distribution characteristics, carrying out model construction on the initial spectral model to obtain a spectral three-dimensional data model;
The analysis module is configured to perform image feature analysis on the spectral three-dimensional data model through a preset image feature analysis model to obtain an image feature analysis result;
and the imaging module is configured to perform three-dimensional imaging according to the image feature analysis result and the spectral three-dimensional data model to obtain three-dimensional imaging data of the endoscope.
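The dimension-reduction steps recited in the reconstruction module above (channel-wise centering, covariance computation, eigenvalue decomposition, principal-component selection, and dimension-reduction mapping) describe standard PCA. A minimal sketch of that procedure is shown below; the array shapes, the channel count, and the target dimensionality `k` are illustrative assumptions, not values from the patent.

```python
import numpy as np

# Hypothetical fused multi-dimensional pixel matrix: H x W spatial grid, C spectral channels
H, W, C = 4, 5, 6
rng = np.random.default_rng(0)
fused = rng.random((H, W, C))

# Flatten spatial dimensions: each row is one pixel's C-channel spectral response
X = fused.reshape(-1, C)                      # shape (H*W, C)

# 1. Center each channel on its pixel mean
channel_means = X.mean(axis=0)
X_centered = X - channel_means

# 2. Covariance matrix across channels, then its eigenvalue decomposition
cov = np.cov(X_centered, rowvar=False)        # shape (C, C)
eigvals, eigvecs = np.linalg.eigh(cov)        # eigh: covariance matrix is symmetric

# 3. Select principal-component features by descending eigenvalue magnitude
order = np.argsort(eigvals)[::-1]
k = 3                                         # assumed target reduced dimensionality
components = eigvecs[:, order[:k]]            # shape (C, k)

# 4. Dimension-reduction mapping of the centered pixel data
X_reduced = X_centered @ components           # shape (H*W, k)

print(X_reduced.shape)                        # (20, 3)
```

The reduced rows can then be reshaped back to the `H x W` grid for the subsequent data-reconstruction step, with each pixel carrying `k` combined spectral components instead of the original `C` channels.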
7. An endoscopic multispectral multiplex imaging device, the endoscopic multispectral multiplex imaging device comprising: a memory and at least one processor, the memory having instructions stored therein;
The at least one processor invokes the instructions in the memory to cause the endoscopic multispectral multiplexed imaging device to perform the endoscopic multispectral multiplexed imaging method of any one of claims 1-5.
8. A computer readable storage medium having instructions stored thereon, which when executed by a processor, implement the endoscopic multispectral multiplexed imaging method of any one of claims 1-5.
CN202311407775.6A 2023-10-27 2023-10-27 Multi-path imaging method, device, equipment and storage medium for endoscope multi-spectrum Active CN117152362B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311407775.6A CN117152362B (en) 2023-10-27 2023-10-27 Multi-path imaging method, device, equipment and storage medium for endoscope multi-spectrum

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311407775.6A CN117152362B (en) 2023-10-27 2023-10-27 Multi-path imaging method, device, equipment and storage medium for endoscope multi-spectrum

Publications (2)

Publication Number Publication Date
CN117152362A CN117152362A (en) 2023-12-01
CN117152362B true CN117152362B (en) 2024-05-28

Family

ID=88884607

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311407775.6A Active CN117152362B (en) 2023-10-27 2023-10-27 Multi-path imaging method, device, equipment and storage medium for endoscope multi-spectrum

Country Status (1)

Country Link
CN (1) CN117152362B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117893871B (en) * 2024-03-14 2024-07-05 深圳市日多实业发展有限公司 Spectrum segment fusion method, device, equipment and storage medium
CN118196033B (en) * 2024-03-18 2024-08-06 江苏商贸职业学院 Non-contact high-precision measurement method and system based on machine vision fusion

Citations (6)

Publication number Priority date Publication date Assignee Title
CN104541153A (en) * 2012-07-02 2015-04-22 新加坡国立大学 Methods related to real-time cancer diagnostics at endoscopy utilizing fiber-optic raman spectroscopy
WO2017215284A1 (en) * 2016-06-14 2017-12-21 山东大学 Gastrointestinal tumor microscopic hyper-spectral image processing method based on convolutional neural network
CN115202030A (en) * 2022-07-29 2022-10-18 深圳英美达医疗技术有限公司 Method and device for calibrating endoscope light source
WO2022257946A1 (en) * 2021-06-07 2022-12-15 上海微觅医疗器械有限公司 Multispectral imaging system and method, and storage medium
CN116229189A (en) * 2023-05-10 2023-06-06 深圳市博盛医疗科技有限公司 Image processing method, device, equipment and storage medium based on fluorescence endoscope
CN117204796A (en) * 2023-11-09 2023-12-12 哈尔滨海鸿基业科技发展有限公司 Multispectral imaging method and device of abdominal cavity endoscope

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
JP5932748B2 (en) * 2013-09-27 2016-06-08 富士フイルム株式会社 Endoscope system
JP2015211727A (en) * 2014-05-01 2015-11-26 オリンパス株式会社 Endoscope device
WO2021226493A1 (en) * 2020-05-08 2021-11-11 The Regents Of The University Of California Label-free real-time hyperspectral endoscopy for molecular-guided cancer surgery

Patent Citations (6)

Publication number Priority date Publication date Assignee Title
CN104541153A (en) * 2012-07-02 2015-04-22 新加坡国立大学 Methods related to real-time cancer diagnostics at endoscopy utilizing fiber-optic raman spectroscopy
WO2017215284A1 (en) * 2016-06-14 2017-12-21 山东大学 Gastrointestinal tumor microscopic hyper-spectral image processing method based on convolutional neural network
WO2022257946A1 (en) * 2021-06-07 2022-12-15 上海微觅医疗器械有限公司 Multispectral imaging system and method, and storage medium
CN115202030A (en) * 2022-07-29 2022-10-18 深圳英美达医疗技术有限公司 Method and device for calibrating endoscope light source
CN116229189A (en) * 2023-05-10 2023-06-06 深圳市博盛医疗科技有限公司 Image processing method, device, equipment and storage medium based on fluorescence endoscope
CN117204796A (en) * 2023-11-09 2023-12-12 哈尔滨海鸿基业科技发展有限公司 Multispectral imaging method and device of abdominal cavity endoscope

Non-Patent Citations (2)

Title
Research on Medical Image Fusion Methods Based on Multi-Wavelet Transform; Sun Hongwei; Zhou Zhenhuan; Computer Engineering and Applications; 2006-08-11 (No. 23); 215-219+228 *
Research on Key Technologies of a Multispectral Fusion Surgical Guidance System; Dong Yanbiao; Master's Thesis Full-text Database (Medicine and Health Sciences); 2019-02-15 (No. 2); 1-67 *

Also Published As

Publication number Publication date
CN117152362A (en) 2023-12-01

Similar Documents

Publication Publication Date Title
CN117152362B (en) Multi-path imaging method, device, equipment and storage medium for endoscope multi-spectrum
Clancy et al. Surgical spectral imaging
US11257213B2 (en) Tumor boundary reconstruction using hyperspectral imaging
US20150065803A1 (en) Apparatuses and methods for mobile imaging and analysis
JP4599520B2 (en) Multispectral image processing method
Zhang et al. Tongue image analysis
Fabelo et al. A novel use of hyperspectral images for human brain cancer detection using in-vivo samples
CA3087702A1 (en) Computer diagnosis support program product and diagnosis support method
Haddad et al. Image analysis model for skin disease detection: framework
US11701015B2 (en) Computer-implemented method and system for direct photoplethysmography (PPG) with multiple sensors
EP3716136A1 (en) Tumor boundary reconstruction using hyperspectral imaging
JP7087390B2 (en) Diagnostic support device, image processing method and program
EP4216808A1 (en) Acne severity grading methods and apparatuses
JP4649965B2 (en) Health degree determination device and program
US11583198B2 (en) Computer-implemented method and system for contact photoplethysmography (PPG)
Akbari et al. Hyperspectral imaging and diagnosis of intestinal ischemia
WO2015111308A1 (en) Three-dimensional medical image display control device, operation method thereof, and three-dimensional medical image display control program
EP3995081A1 (en) Diagnosis assisting program
de Moura et al. Skin lesions classification using multichannel dermoscopic Images
JP6585623B2 (en) Biological information measuring device, biological information measuring method, and biological information measuring program
TWI803223B (en) Method for detecting object of esophageal cancer in hyperspectral imaging
JP7449004B2 (en) Hyperspectral object image detection method using frequency bands
Cruz-Guerrero et al. Hyperspectral Imaging for Cancer Applications
CN116502062A (en) Self-adaptive reconstruction method of non-contact pulse wave signals
CN118365656A (en) U-Net network esophagus segmentation method based on CT image improvement

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant