CN117314754B - Double-shot hyperspectral image imaging method and system and double-shot hyperspectral endoscope - Google Patents


Info

Publication number
CN117314754B
CN117314754B (granted publication of application CN202311597010.3A)
Authority
CN
China
Prior art keywords: image, information, image information, hyperspectral, shot
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202311597010.3A
Other languages
Chinese (zh)
Other versions
CN117314754A (en)
Inventor
黎俊均
朱能杰
许讯
Current Assignee
Shenzhen Insighters Medical Technology Co ltd
Original Assignee
Shenzhen Insighters Medical Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Insighters Medical Technology Co ltd filed Critical Shenzhen Insighters Medical Technology Co ltd
Priority: CN202311597010.3A
Publication of CN117314754A
Application granted
Publication of CN117314754B

Classifications

    • G06T 3/4053 — Scaling of whole images or parts thereof based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
    • A61B 1/00009 — Operational features of endoscopes characterised by electronic signal processing of image signals during a use of endoscope
    • A61B 1/00165 — Optical arrangements with light-conductive means, e.g. fibre optics
    • A61B 1/04 — Endoscopes combined with photographic or television appliances
    • A61B 1/07 — Illuminating arrangements using light-conductive means, e.g. optical fibres
    • A61B 5/0075 — Measuring for diagnostic purposes using light, by spectroscopy, e.g. Raman spectroscopy, infrared absorption spectroscopy
    • A61B 5/0084 — Measuring for diagnostic purposes using light, adapted for introduction into the body, e.g. by catheters
    • G06N 3/0464 — Convolutional networks [CNN, ConvNet]
    • G06T 5/50 — Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T 7/38 — Registration of image sequences
    • G06T 11/40 — Filling a planar surface by adding surface attributes, e.g. colour or texture
    • G06V 10/806 — Fusion of extracted features at the sensor, preprocessing, feature-extraction or classification level
    • G06V 10/82 — Image or video recognition or understanding using neural networks
    • G06T 2207/10068 — Endoscopic image (image acquisition modality)
    • G06T 2207/20081 — Training; Learning
    • G06T 2207/20084 — Artificial neural networks [ANN]
    • G06T 2207/20221 — Image fusion; Image merging
    • Y02A 40/10 — Adaptation technologies in agriculture

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • General Physics & Mathematics (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Veterinary Medicine (AREA)
  • Public Health (AREA)
  • Animal Behavior & Ethology (AREA)
  • Pathology (AREA)
  • Optics & Photonics (AREA)
  • Radiology & Medical Imaging (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Multimedia (AREA)
  • Databases & Information Systems (AREA)
  • Mathematical Physics (AREA)
  • Spectroscopy & Molecular Physics (AREA)
  • Signal Processing (AREA)
  • Computational Linguistics (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Endoscopes (AREA)

Abstract

The invention relates to the technical field of optical imaging, and in particular to a dual-camera hyperspectral image imaging method and system and a dual-camera hyperspectral endoscope. The scheme first acquires image information of a site to be observed from two viewing angles, then performs feature extraction on the image information to obtain the feature information of each image, next performs frame alignment on the image information according to the feature information, and finally fuses the frame-aligned image information to obtain a super-resolution image. With this scheme, images of the site from different viewing angles can be acquired by the dual-camera hyperspectral endoscope according to the condition of the tissue to be analyzed; the spectral images of the same scene from different viewing angles are analyzed and modulated, and their image information is fused, effectively improving the resolution of the output hyperspectral image and the display quality of the fused dual-camera hyperspectral image.

Description

Double-shot hyperspectral image imaging method and system and double-shot hyperspectral endoscope
Technical Field
The invention relates to the technical field of optical imaging, in particular to a double-shot hyperspectral image imaging method and system and a double-shot hyperspectral endoscope.
Background
In use, a conventional white-light endoscope can only reveal lesions with relatively obvious changes in morphology or color; tiny, flat early cancers and abnormal hyperplasia are difficult to diagnose and are often missed. Research into dye-free staining techniques has therefore continued, and such techniques are extremely important for screening early-stage tumors and cancerous lesions.
In existing optical staining, only narrow-band spectra of a limited number of bands can be extracted for analysis through a limited set of optical filters. In current electronic staining techniques, an ordinary white-light endoscope image can be analyzed by an algorithm to estimate narrow-band spectra at given wavelengths, but the accuracy of the algorithm is limited and introduces errors into the result; moreover, because the analysis can only start from an ordinary white-light image, the estimated processed image is relatively dim. In addition, because the space at the endoscope tip is small, only a small image sensor can be fitted, so the image display quality is fixed and cannot be improved. There is therefore a need for a method and apparatus that improve image display quality without being limited by algorithm accuracy.
Disclosure of Invention
The dual-camera hyperspectral image imaging method and system and the dual-camera hyperspectral endoscope provided by the invention effectively address the poor image display quality of existing endoscopic examination.
According to a first aspect, in one embodiment, a dual-shot hyperspectral image imaging method is provided, including:
acquiring image information of two visual angles of a part to be observed;
extracting the characteristics of the image information to obtain the characteristic information of each image;
performing frame alignment on the image information according to the characteristic information;
and carrying out information fusion on the image information after frame alignment to obtain a super-resolution image.
In an implementation manner, the feature extraction of the image information to obtain feature information of each image respectively includes:
respectively extracting pixel points in the image information of the two viewing angles, aligning the pixel points pixel by pixel, and constructing a correspondence map;
and extracting the feature information of each image from each correspondence map.
In an implementation manner, the frame alignment of the image information according to the feature information includes:
estimating the pixel offsets in the correspondence map according to the feature information;
and carrying out pixel-by-pixel alignment on the image information according to the offset, and realizing frame alignment through the pixel-by-pixel alignment.
In an implementation manner, the information fusion of the frame-aligned image information to obtain a super-resolution image includes:
performing demosaicing on the frame-aligned image information to obtain a channel-completed information map;
and fusing the channel-completed information map by a non-uniform interpolation method to obtain a super-resolution image.
In an implementation manner, the feature extraction of the image information to obtain feature information of each image respectively includes:
respectively extracting image blocks in the image information of the two visual angles;
matching the extracted plurality of image blocks;
and carrying out convolution operation on the matched image blocks to obtain multidimensional vectors of the image blocks corresponding to the two visual angles.
In an implementation manner, after obtaining the multidimensional vectors of the image blocks corresponding to the two views, the method further includes:
and carrying out convolution operation on the multidimensional vectors of the two visual angles by using an N1 dimensional feature matrix to respectively obtain N2 dimensional feature matrices of the two visual angles.
In an implementation manner, the information fusion of the frame-aligned image information includes:
performing fusion calculation on the N2-dimensional feature matrixes of different visual angles to obtain an N3-dimensional feature matrix;
and performing deconvolution operation on the N3-dimensional feature matrix to obtain a super-resolution image.
According to a second aspect, in one embodiment there is provided a dual-shot hyperspectral image imaging method comprising:
acquiring image information of two different angles of view;
extracting characteristic points of each piece of image information;
performing image size transformation on the characteristic points through a transformation matrix so as to perform image alignment on the image information;
and constructing a Laplacian pyramid model, fusing the aligned images through the Laplacian pyramid model to obtain a fused image, and outputting the fused image.
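The pyramid-fusion step of this second method can be sketched as follows. This is a hedged illustration, not the patented algorithm: it assumes square images whose sides are divisible by 2^levels, uses 2×2-average downsampling and nearest-neighbour upsampling in place of the usual Gaussian filtering, and fuses levels by simple averaging.

```python
import numpy as np

def down(img):
    # 2x2 average pooling: one Gaussian-pyramid step, simplified.
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def up(img):
    # Nearest-neighbour upsampling back to the finer grid.
    return img.repeat(2, axis=0).repeat(2, axis=1)

def laplacian_pyramid(img, levels):
    pyr = []
    for _ in range(levels):
        small = down(img)
        pyr.append(img - up(small))  # detail (Laplacian) band at this level
        img = small
    pyr.append(img)                  # coarsest level
    return pyr

def reconstruct(pyr):
    img = pyr[-1]
    for lap in reversed(pyr[:-1]):
        img = up(img) + lap          # add detail back, level by level
    return img

def fuse_pyramids(img_a, img_b, levels=2):
    pa = laplacian_pyramid(img_a, levels)
    pb = laplacian_pyramid(img_b, levels)
    fused = [0.5 * (la + lb) for la, lb in zip(pa, pb)]
    return reconstruct(fused)

rng = np.random.default_rng(1)
view_a = rng.random((8, 8))
view_b = rng.random((8, 8))
fused = fuse_pyramids(view_a, view_b)
```

Because every step here is linear, this simplified fusion reduces to a per-pixel average; practical implementations use Gaussian smoothing and per-level selection rules (e.g. maximum-absolute detail) so the fused image keeps the sharper detail from each view.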
According to a third aspect, an embodiment provides a dual-shot hyperspectral image imaging system, comprising:
the acquisition module is used for acquiring image information of two visual angles of the part to be observed;
the feature extraction module is used for extracting features of the image information to obtain feature information of each image respectively;
the registration module is used for carrying out frame alignment on the image information according to the characteristic information;
and the fusion module is used for carrying out information fusion on the image information after the frame alignment so as to obtain a super-resolution image.
According to a fourth aspect, there is provided in one embodiment a dual-shot hyperspectral endoscope comprising: the device comprises a light source, at least two hyperspectral image sensors, an optical lens and a processor; the at least two hyperspectral image sensors have different viewing angles;
the light source is used for irradiating the part to be observed;
the optical lens has the same field of view as the hyperspectral image sensor and is used for transmitting light from the site to be observed to the hyperspectral image sensor;
the hyperspectral image sensor is used for acquiring image information of a part to be observed and transmitting the image information to the processor;
the processor is configured to:
respectively carrying out feature extraction on the image information transmitted by each hyperspectral image sensor to obtain feature information of each image;
performing frame alignment on the image information according to the characteristic information;
and carrying out information fusion on the image information after frame alignment to obtain a super-resolution image.
According to a fifth aspect, an embodiment provides a computer readable storage medium having stored thereon a program executable by a processor to implement the method described above.
According to the dual-camera hyperspectral image imaging method/system/dual-camera hyperspectral endoscope described above, image information of a site to be observed is first acquired from two viewing angles; feature extraction is then performed on the image information to obtain the feature information of each image; the image information is frame-aligned according to the feature information; and finally the frame-aligned image information is fused to obtain a super-resolution image. With this scheme, images of the site from different viewing angles can be acquired by the dual-camera hyperspectral endoscope according to the condition of the tissue to be analyzed; the spectral images of the same scene from different viewing angles are analyzed and modulated, and their image information is fused, effectively improving the resolution of the output hyperspectral image and the display quality of the fused dual-camera hyperspectral image.
Drawings
Fig. 1 is a flowchart of a dual-shot hyperspectral image imaging method according to the present embodiment;
fig. 2 is a flowchart of acquiring image feature information according to the present embodiment;
fig. 3 is a flowchart of frame alignment of image information according to the present embodiment;
fig. 4 is a flowchart of acquiring a super-resolution image according to the present embodiment;
FIG. 5 is a second flowchart of the dual-shot hyperspectral image imaging method according to the present embodiment;
fig. 6 is a block diagram of a dual-shot hyperspectral image imaging system according to the present embodiment;
fig. 7 is a schematic structural diagram of a dual-camera hyperspectral endoscope provided in the present embodiment;
fig. 8 is a schematic structural diagram of a cross section of a dual-camera hyperspectral endoscope provided in this embodiment.
Reference numerals: 10. acquisition module; 20. feature extraction module; 30. registration module; 40. fusion module; 50. light source; 60. hyperspectral image sensor; 70. optical lens; 80. working (instrument) channel.
Detailed Description
The invention is described in further detail below with reference to the drawings by way of specific embodiments, in which like elements in different embodiments are given like reference numerals. In the following embodiments, numerous specific details are set forth in order to provide a better understanding of the present application. However, one skilled in the art will readily recognize that some of these features may be omitted, or replaced by other elements, materials, or methods, in different situations. In some instances, some operations related to the present application are not shown or described in the specification, so as not to obscure its core; a detailed description of these operations is unnecessary, as one skilled in the art can fully understand them from the description herein together with general knowledge in the art.
Furthermore, the described features, operations, or characteristics of the description may be combined in any suitable manner in various embodiments. Also, various steps or acts in the method descriptions may be interchanged or modified in a manner apparent to those of ordinary skill in the art. Thus, the various orders in the description and drawings are for clarity of description of only certain embodiments, and are not meant to be required orders unless otherwise indicated.
The numbering of the components itself, e.g. "first", "second", etc., is used herein merely to distinguish between the described objects and does not have any sequential or technical meaning. The terms "coupled" and "connected," as used herein, are intended to encompass both direct and indirect coupling (coupling), unless otherwise indicated.
Constrained by the miniaturized space at the lens end, an existing endoscope can accommodate only a relatively small image sensor, whose resolution is generally low, so the overall quality of the final image is poor. In addition, when the image acquired by the image sensor of an existing endoscope is analyzed and processed, algorithm estimation errors also make the final image relatively dim. In view of this, the present application proposes a dual-camera hyperspectral image imaging method and system and a dual-camera hyperspectral endoscope to solve the above problems.
The following describes the dual-shot hyperspectral image imaging method of the present application in detail.
As shown in fig. 1, the dual-camera hyperspectral image imaging method provided in this embodiment includes the following steps:
step 100: and acquiring image information of two visual angles of the part to be observed. Specifically, by adopting an optical lens with the same FOV (field of view) as the hyperspectral image sensor and an image sensor with the same resolution, the two hyperspectral image sensors collect observation information of two different visual angles of the same scene at the same time, and each frame of image of the different sensors at corresponding time contains mutually independent and complementary image information.
Step 200: and extracting the characteristics of the image information to obtain the characteristic information of each image.
Step 300: and carrying out frame alignment on the image information according to the characteristic information.
Step 400: and carrying out information fusion on the image information after frame alignment to obtain a super-resolution image.
According to the dual-camera hyperspectral image imaging method, image information of a site to be observed is first acquired from two viewing angles by the acquisition module; feature extraction is then performed on the image information by the feature extraction module to obtain the feature information of each image; the image information is frame-aligned by the registration module according to the feature information; and finally the frame-aligned image information is fused by the fusion module to obtain a super-resolution image. With this method, images of the site from different viewing angles can be acquired by the dual-camera hyperspectral endoscope according to the condition of the tissue to be analyzed; the spectral images of the same scene from different viewing angles are analyzed and modulated, and their image information is fused, effectively improving the resolution of the output hyperspectral image and the display quality of the fused image.
This embodiment adopts an image sensor with an integrated spectrum-recognition device: spectral-domain modulation of the incident light is achieved by a silicon-based metasurface, the sensor completes the projection measurement from the spectral domain to the electrical domain, and an algorithm performs spectral reconstruction; large-scale array integration of the metasurface further enables real-time spectral imaging. On this basis, real-time hyperspectral optical staining over the 450-750 nm range can be realized. Compared with traditional optical staining, the tissue structure to be analyzed can be examined with freely selected wavelength spectra, rather than being limited to a small number of narrow-band filters; compared with traditional electronic staining, algorithm estimation errors need not be considered, because a micro-spectrometer is integrated on each pixel of the image sensor and the spectral information of every pixel point can be obtained quickly. In addition, the hyperspectral image sensor adopted in this embodiment can automatically identify, through an image recognition algorithm, the band or region of light whose wavelength is to be modulated and analyzed. The reconfigurable micro-spectrometer surface of the image sensor can form regular or irregular regions, each performing spectral-analysis modulation on light of different bands, thereby automatically partitioning and modulating different spectral images by region and analyzing different tissues within the same image.
The dual-camera hyperspectral endoscope of this embodiment uses two adjacent hyperspectral image sensors, and fuses the images from the two sensor modules through an algorithm to improve super-resolution image quality. In addition, by analyzing and modulating the spectrum of the tissue to be analyzed through the hyperspectral image sensors, spectral images of different wavelengths can be obtained and combined for analysis according to clinical needs. For example, 415 nm light is readily absorbed by the surface blood vessels of the mucosa, 540 nm light by the capillaries of the submucosa, 555 nm light by deoxyhemoglobin, 569 nm light by carboxyhemoglobin, and 577 nm light by oxyhemoglobin. The wavelengths selected for analysis differ with the tissue observed, and so does the final spectral image; a user can combine the spectral imaging of several wavelength bands to screen early tumors and cancerous lesions, providing richer reference information for clinical disease diagnosis.
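As a hedged illustration, the absorption wavelengths listed above can be organized into a small lookup that picks, for each observation target, the nearest band a sensor can actually modulate. The dictionary keys and the 10 nm band step are assumptions made for the example, not part of the patent; the wavelength values are those stated in the description.

```python
# Absorption wavelengths named in the description (nm). Keys are our own labels.
ABSORPTION_NM = {
    "mucosal surface vessels": 415,
    "submucosal capillaries": 540,
    "deoxyhemoglobin": 555,
    "carboxyhemoglobin": 569,
    "oxyhemoglobin": 577,
}

def select_bands(targets, available_nm):
    """For each target, pick the available band nearest its absorption peak."""
    return {t: min(available_nm, key=lambda nm: abs(nm - ABSORPTION_NM[t]))
            for t in targets}

# Hypothetical sensor covering 450-750 nm (the range stated in the text),
# assumed here to offer bands in 10 nm steps.
bands = list(range(450, 751, 10))
chosen = select_bands(["oxyhemoglobin", "submucosal capillaries"], bands)
```

Note that 415 nm falls below the stated 450-750 nm range, so a nearest-band rule would clamp that target to 450 nm; a real system would flag such out-of-range requests rather than silently substitute.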
Referring to fig. 2, step 200 of this embodiment, extracting features from the image information to obtain the feature information of each image, specifically includes the following steps:
Step 201: extract the pixel points in the image information of the two viewing angles respectively, align the pixel points pixel by pixel, and construct a correspondence map.
Step 202: extract the feature information of each image from each correspondence map.
Specifically, two hyperspectral image sensors simultaneously collect observations of the same scene from two different viewing angles, and each frame from the different sensors at corresponding times contains mutually independent, complementary image information. The feature extraction module extracts the pixel points in each image: convolution operations are first applied to the image information to obtain an offset field, the offset field is expanded to obtain the offsets, the image information and the offsets are then fed into a deformable convolution for the convolution operation, and the feature information is finally output.
Referring to fig. 3, step 300 of this embodiment, frame-aligning the image information according to the feature information, includes the following steps:
Step 301: estimate the pixel offsets in the correspondence map according to the feature information.
Step 302: align the image information pixel by pixel according to the offsets; frame alignment is achieved through this pixel-by-pixel alignment.
Specifically, after the offsets are obtained as described above, each offset is a vector and is used to shift the corresponding pixel in the image information; frame alignment is achieved through this pixel-wise alignment. The deformable convolution in this embodiment can be described in general form as follows:
y(p_0) = Σ_{p_n ∈ R} w(p_n) · x(p_0 + p_n + Δp_n)

where p_0 is any pixel point in the image information; R is the regular sampling grid of the convolution kernel; p_n is the offset of each sampling position in the convolution kernel relative to the center point; w(p_n) is the weight of the convolution kernel at the corresponding position; Δp_n is the learned offset; and x(p_0 + p_n + Δp_n) is the element value of the corresponding position in the image information.
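As an illustrative aid (not part of the claimed method), the sampling rule of the deformable convolution can be sketched in a few lines of NumPy; the bilinear sampler, the 3×3 kernel size, and all function and variable names here are assumptions for demonstration only:

```python
import numpy as np

def bilinear_sample(img, y, x):
    """Sample img at fractional coordinates (y, x) with bilinear interpolation."""
    h, w = img.shape
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    y1, x1 = y0 + 1, x0 + 1
    y0c, y1c = np.clip([y0, y1], 0, h - 1)   # clamp to image bounds
    x0c, x1c = np.clip([x0, x1], 0, w - 1)
    wy, wx = y - y0, x - x0
    top = (1 - wx) * img[y0c, x0c] + wx * img[y0c, x1c]
    bot = (1 - wx) * img[y1c, x0c] + wx * img[y1c, x1c]
    return (1 - wy) * top + wy * bot

def deformable_conv_point(img, kernel, p0, offsets):
    """y(p0) = sum_n w(p_n) * x(p0 + p_n + delta_p_n) for one output pixel."""
    k = kernel.shape[0]
    r = k // 2
    out = 0.0
    for i in range(k):
        for j in range(k):
            pn = (i - r, j - r)          # regular grid offset p_n
            dy, dx = offsets[i, j]       # learned offset delta_p_n
            out += kernel[i, j] * bilinear_sample(img, p0[0] + pn[0] + dy,
                                                  p0[1] + pn[1] + dx)
    return out
```

With all offsets Δp_n set to zero, this reduces to an ordinary convolution evaluated at p_0.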
Referring to fig. 4, regarding step 400 in this embodiment: information fusion is carried out on the image information after frame alignment to obtain a super-resolution image, and the method specifically comprises the following steps:
step 401: and performing Demosaic processing on the image information after frame alignment to obtain a channel complement information diagram.
Step 402: and carrying out information fusion on the channel complement information graph by adopting a non-uniform interpolation method to obtain a super-resolution image.
Specifically, the Demosaic algorithm is an important digital image processing method that reconstructs an image sampled in a Bayer pattern into a complete color image. In this embodiment, after frame alignment is achieved, Demosaic processing is performed on the image information to complement the RGB color-channel information. Since the image information obtained after frame alignment is still a low-resolution image, its RGB color-channel information can be complemented by Demosaic processing, and the channel-complemented images are then fused using a non-uniform interpolation method. Non-uniform interpolation is a reconstruction-based image super-resolution method belonging to the spatial-domain class: the non-uniformly distributed LR image feature information is fitted, or interpolated, onto a uniform grid to obtain uniformly distributed HR image feature information, thereby achieving super-resolution image reconstruction.
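As a rough illustration of the Demosaic step, the following NumPy sketch performs a simple bilinear demosaic of an RGGB Bayer image; the RGGB phase layout and the 3×3 normalized-averaging interpolation are assumptions, since the embodiment does not fix a particular demosaicing variant:

```python
import numpy as np

def bilinear_demosaic(raw):
    """Bilinear demosaic of an RGGB Bayer raw image (H x W) -> (H x W x 3).

    Each channel keeps its sampled pixels and fills the gaps by averaging
    the available neighbors in a 3x3 window (normalized interpolation).
    """
    h, w = raw.shape
    masks = np.zeros((h, w, 3))
    masks[0::2, 0::2, 0] = 1          # R at even rows, even cols
    masks[0::2, 1::2, 1] = 1          # G at even rows, odd cols
    masks[1::2, 0::2, 1] = 1          # G at odd rows, even cols
    masks[1::2, 1::2, 2] = 1          # B at odd rows, odd cols

    out = np.zeros((h, w, 3))
    pad = lambda a: np.pad(a, 1, mode='edge')
    for c in range(3):
        vals, cnts = pad(raw * masks[:, :, c]), pad(masks[:, :, c])
        num, den = np.zeros((h, w)), np.zeros((h, w))
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):      # sum values and counts over the window
                num += vals[1 + dy:h + 1 + dy, 1 + dx:w + 1 + dx]
                den += cnts[1 + dy:h + 1 + dy, 1 + dx:w + 1 + dx]
        # keep original samples, interpolate only the missing positions
        out[:, :, c] = np.where(masks[:, :, c] > 0, raw,
                                num / np.maximum(den, 1e-9))
    return out
```

On a constant raw image every output channel is that same constant, which is a quick sanity check that no channel energy is invented.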
In this embodiment, as another implementation manner of obtaining the super-resolution image, it may be further implemented as follows:
Specifically, after the image information of the different viewing angles of the part to be observed is obtained, image blocks are extracted from the image information of the two viewing angles, related image blocks are found by explicitly computing similarity between blocks, and registration of the feature information is thereby achieved; a convolution operation is then performed on each matched image block to obtain multidimensional vectors for the image blocks of the two viewing angles. In practical application, this embodiment extracts a number of patches (image blocks) from the images acquired by the two hyperspectral image sensors for matching and convolves each matched block to obtain a multidimensional vector. After the multidimensional vectors of the image blocks of the two viewing angles are obtained, a convolution with an N1-dimensional feature matrix is applied to the vectors of the two viewing angles to obtain an N2-dimensional feature matrix for each viewing angle, realizing a nonlinear mapping into the N2-dimensional feature space. The N2-dimensional feature matrices of the different viewing angles are then fused into an N3-dimensional feature matrix, and a deconvolution operation on the N3-dimensional feature matrix yields the super-resolution image.
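The patch extraction and explicit similarity matching described above can be sketched as follows; the patch size, stride, and sum-of-squared-differences similarity are illustrative assumptions, since the embodiment does not prescribe a specific similarity measure:

```python
import numpy as np

def extract_patches(img, size, stride):
    """Slide a size x size window over img with the given stride.

    Returns the flattened patches and their top-left coordinates."""
    h, w = img.shape
    patches, coords = [], []
    for y in range(0, h - size + 1, stride):
        for x in range(0, w - size + 1, stride):
            patches.append(img[y:y + size, x:x + size].ravel())
            coords.append((y, x))
    return np.array(patches), coords

def match_patches(patches_a, patches_b):
    """For each patch of view A, find the most similar patch of view B
    by explicitly computing the sum of squared differences."""
    d = ((patches_a[:, None, :] - patches_b[None, :, :]) ** 2).sum(axis=2)
    return d.argmin(axis=1)
```

Matching an image against itself maps every patch to its own index, which verifies that the explicit similarity computation behaves as intended.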
As shown in fig. 5, the other dual-camera hyperspectral image imaging method provided in this embodiment can also effectively improve the display effect of an image without losing resolution, and specifically includes the following steps:
step 500: image information of two different angles of view is acquired.
Step 600: feature points of each image information are extracted.
Step 700: the feature points are subjected to image size transformation by a transformation matrix to perform image alignment on the image information.
Step 800: and constructing a Laplacian pyramid model, and fusing the aligned images through the Laplacian pyramid model to obtain a fused image and outputting the fused image.
In this embodiment, two optical lenses with different FOVs (fields of view), for example 60° and 120°, are used to obtain image information at any field angle between 60° and 120°. The image information is transferred to the hyperspectral image sensors, feature points in the image information are extracted, image alignment is performed by comparing sparse feature points, and matching between the differing feature points of the two images yields a transformation matrix between them for image-size transformation and alignment. Concretely, feature points are detected in the two images, the matched feature points are ranked by matching quality using the Hamming distance, the best-matching subset is retained, and a homography matrix is computed. Using this homography matrix, pixels of one image are mapped onto the other to complete the transformation and alignment.
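The Hamming-distance matching and homography estimation described above can be sketched as follows; the binary descriptors, the kept fraction of best matches, and the direct linear transform (DLT) solver are illustrative assumptions standing in for an ORB/BRIEF-style pipeline:

```python
import numpy as np

def hamming_match(desc_a, desc_b, keep_frac=0.5):
    """Match binary descriptors by Hamming distance, keep the best fraction.

    desc_a, desc_b: (N, B) / (M, B) arrays of 0/1 bits.
    Returns (i, j) index pairs, best matches first."""
    d = (desc_a[:, None, :] != desc_b[None, :, :]).sum(axis=2)  # Hamming distances
    j = d.argmin(axis=1)
    pairs = sorted(zip(range(len(desc_a)), j), key=lambda p: d[p[0], p[1]])
    return pairs[:max(1, int(len(pairs) * keep_frac))]

def homography_dlt(src, dst):
    """Estimate a 3x3 homography from >= 4 point pairs via the DLT algorithm."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.array(rows, dtype=float))
    h = vt[-1].reshape(3, 3)          # null vector = homography entries
    return h / h[2, 2]

def apply_homography(h, pt):
    """Map a pixel of one image into the coordinate frame of the other."""
    p = h @ np.array([pt[0], pt[1], 1.0])
    return p[:2] / p[2]
```

For a pure translation between the views, the estimated homography maps any point by exactly that translation, which is an easy correctness check.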
Specifically, because the two hyperspectral image sensors are arranged symmetrically in space, the geometric constraint between the images and the observed object can be exploited by multi-view geometry to obtain the parallax of each pixel, and the images are aligned along the parallax direction by warping. A Laplacian pyramid model is then constructed from the images to be fused, and each layer of the pyramid is fused according to the following formula:
L_i = G_i − g_{5×5} ∗ up(G_{i+1})

where G_i is the i-th layer of the Gaussian pyramid; G_{i+1} is the (i+1)-th layer of the Gaussian pyramid; L_i is the i-th layer of the Laplacian pyramid; ∗ represents the convolution operation; g_{5×5} represents a 5×5 convolution kernel; and up(·) denotes upsampling of the coarser layer.
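A minimal sketch of the pyramid fusion is given below; for brevity it uses a 2×2 box filter and nearest-neighbor upsampling instead of a 5×5 kernel, and assumes image dimensions divisible by 2^(levels−1). The fusion rule itself (combine each Laplacian layer, then rebuild from the coarsest level) follows the description above:

```python
import numpy as np

def downsample(img):
    """Blur with a 2x2 box filter and drop every other pixel."""
    h, w = img.shape
    h2, w2 = h // 2 * 2, w // 2 * 2
    return img[:h2, :w2].reshape(h2 // 2, 2, w2 // 2, 2).mean(axis=(1, 3))

def upsample(img, shape):
    """Nearest-neighbor expand back to the given shape."""
    up = img.repeat(2, axis=0).repeat(2, axis=1)
    return up[:shape[0], :shape[1]]

def laplacian_pyramid(img, levels):
    """L_i = G_i - up(G_{i+1}); the last level stores the coarsest Gaussian."""
    gauss = [img]
    for _ in range(levels - 1):
        gauss.append(downsample(gauss[-1]))
    lap = [gauss[i] - upsample(gauss[i + 1], gauss[i].shape)
           for i in range(levels - 1)]
    lap.append(gauss[-1])
    return lap

def fuse(img_a, img_b, levels=3):
    """Fuse two aligned images by averaging each pyramid level, then rebuild."""
    la, lb = laplacian_pyramid(img_a, levels), laplacian_pyramid(img_b, levels)
    fused = [(a + b) / 2 for a, b in zip(la, lb)]
    out = fused[-1]
    for lev in reversed(fused[:-1]):   # collapse the pyramid bottom-up
        out = upsample(out, lev.shape) + lev
    return out
```

Because each Laplacian layer stores exactly the residual removed by downsampling, fusing two identical images reconstructs the input bit-for-bit, a useful invariant when testing the pipeline.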
Finally, by outputting the fused image, a hybrid zoom function can be realized that combines the effects of electronic zoom and optical zoom, i.e., a zoom-in and zoom-out process built from the two imaging systems with different fields of view. At the same time, no resolution is lost, and the display effect of the image is effectively improved.
As shown in fig. 6, the dual-camera hyperspectral image imaging system provided in this embodiment includes: an acquisition module 10, a feature extraction module 20, a registration module 30, and a fusion module 40. The acquisition module 10 is used for acquiring image information of two visual angles of the part to be observed; the feature extraction module 20 is configured to perform feature extraction on the image information, so as to obtain feature information of each image; the registration module 30 is used for aligning the image information according to the characteristic information; the fusion module 40 is configured to perform information fusion on the frame-aligned image information to obtain a super-resolution image.
Specifically, the functions specifically implemented by the acquisition module 10, the feature extraction module 20, the registration module 30, and the fusion module 40 in the dual-shot hyperspectral image imaging system of the present embodiment are described in detail in the above dual-shot hyperspectral image imaging method embodiment, which is not described herein in detail.
Referring to fig. 7 and 8, a dual-shot hyperspectral endoscope provided in this embodiment includes: a light source 50, at least two hyperspectral image sensors 60, an optical lens 70, and a processor. Wherein the viewing angles of at least two hyperspectral image sensors 60 are different; the light source 50 is used for irradiating the part to be observed; the optical lens 70 has the same angle of view as the hyperspectral image sensor 60, and the optical lens 70 is used for transmitting the light from the part to be observed to the hyperspectral image sensor 60; the hyperspectral image sensor 60 is used for acquiring image information of a part to be observed and transmitting the image information to the processor; the processor is used for: the image information transmitted by each hyperspectral image sensor 60 is subjected to characteristic extraction to obtain the characteristic information of each image; performing frame alignment on the image information according to the characteristic information; and carrying out information fusion on the image information after frame alignment to obtain a super-resolution image.
Specifically, the light source 50 in this embodiment may illuminate from the rear end or from the front end. When a rear-end light source 50 is used, the light beam is generally transmitted through an optical fiber of a certain length so that the light emitted by the rear-end light source 50 reaches the front end for illumination; the light source 50 must then be a full-spectrum source, such as a halogen or xenon lamp. When a front-end light source 50 is used, one or more cold light sources 50 are typically fixed at the front end for illumination; these must likewise be full-spectrum sources, such as full-spectrum LED lamps. Fig. 8 shows the layout of the head end of the dual-camera hyperspectral endoscope: the two hyperspectral image sensors 60 at the top form the super-resolution imaging system, the windows on the left and right sides pass the light of the full-spectrum light sources 50, and in the middle is a working channel 80 through which medical instruments pass.
The optical lens 70 may be a zoom optical lens or a fixed-focus optical lens. A zoom lens allows the focal length to be adjusted within a certain range, enabling observation at a fixed distance with magnification; a fixed-focus lens has an unchangeable focal length and can only observe objects within a certain depth of field, but offers a larger aperture and light throughput than a zoom lens. The choice can be made according to clinical needs.
In the endoscope of this embodiment, two adjacent hyperspectral image sensors 60 capture the target image. The hyperspectral image sensor 60 is an image sensor integrated with a spectral recognition device: a silicon-based metasurface modulates the incident light in the spectral domain, completing the projection measurement from the spectral domain to the electrical domain; the spectrum is then reconstructed by an algorithm, and real-time spectral imaging is achieved through large-scale array integration of the metasurface.
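The projection from the spectral domain to the electrical domain and the algorithmic spectrum reconstruction can be viewed as a linear inverse problem; the modulation matrix, band counts, and ridge regularization below are hypothetical stand-ins for the metasurface's actual response:

```python
import numpy as np

rng = np.random.default_rng(0)

n_bands, n_filters = 30, 60           # spectral bands vs. measurement channels
A = rng.random((n_filters, n_bands))  # hypothetical per-pixel modulation matrix

# illustrative smooth ground-truth spectrum for one pixel (Gaussian around 550 nm)
wavelengths = np.linspace(400, 700, n_bands)
x_true = np.exp(-0.5 * ((wavelengths - 550) / 40) ** 2)

# projection measurement from the spectral domain to the electrical domain
y = A @ x_true

# ridge-regularized least-squares reconstruction of the spectrum
lam = 1e-6
x_rec = np.linalg.solve(A.T @ A + lam * np.eye(n_bands), A.T @ y)
```

With more measurement channels than bands and a well-conditioned modulation matrix, the reconstruction recovers the spectrum almost exactly; real metasurface responses would require calibrated A and stronger regularization against noise.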
In use, the working flow of the dual-shot hyperspectral endoscope of this embodiment is as follows. As shown in fig. 7, the observed object is illuminated by the light source 50; part of the light is reflected from the object into the optical lens 70 and collected on the hyperspectral image sensor 60. The light detection layer of the hyperspectral image sensor 60 modulates the incident light to obtain a modulated spectrum, the image sensing layer senses the light intensity of the modulated spectrum, and the photoelectric conversion module converts it into an electrical signal output to the processor, where analog-to-digital conversion, filtering, and other processing convert it into an image. For the specific processing performed by the processor, refer to the super-resolution imaging method in the above embodiments, which is not repeated here.
A computer-readable storage medium provided in this embodiment has a program stored thereon, the program being executable by a processor to implement the above-described method. In view of the above detailed description of the super-resolution imaging method in the above embodiments, the present embodiment is not described herein in detail.
Those skilled in the art will appreciate that all or part of the functions of the methods in the above embodiments may be implemented by hardware or by a computer program. When implemented by a computer program, the program may be stored in a computer-readable storage medium, which may include read-only memory, random-access memory, a magnetic disk, an optical disk, a hard disk, and the like; the functions are realized when the program is executed by a computer. For example, the program may be stored in the memory of a device, and all or part of the functions are realized when the processor executes the program in memory. The program may also be stored in a storage medium such as a server, another computer, a magnetic disk, an optical disk, a flash disk, or a removable hard disk, and implemented by downloading or copying it into the memory of a local device, or by updating the system version of the local device, with the functions likewise realized when the processor executes the program in memory.
The foregoing description of the invention has been presented for purposes of illustration and description, and is not intended to be limiting. Several simple deductions, modifications or substitutions may also be made by a person skilled in the art to which the invention pertains, based on the idea of the invention.

Claims (10)

1. A dual-shot hyperspectral endoscope, comprising: the device comprises a light source, at least two hyperspectral image sensors, an optical lens and a processor; the at least two hyperspectral image sensors have different viewing angles;
the light source is used for irradiating the part to be observed;
the optical lens has the same angle of view as the hyperspectral image sensor and is used for transmitting light from the part to be observed to the hyperspectral image sensor, and the optical lens adopts a zoom optical lens or a fixed focus optical lens;
the hyperspectral image sensor is used for acquiring image information of a part to be observed and transmitting the image information to the processor; the two hyperspectral image sensors are positioned on the same plane;
the processor is configured to:
respectively carrying out convolution and polynomial multiplication on the image information transmitted by each hyperspectral image sensor to obtain an offset field, expanding the offset field to obtain an offset, inputting the image information and the offset into a deformable convolution to carry out convolution operation, and outputting characteristic information of each image;
performing frame alignment on the image information according to the characteristic information;
and carrying out information fusion on the image information after frame alignment to obtain a super-resolution image.
2. A method of imaging a dual-shot hyperspectral image using the dual-shot hyperspectral endoscope of claim 1, comprising:
acquiring image information of two visual angles of a part to be observed;
extracting the characteristics of the image information to obtain the characteristic information of each image; performing convolution and polynomial multiplication on the image information to obtain an offset field, expanding the offset field to obtain an offset, inputting the image information and the offset into a deformable convolution to perform convolution operation, and outputting the characteristic information;
performing frame alignment on the image information according to the characteristic information;
and carrying out information fusion on the image information after frame alignment to obtain a super-resolution image.
3. The method for imaging a dual-shot hyperspectral image as claimed in claim 2, wherein the feature extraction of the image information to obtain feature information of each image respectively includes:
respectively extracting pixel points in the image information of the two visual angles, aligning the pixel points pixel by pixel, and constructing a corresponding graph;
and extracting the characteristic information of each image from each corresponding image.
4. The dual-shot hyperspectral image imaging method as claimed in claim 3, wherein the frame alignment of the image information based on the feature information includes:
estimating the pixel offset in the corresponding graph according to the characteristic information;
and carrying out pixel-by-pixel alignment on the image information according to the offset, and realizing frame alignment through the pixel-by-pixel alignment.
5. The method of imaging a dual-shot hyperspectral image as claimed in claim 4, wherein the performing information fusion on the frame-aligned image information to obtain a super-resolution image includes:
performing Demosaic processing on the image information after the frame alignment to obtain a channel complement information diagram;
and carrying out information fusion on the channel complement information graph by adopting a non-uniform interpolation method to obtain a super-resolution image.
6. The method for imaging a dual-shot hyperspectral image as claimed in claim 2, wherein the feature extraction of the image information to obtain feature information of each image respectively includes:
respectively extracting image blocks in the image information of the two visual angles;
matching the extracted plurality of image blocks;
and carrying out convolution operation on the matched image blocks to obtain multidimensional vectors of the image blocks corresponding to the two visual angles.
7. The method for imaging a dual-shot hyperspectral image as claimed in claim 6, wherein after obtaining the multidimensional vector of the image block corresponding to the two view angles, further comprises:
and carrying out convolution operation on the multidimensional vectors of the two visual angles by using an N1 dimensional feature matrix to respectively obtain N2 dimensional feature matrices of the two visual angles.
8. The method for imaging a dual-shot hyperspectral image as claimed in claim 7, wherein the information fusion of the frame-aligned image information includes:
performing fusion calculation on the N2-dimensional feature matrixes of different visual angles to obtain an N3-dimensional feature matrix;
and performing deconvolution operation on the N3-dimensional feature matrix to obtain a super-resolution image.
9. A method of imaging a dual-shot hyperspectral image using the dual-shot hyperspectral endoscope of claim 1, comprising:
acquiring image information of two different angles of view;
extracting characteristic points of each piece of image information;
performing image alignment by comparing sparse feature points, then matching according to different feature points of the two images, and solving a transformation matrix between different feature points of the two images for performing image size transformation alignment;
and constructing a Laplacian pyramid model, fusing the aligned images through the Laplacian pyramid model to obtain a fused image, and outputting the fused image.
10. A dual-shot hyperspectral image imaging system employing the dual-shot hyperspectral image imaging method of claim 2, comprising:
the acquisition module is used for acquiring image information of two visual angles of the part to be observed;
the characteristic extraction module is used for performing convolution and polynomial multiplication on the image information to obtain an offset field, expanding the offset field to obtain an offset, inputting the image information and the offset into a deformable convolution to perform convolution operation, and outputting characteristic information;
the registration module is used for carrying out frame alignment on the image information according to the characteristic information;
and the fusion module is used for carrying out information fusion on the image information after the frame alignment so as to obtain a super-resolution image.
CN202311597010.3A 2023-11-28 2023-11-28 Double-shot hyperspectral image imaging method and system and double-shot hyperspectral endoscope Active CN117314754B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311597010.3A CN117314754B (en) 2023-11-28 2023-11-28 Double-shot hyperspectral image imaging method and system and double-shot hyperspectral endoscope

Publications (2)

Publication Number Publication Date
CN117314754A CN117314754A (en) 2023-12-29
CN117314754B true CN117314754B (en) 2024-03-19

Family

ID=89288719

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311597010.3A Active CN117314754B (en) 2023-11-28 2023-11-28 Double-shot hyperspectral image imaging method and system and double-shot hyperspectral endoscope

Country Status (1)

Country Link
CN (1) CN117314754B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118154430A (en) * 2024-05-10 2024-06-07 清华大学 Space-time-angle fusion dynamic light field intelligent imaging method

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112243091A (en) * 2020-10-16 2021-01-19 微创(上海)医疗机器人有限公司 Three-dimensional endoscope system, control method, and storage medium
CN114529477A (en) * 2022-02-28 2022-05-24 山东威高手术机器人有限公司 Binocular endoscope with high dynamic range, system and imaging method
WO2022111368A1 (en) * 2020-11-26 2022-06-02 上海健康医学院 Deep-learning-based super-resolution reconstruction method for microscopic image, and medium and electronic device

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110292258A1 (en) * 2010-05-28 2011-12-01 C2Cure, Inc. Two sensor imaging systems
CN111340866B (en) * 2020-02-26 2024-03-01 腾讯科技(深圳)有限公司 Depth image generation method, device and storage medium

Also Published As

Publication number Publication date
CN117314754A (en) 2023-12-29


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant