CN115153397A - Imaging method for endoscopic camera system and endoscopic camera system


Publication number
CN115153397A
Authority: CN (China)
Prior art keywords: region, interest, image, determining, tissue
Prior art date
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Application number: CN202210689038.9A
Other languages: Chinese (zh)
Inventors: 张俊鹏, 李惠川, 吴晓华
Current Assignee: Shenzhen Mindray Bio Medical Electronics Co Ltd; Wuhan Mindray Medical Technology Research Institute Co Ltd
Original Assignee: Shenzhen Mindray Bio Medical Electronics Co Ltd; Wuhan Mindray Medical Technology Research Institute Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Mindray Bio Medical Electronics Co Ltd and Wuhan Mindray Medical Technology Research Institute Co Ltd
Priority to CN202210689038.9A
Publication of CN115153397A

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 1/00: Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; illuminating arrangements therefor
    • A61B 1/00002: Operational features of endoscopes
    • A61B 1/00004: Operational features characterised by electronic signal processing
    • A61B 1/00009: Electronic signal processing of image signals during a use of the endoscope
    • A61B 1/000095: Electronic signal processing of image signals for image enhancement
    • A61B 1/00043: Operational features provided with output arrangements
    • A61B 1/00045: Display arrangement
    • A61B 1/00131: Accessories for endoscopes
    • A61B 1/04: Endoscopes combined with photographic or television appliances
    • A61B 1/043: Combined appliances for fluorescence imaging
    • A61B 1/045: Control thereof
    • A61B 1/046: Combined appliances for infrared imaging
    • A61B 1/06: Endoscopes with illuminating arrangements
    • A61B 1/0638: Illuminating arrangements providing two or more wavelengths
    • A61B 1/0661: Endoscope light sources
    • A61B 1/0684: Endoscope light sources using light emitting diodes [LED]
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/20: Image preprocessing
    • G06V 10/25: Determination of region of interest [ROI] or a volume of interest [VOI]
    • G06V 10/70: Recognition or understanding using pattern recognition or machine learning
    • G06V 10/764: Recognition using classification, e.g. of video objects
    • G06V 10/77: Processing image or video features in feature spaces; data integration or data reduction, e.g. principal component analysis [PCA], independent component analysis [ICA], self-organising maps [SOM]; blind source separation
    • G06V 10/774: Generating sets of training patterns; bootstrap methods, e.g. bagging or boosting
    • G06V 10/82: Recognition using neural networks

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Surgery (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Optics & Photonics (AREA)
  • Molecular Biology (AREA)
  • Radiology & Medical Imaging (AREA)
  • Pathology (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Biophysics (AREA)
  • Animal Behavior & Ethology (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Theoretical Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Software Systems (AREA)
  • Signal Processing (AREA)
  • Microelectronics & Electronic Packaging (AREA)
  • Endoscopes (AREA)

Abstract

An imaging method for an endoscopic camera system, and an endoscopic camera system. The method comprises: performing endoscopic imaging through an endoscope to obtain an endoscopic image; determining a tissue region and an instrument region in the endoscopic image; determining a region of interest in the endoscopic image from the tissue region and the instrument region, the region of interest comprising a portion of the tissue region and/or a portion of the instrument region; and adjusting control parameters and/or image parameters of the endoscopic camera system according to the region of interest to at least enhance the image quality of the region of interest. By adjusting the control parameters and/or image parameters of the endoscopic camera system according to the region of interest in the endoscopic image, the invention preferentially ensures that the region of interest has higher image quality.

Description

Imaging method for endoscopic imaging system and endoscopic imaging system
Technical Field
The present invention relates to the field of medical devices, and more particularly, to an imaging method for an endoscopic imaging system and an endoscopic imaging system.
Background
In surgical procedures, the quality of the endoscopic image, in particular the clarity of the field of view and the uniformity of its brightness, is of great importance to the operating physician. In use, however, factors such as specular reflection from organs, tissues, or medical instruments, or the presence of white gauze, often cause uneven brightness, overexposure, and poor focus in the operating area. Image information is thereby lost, creating hidden risks for the physician's operation and for patient safety.
Conventional endoscopic systems typically adjust the image globally. For example, during exposure control, the overall brightness is often reduced so that image details of tissue in an overexposed area can be shown clearly. However, when the operating region the physician cares about does not intersect the overexposed tissue, this approach also reduces the brightness and definition of the operating region, degrading the physician's actual operating experience.
Disclosure of Invention
In this summary, concepts in a simplified form are introduced that are further described in the detailed description. This summary of the invention is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.
A first aspect of embodiments of the present invention provides an imaging method for an endoscopic imaging system, including:
performing endoscopic imaging through an endoscope to obtain an endoscopic image;
determining a tissue region and an instrument region in the endoscopic image;
determining a region of interest in the endoscopic image from the tissue region and the instrument region; the region of interest comprises a portion of the tissue region and/or a portion of the instrument region;
adjusting control parameters and/or image parameters of the endoscopic camera system in accordance with the region of interest to at least enhance image quality of the region of interest.
In one embodiment, adjusting control parameters and/or image parameters of the endoscopic camera system in accordance with the region of interest to at least enhance image quality of the region of interest comprises:
setting different metering weights for the region of interest and the region of non-interest, wherein the metering weight of the region of interest is higher than that of the region of non-interest; the region of non-interest is the region outside the region of interest in the endoscopic image;
determining a photometric value of the endoscopic image according to the metering weights of the region of interest and the region of non-interest;
and adjusting the exposure parameters of endoscopic imaging according to the photometric value and a target photometric value.
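The weighted metering scheme described in this embodiment can be sketched as follows. This is a minimal illustration, not the patented implementation: the weight values, function names, and the proportional exposure-adjustment step are all assumptions.

```python
import numpy as np

def weighted_photometric_value(luma, roi_mask, w_roi=0.8, w_bg=0.2):
    """Weighted photometric value of a frame: the region of interest is
    metered with a higher weight than the region of non-interest.

    luma     : 2-D array of pixel luminance values
    roi_mask : boolean array, True inside the region of interest
    """
    roi_mean = luma[roi_mask].mean()
    bg_mean = luma[~roi_mask].mean()
    return (w_roi * roi_mean + w_bg * bg_mean) / (w_roi + w_bg)

def adjust_exposure(current_exposure, photometric_value, target_value,
                    gain=0.5, min_exp=1.0, max_exp=1000.0):
    """Step the exposure parameter toward the target photometric value.
    A simple proportional step; a real system would use a tuned controller."""
    error = target_value - photometric_value
    new_exposure = current_exposure * (1.0 + gain * error / max(target_value, 1e-6))
    return float(np.clip(new_exposure, min_exp, max_exp))
```

Because the ROI dominates the weighted sum, an overexposed background pulls the exposure down far less than it would under uniform global metering.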
In one embodiment, adjusting control parameters and/or image parameters of the endoscopic camera system in accordance with the region of interest to at least enhance image quality of the region of interest comprises: and performing focusing adjustment of endoscopic imaging according to the region of interest so as to focus on the region of interest.
In one embodiment, performing focus adjustment of endoscopic imaging according to the region of interest to focus on the region of interest comprises:
setting different focusing weights for the region of interest and the region of non-interest, wherein the focusing weight of the region of interest is higher than that of the region of non-interest; the region of non-interest is the region outside the region of interest in the endoscopic image;
determining a focus value of the endoscopic image according to the focusing weights of the region of interest and the region of non-interest;
and controlling the endoscopic camera system to perform focus adjustment according to the focus value.
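A weighted focus value of this kind can be sketched with a simple gradient-based sharpness measure. The squared-gradient measure, the weight values, and the function names are illustrative assumptions; the patent does not specify a focus metric.

```python
import numpy as np

def sharpness_map(gray):
    """Per-pixel focus measure: squared gradient magnitude (higher = sharper)."""
    gy, gx = np.gradient(gray.astype(float))
    return gx * gx + gy * gy

def weighted_focus_value(gray, roi_mask, w_roi=0.9, w_bg=0.1):
    """Combine the mean sharpness of the region of interest and the region of
    non-interest, weighting the ROI higher, so that an autofocus search
    prefers lens positions that sharpen the ROI."""
    s = sharpness_map(gray)
    roi = s[roi_mask].mean() if roi_mask.any() else 0.0
    bg = s[~roi_mask].mean() if (~roi_mask).any() else 0.0
    return w_roi * roi + w_bg * bg
```

An autofocus loop would evaluate this value at candidate lens positions and move toward the maximum; the ROI weighting biases that maximum toward focus on the region of interest.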
In one embodiment, determining the tissue region and the instrument region in the endoscopic image comprises:
determining tissue and instrument regions in the endoscopic image based on a trained machine learning model or neural network model.
In one embodiment, the training process of the machine learning model or the neural network model includes at least:
acquiring sample image data, wherein the instrument region and instrument type information are annotated in the sample image data;
constructing a training data set from the sample image data annotated with the instrument region and the instrument type information;
and training based on the training data set to obtain the machine learning model or the neural network model.
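A minimal sketch of how such annotated samples might be organized into a training data set. The `Sample` structure, the box format, and the label names are illustrative assumptions; the patent does not specify a data layout or model architecture.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Sample:
    image_path: str
    # Annotated instrument regions as (x, y, width, height) boxes,
    # paired one-to-one with instrument type labels, e.g. "grasper".
    instrument_boxes: List[Tuple[int, int, int, int]]
    instrument_types: List[str]

def build_training_set(samples: List[Sample]):
    """Pair each image with its (box, type) annotations; in practice the
    resulting dataset would be fed to a detection or segmentation network."""
    dataset = []
    for s in samples:
        if len(s.instrument_boxes) != len(s.instrument_types):
            raise ValueError(f"label mismatch in {s.image_path}")
        dataset.append((s.image_path,
                        list(zip(s.instrument_boxes, s.instrument_types))))
    return dataset
```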
In one embodiment, determining a region of interest in the endoscopic image from the tissue region and the instrument region comprises:
according to the machine learning model or the neural network model, determining an instrument area and instrument type information in the endoscope image, determining the end part of an execution end of a target instrument area according to the instrument type information, and determining the region of interest by taking the end part of the execution end as a center.
A second aspect of the embodiments of the present invention provides an imaging method for an endoscopic imaging system, including:
performing endoscopic imaging through an endoscope to obtain a first endoscopic image;
determining a region of interest in the first endoscopic image based on a trained machine learning model or a neural network model;
adjusting control parameters and/or image parameters of the endoscopic camera system in accordance with the region of interest to at least enhance image quality of the region of interest.
In one embodiment, adjusting control parameters and/or image parameters of the endoscopic camera system in accordance with the region of interest to at least enhance image quality of the region of interest comprises:
setting different metering weights for the region of interest and the region of non-interest, wherein the metering weight of the region of interest is higher than that of the region of non-interest; the region of non-interest is the region outside the region of interest in the endoscopic image;
determining a photometric value of the first endoscopic image according to the metering weights of the region of interest and the region of non-interest;
adjusting the exposure parameters of endoscopic imaging according to the photometric value and a target photometric value;
or, adjusting control parameters and/or image parameters of the endoscopic camera system according to the region of interest to at least enhance the image quality of the region of interest comprises: performing focus adjustment of endoscopic imaging according to the region of interest so as to focus on the region of interest.
In one embodiment, the determining the region of interest in the first endoscopic image based on the trained machine learning model or neural network model comprises:
identifying a non-tissue region in the first endoscopic image based on a trained machine learning model or neural network model, and obtaining category information of the non-tissue region, wherein the machine learning model or neural network model is trained on endoscopic image data samples annotated with the category information of the non-tissue region;
determining the region of interest according to the category information of the non-tissue region;
alternatively, the determining a region of interest in the first endoscopic image based on the trained machine learning model or neural network model comprises:
identifying a tissue region in the first endoscopic image based on a trained machine learning model or neural network model, and obtaining category information of the tissue region, wherein the machine learning model or neural network model is trained on endoscopic image data samples annotated with the category information of the tissue region;
and determining the region of interest according to the category information of the tissue region.
In one embodiment, determining the region of interest based on the category information of the non-tissue region comprises:
determining a target non-tissue region having target category information;
determining the region of interest around the target non-tissue region;
or, determining the region of interest according to the category information of the tissue region, including:
determining a target tissue region having target category information;
determining the region of interest around the target tissue region.
In one embodiment, determining the region of interest around the target non-tissue region comprises:
determining a contour of the region of interest according to the target non-tissue region;
determining an area located inside the contour as the region of interest;
alternatively, determining the region of interest around the target tissue region comprises:
determining a contour of the region of interest from the target tissue region;
determining an area located inside the contour as the region of interest.
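One simple way to realize "determining a contour of the region of interest from the target region and taking the area inside it as the ROI" is to expand the target region's bounding box by a margin. The bounding-box-plus-margin choice and the margin value are assumptions; any contour-growing scheme would fit this embodiment equally well.

```python
import numpy as np

def roi_from_region(region_mask, margin=20):
    """Expand the target (tissue or non-tissue) region's bounding box by a
    margin; everything inside that contour becomes the region of interest."""
    ys, xs = np.nonzero(region_mask)
    if ys.size == 0:                      # no target region detected
        return np.zeros_like(region_mask, dtype=bool)
    h, w = region_mask.shape
    y0, y1 = max(ys.min() - margin, 0), min(ys.max() + margin + 1, h)
    x0, x1 = max(xs.min() - margin, 0), min(xs.max() + margin + 1, w)
    roi = np.zeros_like(region_mask, dtype=bool)
    roi[y0:y1, x0:x1] = True
    return roi
```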
In one embodiment, the category information of the non-tissue region includes instrument category information and auxiliary item category information, and the target non-tissue region includes a target instrument region having target instrument category information.
In one embodiment, the method further comprises:
and adjusting the control parameters and/or image parameters of the endoscope camera system according to the region of interest to obtain a second endoscope image.
A third aspect of the embodiments of the present invention provides an endoscopic camera system including a light source, a light guide bundle, an endoscope, a cable, a camera host, and a camera. The light source is connected to the endoscope through the light guide bundle; one end of the camera is connected to the endoscope, and the other end is connected to the camera host through the cable; the camera host is configured to execute the method described above.
According to the imaging method for the endoscope camera system and the endoscope camera system, the control parameters and/or the image parameters of the endoscope camera system are adjusted according to the region of interest in the endoscope image, so that the region of interest can be preferentially ensured to have high image quality.
Drawings
In order to illustrate the technical solutions in the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention; those skilled in the art can obtain other drawings from them without inventive labor.
In the drawings:
FIG. 1 shows a schematic block diagram of an endoscopic camera system according to an embodiment of the present invention;
fig. 2 shows a schematic flow diagram of an imaging method for an endoscopic camera system according to an embodiment of the present invention;
FIG. 3 shows a schematic flow diagram of an imaging method for an endoscopic camera system according to an embodiment of the present invention;
FIG. 4 shows a schematic diagram of an operating region according to an embodiment of the invention;
fig. 5 shows a schematic flow diagram of an imaging method for an endoscopic camera system according to another embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions, and advantages of the present invention more apparent, exemplary embodiments according to the present invention are described in detail below with reference to the accompanying drawings. It is to be understood that the described embodiments are only some, not all, of the embodiments of the invention, and that the invention is not limited to the example embodiments described herein. All other embodiments obtained by a person skilled in the art from the embodiments described herein without inventive step shall fall within the scope of protection of the invention.
In the following description, numerous specific details are set forth in order to provide a more thorough understanding of the present invention. It will be apparent, however, to one skilled in the art, that the present invention may be practiced without one or more of these specific details. In other instances, well-known features have not been described in order to avoid obscuring the present invention.
It is to be understood that the present invention may be embodied in many different forms and should not be construed as being limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. As used herein, the term "and/or" includes any and all combinations of the associated listed items.
In order to provide a thorough understanding of the present invention, a detailed structure will be set forth in the following description in order to explain the present invention. Alternative embodiments of the invention are described in detail below, however, the invention may be practiced in other embodiments that depart from these specific details.
Next, an endoscopic imaging system according to an embodiment of the present application will be described first with reference to fig. 1, and fig. 1 shows a schematic structural block diagram of an endoscopic imaging system 100 according to an embodiment of the present invention.
As shown in fig. 1, the endoscopic imaging system 100 includes a light source 110, a light guide bundle 120, an endoscope 130, a cable 140, an imaging host 150, and a camera 160, wherein the light source 110 is connected to the endoscope 130 through the light guide bundle 120, one end of the camera 160 is connected to the endoscope 130, and the other end of the camera 160 is connected to the imaging host 150 through the cable 140.
The light source 110 is used for providing an illumination light source to the observed part. The light source 110 may include a visible light source and a special light source. The light source may illustratively be an LED light source, may provide a plurality of monochromatic lights of different wavelength ranges, a combination of a plurality of monochromatic lights, or a broad spectrum white light source. The special light source may be a laser light source corresponding to a fluorescent reagent, for example, near infrared light. In some embodiments, a fluorescent reagent is injected into the site to be observed before imaging with the endoscopic camera system, and the fluorescent reagent absorbs the laser light generated by the laser light source to generate fluorescence.
The endoscope 130 includes a scope tube, an illumination light path, and an imaging light path, the latter two being disposed within the scope tube. The front end of the scope tube is inserted into the human body and extended to the site to be examined; the rear end is provided with a mounting portion. The tube may be rigid or flexible. The illumination light path is coupled to the light guide bundle 120 and directs the light generated by the light source 110 onto the site to be examined on the target object; the imaging light path is connected to one end of the camera 160, acquires the optical signal reflected or excited by the site to be examined, and transmits it to the camera 160. Different light sources produce different optical signals: for example, when visible light illuminates the site to be examined, the site directly reflects it, yielding a visible light signal; when special light illuminates the site, an excited special light signal is obtained.
The other end of the camera 160 is connected to the camera host 150 through the cable 140. The camera 160 includes an image sensor for converting an optical signal into an electrical signal, and the camera 160 may further perform amplification, filtering, and other processing on the electrical signal output by the sensor. The image signal generated by the camera 160 is transmitted to the camera host 150 through the cable 140 for processing. In some embodiments, camera 160 may also send image signals to camera host 150 by wireless transmission.
Further, the camera 160 further includes a focusing element for performing optical path shaping on the optical signal to adjust an imaging focal length of the camera 160.
In some embodiments, a processor is disposed in the camera host 150, and the processor acquires the image signal output by the camera 160 and processes the image signal to output an endoscopic image of the portion to be inspected, including a visible light image or a special light image.
Illustratively, the endoscopic camera system 100 further comprises a display 170, and the camera host 150 is connected to the display 170 through a video connection line for transmitting the endoscopic image to the display 170 for display.
It should be noted that fig. 1 is only an example of the endoscopic camera system 100 and does not limit it; the endoscopic camera system 100 may include more or fewer components than those shown in fig. 1, may combine some components, or may include different components. For example, it may further include a dilator, a smoke control device, input/output devices, network access devices, and the like.
An imaging method for an endoscopic imaging system according to an embodiment of the present invention is described below with reference to fig. 2. The method may be implemented by the endoscopic camera system described with reference to fig. 1, and may specifically be implemented by a camera host of the endoscopic camera system. Fig. 2 is a schematic flow chart of an imaging method 200 for an endoscopic imaging system in an embodiment of the present invention, which specifically includes the following steps:
in step S210, performing endoscopic imaging through an endoscope to obtain an endoscopic image;
in step S220, a tissue region and an instrument region in the endoscopic image are determined;
in step S230, determining a region of interest in the endoscopic image according to the tissue region and the instrument region; the region of interest comprises a portion of the tissue region and/or a portion of the instrument region;
in step S240, control parameters and/or image parameters of the endoscopic camera system are adjusted according to the region of interest to enhance at least the image quality of the region of interest.
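The four steps S210 to S240 can be sketched as one iteration of a processing loop. This is a minimal illustration only; the callables `segment`, `locate_roi`, and `adjust` are hypothetical stand-ins for the system's actual components, not an API defined in this disclosure:

```python
# Minimal sketch of the S210-S240 pipeline; all callables are
# hypothetical stand-ins, not the actual system interfaces.

def imaging_step(frame, segment, locate_roi, adjust):
    """One iteration of the method-200 flow.

    frame      -- endoscopic image obtained by imaging (S210)
    segment    -- returns (tissue_mask, instrument_mask)  (S220)
    locate_roi -- derives the region of interest          (S230)
    adjust     -- updates control/image parameters so the
                  ROI gets priority image quality         (S240)
    """
    tissue, instrument = segment(frame)           # S220
    roi = locate_roi(frame, tissue, instrument)   # S230
    return adjust(frame, roi)                     # S240
```

In an actual system this loop would run per frame, feeding the adjusted parameters back into the next acquisition.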
The imaging method 200 for an endoscopic camera system according to the embodiment of the invention adjusts the control parameters and/or image parameters of the endoscopic camera system according to the region of interest in the endoscopic image, thereby preferentially ensuring high image quality in the region of interest.
Specifically, in step S210, a to-be-observed portion is illuminated by a light source of the endoscope imaging system, an imaging light path of the endoscope acquires an optical signal reflected or excited by the to-be-observed portion and transmits the optical signal to the camera, the camera performs photoelectric conversion on the optical signal to obtain an image signal, and the image signal is transmitted to the camera host and processed by the camera host.
The endoscopic image may be a visible light image or a special light image. When the part to be observed is illuminated by different light sources, different light signals are generated. For example, under a visible light source, the part to be observed directly reflects the light to produce a visible light signal, from which a visible light image is obtained. Under a special light source, the part to be observed can be excited to generate a special light signal, from which a special light image is obtained. The special light source may emit infrared light, near-infrared light, ultraviolet light, visible light, and the like; the special light excites a dye at the part to be observed to produce an excited special light signal, and thus a special light image, which may include an infrared fluorescence image, an ultraviolet fluorescence image, a near-infrared fluorescence image, a visible light fluorescence image, and the like.
In step S220, a tissue region and an instrument region in the endoscopic image are determined. Illustratively, the instrument region may be a region of a surgical instrument, including but not limited to a scalpel, forceps, or the like. The instrument region may also include areas of auxiliary items such as gauze. Illustratively, the instrument region may be the non-tissue region other than the tissue region in the endoscopic image. When determining the tissue region and the instrument region, the instrument region may be determined first and the background region other than the instrument region taken as the tissue region; alternatively, the tissue region may be determined first and the background region other than the tissue region taken as the instrument region; alternatively, the tissue region and the instrument region may be determined separately.
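Taking one region as the background complement of the other can be sketched with binary masks. This is a simplified illustration, not the patent's implementation:

```python
def complement_mask(mask):
    """Return the background of a binary mask: every pixel that is not
    in the given region belongs to the complementary region."""
    return [[1 - px for px in row] for row in mask]

# If only the instrument mask is known, the tissue region can be taken
# as its background (and vice versa).
instrument = [[0, 1],
              [0, 1]]
tissue = complement_mask(instrument)   # [[1, 0], [1, 0]]
```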
In addition, when the tissue region and the instrument region are determined in the endoscopic image, only the tissue region and the instrument region may be distinguished, or different types of tissue regions or different types of instrument regions may be specifically determined.
In some embodiments, the tissue region and the instrument region in the endoscopic image may be determined based on a trained machine learning model or neural network model. The model may be one that determines the instrument region in the endoscopic image; after the instrument region is determined by the model, the region other than the instrument region may be determined as the tissue region. Alternatively, the model may be one that determines the tissue region in the endoscopic image; after the tissue region is determined by the model, the region other than the tissue region may be determined as the instrument region. Alternatively, different machine learning models or neural network models may be used to determine the instrument region and the tissue region in the endoscopic image, respectively.
The machine learning model is mainly used to extract image features from different areas of the endoscopic image, classify those features with a classifier, and determine the tissue region or instrument region according to the feature class. Illustratively, when extracting image features, an image block of the surrounding neighborhood may be taken for each pixel in the endoscopic image, and feature extraction performed on each image block; the extracted features may be conventional features such as PCA (principal component analysis), LDA (linear discriminant analysis), Haar features, or textures. The extracted image features are then classified by classifiers such as KNN, SVM, or random forest to determine whether the pixel corresponding to the current image block belongs to the tissue region or the instrument region, thereby determining the tissue region or the instrument region. To train the machine learning model, sample image data labeled with instrument regions and instrument category information may be obtained, a training data set constructed from that sample data, and the model trained on the training data set.
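A toy version of this patch-classification scheme can be sketched as follows. Mean patch intensity is used as a stand-in feature and a 1-nearest-neighbour rule as the classifier; a real system would use PCA/LDA/Haar/texture features and an SVM or random forest as described above, and the training values here are invented for illustration:

```python
def patch_feature(image, x, y, r=1):
    """Mean intensity of the (2r+1)x(2r+1) neighbourhood around (x, y),
    a stand-in for PCA/LDA/Haar/texture features."""
    h, w = len(image), len(image[0])
    vals = [image[j][i]
            for j in range(max(0, y - r), min(h, y + r + 1))
            for i in range(max(0, x - r), min(w, x + r + 1))]
    return sum(vals) / len(vals)

def knn_label(feature, training):
    """1-NN over (feature, label) pairs, labels 'tissue'/'instrument'."""
    return min(training, key=lambda fl: abs(fl[0] - feature))[1]

# Bright metal instrument vs. darker tissue, as labelled training data.
train = [(200.0, "instrument"), (80.0, "tissue")]
img = [[80, 80, 210],
       [80, 80, 215],
       [80, 80, 220]]
labels = [[knn_label(patch_feature(img, x, y), train)
           for x in range(3)] for y in range(3)]
```

Per-pixel classification of every neighbourhood block yields the tissue/instrument partition of the whole image.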
The neural network model may be a bounding-box-based deep learning model. It requires a large amount of sample image data and corresponding annotations for training. Illustratively, sample image data labeled with instrument regions and instrument category information is obtained, a training data set is constructed from it, a neural network is built by stacking convolutional layers and fully connected layers, and feature learning and parameter regression are performed on the training data set to train the network. The trained neural network can then be used to determine the instrument region. Specifically, the sample image data is fed into the pre-built network and the network's loss function is optimized until the network converges; during training, the network learns to identify the instrument region and its category information from the endoscopic image.
Illustratively, the network architecture mainly comprises convolutional layers, activation layers, pooling layers, and upsampling or deconvolution (transposed convolution) layers. Relevant features are extracted from the image by the shallow convolutional layers; the feature map is then upsampled back to the original image size by the deconvolution layers, yielding an output image of the same size as the input that directly segments the instrument region and its class information. Common networks of this type include FCN, U-Net, and Mask R-CNN.
In some embodiments, the endoscopic image may also be segmented using conventional image segmentation algorithms. The traditional image segmentation algorithm is mainly used for dividing different regions according to the characteristics of the image such as gray scale, color and the like, so that the internal properties of the same region are similar, and the properties of different regions are different. Exemplary conventional image segmentation methods may include threshold-based segmentation methods, region-based image segmentation methods, graph theory-based image segmentation methods, and the like.
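One concrete instance of the threshold-based family mentioned above is Otsu's method, which picks the grey-level threshold that maximises the between-class variance. The sketch below is a generic illustration of that technique, not the patent's specific algorithm:

```python
def otsu_threshold(pixels, levels=256):
    """Return the grey level maximising between-class variance."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    total_sum = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w0 = sum0 = 0
    for t in range(levels):
        w0 += hist[t]                  # pixels at or below t
        if w0 == 0:
            continue
        w1 = total - w0                # pixels above t
        if w1 == 0:
            break
        sum0 += t * hist[t]
        mu0 = sum0 / w0                # mean of the lower class
        mu1 = (total_sum - sum0) / w1  # mean of the upper class
        var = w0 * w1 * (mu0 - mu1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

# Two clearly separated intensity populations.
pixels = [10, 12, 11, 10, 200, 205, 210, 198]
t = otsu_threshold(pixels)
mask = [p > t for p in pixels]   # True marks the brighter class
```

Region-based and graph-theory-based methods follow the same idea of grouping pixels with similar properties, but use spatial connectivity or graph cuts instead of a single global threshold.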
In step S230, a region of interest in the endoscopic image is determined based on the tissue region and the instrument region, the region of interest including a portion of the tissue region and/or a portion of the instrument region. The region of interest may be an operation region for performing an operation on tissue using an instrument, or a region of interest when a user views an endoscopic image. The region outside the region of interest in the endoscopic image is defined as a region of non-interest.
In one embodiment, referring to fig. 3, the instrument region and the instrument category information in the endoscopic image may be determined according to a machine learning model or a neural network model, the target instrument region represented by the target instrument category information may be determined according to the instrument category information, the tip of the execution end of the target instrument region may be determined, and the region of interest may be determined centering on the tip of the execution end. For example, a circular region of interest may be determined with the end of the executing end as the center and the radius R, but it is understood that the shape of the region of interest is not limited to a circle, and may be other shapes.
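Taking the tip of the execution end as the centre, the circular region of interest of radius R can be rasterised as a binary mask. This is a minimal sketch; as noted above, the shape and radius are application choices:

```python
def circular_roi(width, height, cx, cy, radius):
    """Binary mask: 1 inside the circle of the given radius centred on
    the instrument tip (cx, cy), 0 elsewhere."""
    r2 = radius * radius
    return [[1 if (x - cx) ** 2 + (y - cy) ** 2 <= r2 else 0
             for x in range(width)]
            for y in range(height)]

# Tip detected at (2, 2) in a 5x5 image, radius R = 1.
roi = circular_roi(5, 5, cx=2, cy=2, radius=1)
```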
Illustratively, the execution end is the end of the surgical instrument that performs the surgical operation on the tissue. For example, when the target instrument category is a scalpel, the execution end is the blade-tip end; when the target instrument category is a needle-shaped instrument such as a biopsy needle, the execution end is the needle-tip end. The area surrounding the tip of the execution end is the tissue about to be operated on, so determining the region of interest centered on that tip gives this tissue high image quality and helps the operation proceed smoothly.
In some embodiments, a metallic medical instrument may reflect light and make the instrument region comparatively bright. To prevent this high brightness from affecting the tissue region, the outline of the region of interest may be determined centered on the tip of the execution end, the instrument region inside the outline removed, and the remaining tissue taken as the region of interest. In this embodiment, the region of interest includes tissue but no instrument region, and the tissue region comprises both tissue inside the region of interest and tissue outside it.
In other embodiments, category information of the tissue region output by the machine learning model or the neural network model may also be acquired, the target tissue region may be selected according to the category information of the tissue region, and the region of interest may be determined according to the target tissue region. For example, the target tissue region may be a tissue region in an endoscopic image that requires a surgical procedure.
After the region of interest is determined, in step S240, control parameters and/or image parameters of the endoscopic camera system are adjusted according to the region of interest to at least enhance the image quality of the region of interest. The control parameters of the endoscope camera system may include control parameters used when image signals are collected by the camera, and the image parameters of the endoscope camera system may be parameters used when image signals are processed.
In one embodiment, the image quality of the region of interest may be enhanced by preferentially ensuring its brightness. Specifically, different photometric weights may be set for the region of interest and the region of non-interest, with the weight of the region of interest being higher. The photometric value of the endoscopic image is determined from the photometric weights of the two regions, and the exposure parameters of the endoscopic imaging are adjusted according to this photometric value and a target photometric value so that the image's photometric value reaches the target. In some embodiments, the photometric weight of the region of interest may be 1 and that of the region of non-interest 0, i.e., only the photometric value of the region of interest is considered.
Referring to fig. 3 and 4, in some embodiments, different photometric weights may also be set for the tissue and instrument regions within the region of non-interest. The photometric weight of the region of interest is W_o, the photometric weight of the instrument region in the region of non-interest is W_i, and the photometric weight of the tissue region in the region of non-interest is W_b, where W_o is greater than both W_i and W_b, and W_i and W_b differ from each other. The brightness of all pixels in the endoscopic image is weighted and summed according to W_o, W_i, and W_b to obtain the photometric value of the endoscopic image. When the exposure parameters of the endoscopic imaging are adjusted according to the current photometric value and the target photometric value, the region of interest carries the largest weight in the photometric value, so its brightness is preferentially driven to the target brightness.
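A minimal sketch of this weighted metering follows. The weight values W_o = 1.0, W_i = 0.2, W_b = 0.1 are illustrative only; the text fixes only that W_o exceeds W_i and W_b and that W_i and W_b differ:

```python
def photometric_value(brightness, weights):
    """Weighted mean brightness over all pixels, where weights holds
    each pixel's metering weight (W_o, W_i, or W_b)."""
    num = sum(b * w for b, w in zip(brightness, weights))
    return num / sum(weights)

# Two ROI pixels (W_o = 1.0) dominate; a non-ROI instrument pixel
# (W_i = 0.2) and a non-ROI tissue pixel (W_b = 0.1) barely move
# the metered value even though their brightness is extreme.
brightness = [120, 130, 250, 40]
weights    = [1.0, 1.0, 0.2, 0.1]
pv = photometric_value(brightness, weights)
```

Because the metered value stays close to the ROI brightness (here about 132 despite the 250 and 40 outliers), driving it to the target primarily regulates the region of interest.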
Illustratively, the exposure parameters may include one or more of exposure time, exposure gain, and light source brightness. Specifically, exposure is a process of light sensing of an image sensor in a camera, in the exposure process, the image sensor collects photons and converts the photons into electric charges, and the electric charges are output after the exposure is finished to generate an electric signal. Controlling the exposure time enables controlling the total luminous flux of the camera. The exposure gain is controlled to control the photosensitive sensitivity of the image sensor, and the higher the gain is, the more sensitive the photosensitive is. The brightness of the light source is controlled, namely the intensity of visible light or special light irradiated to the target tissue is controlled and adjusted. The above three modes can adjust the overall brightness of the endoscope image, so that the photometric value of the endoscope image reaches the target brightness.
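Closing the loop between the metered and target values can be sketched as a proportional update of one exposure parameter, here exposure time. This is a toy controller under the stated assumption that luminous flux scales roughly with exposure time; the clamp limits and gain are invented, and a real system may instead (or additionally) adjust exposure gain or light-source brightness:

```python
def adjust_exposure(exposure_time, metered, target, gain=0.5,
                    t_min=0.1, t_max=40.0):
    """Nudge exposure time (ms) so the metered value approaches the
    target, assuming brightness is roughly proportional to it."""
    if metered <= 0:
        return t_max                     # no light measured: open up fully
    new_t = exposure_time * (1 + gain * (target - metered) / metered)
    return max(t_min, min(t_max, new_t))  # respect hardware limits

# An underexposed ROI (metered 60, target 120) lengthens the exposure.
t = adjust_exposure(10.0, metered=60, target=120)   # -> 15.0
```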
According to the embodiment of the invention, increasing the photometric weight of the region of interest preferentially ensures that its brightness meets expectations. For example, suppose tissue overexposure occurs and the region of interest does not intersect the overexposed tissue, i.e., the region of interest is far from the overexposed tissue or the tissue in the region of interest is not the overexposed tissue. If only global image quality were considered, the brightness of the entire image would be reduced to eliminate the overexposure, which would also darken the region of interest. By increasing the photometric weight of the region of interest, the embodiment of the invention preferentially adjusts the brightness of the region of interest, ensuring its image quality, uniform brightness, and clear field of view; even if the overexposed tissue remains bright during exposure adjustment, the user's operation is not affected.
When the interested area moves to the area containing the overexposure area, the photometric weight occupied by the overexposure area is increased, so that the overall photometric value of the endoscope image is obviously increased and exceeds the target photometric value, and at the moment, the brightness of the interested area is reduced by adjusting the exposure parameters of the endoscope imaging, so that the brightness of the original overexposed tissue is uniform, and the image quality of an operation area is ensured; even if the brightness of the non-interested area is too dark, the operation of the user on the interested area is not influenced.
For example, the region of interest may include a plurality of sub-regions, and different sub-regions may have different photometric weights, so as to further subdivide the importance of different sub-regions in the region of interest. For example, where the region of interest includes both a tissue region and an instrument region, the tissue region and the instrument region may have different photometric weights. For another example, where the region of interest includes multiple different types of tissue regions or different types of instrument regions, the different types of tissue regions or different types of instrument regions may have different photometric weights.
Similarly, the non-region of interest may also include a plurality of sub-regions, and different sub-regions may have different metering weights, so as to further subdivide the importance of different sub-regions in the non-region of interest. For example, where the non-region of interest includes both tissue and instrument regions, the tissue and instrument regions may have different photometric weights. For another example, where the region of non-interest includes multiple different types of tissue regions or different types of instrument regions, the different types of tissue regions or different types of instrument regions may have different photometric weights.
In another embodiment, adjusting the control parameters and/or image parameters of the endoscopic camera system according to the region of interest to at least enhance its image quality comprises: performing focus adjustment of the endoscopic imaging according to the region of interest so as to focus on the region of interest. Specifically, an optical element of the camera in the endoscopic camera system can be adjusted according to the position of the region of interest so that the camera focuses on it, preferentially ensuring the sharpness of the region of interest.
Illustratively, performing focus adjustment of endoscopic imaging according to a region of interest to focus on the region of interest includes: setting different focusing weights for the interested region and the non-interested region, wherein the focusing weight of the interested region is higher than that of the non-interested region; determining a focusing value of the endoscope image according to the focusing weight of the interested region and the focusing weight of the non-interested region; and controlling the endoscope camera system to perform focusing adjustment according to the focusing value so as to achieve the target definition. Illustratively, the sharpness values of the regions of interest and the regions of no interest may be weighted according to the focusing weights of the regions of interest and the regions of no interest to obtain a total sharpness, and the total sharpness reaches the target sharpness by adjusting an optical element of the camera. The calculation method of the definition value can be reasonably selected according to the actual application requirement.
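Since the text leaves the sharpness metric open, the weighted focus value can be sketched with a simple gradient-based measure. The sum of absolute horizontal intensity differences below is purely illustrative, as are the weight values:

```python
def sharpness(image):
    """Sum of absolute horizontal intensity differences -- a simple
    stand-in for any sharpness (focus) metric."""
    return sum(abs(row[i + 1] - row[i])
               for row in image for i in range(len(row) - 1))

def focus_value(roi_img, non_roi_img, w_roi=0.9, w_non=0.1):
    """Weighted total sharpness; the ROI weight dominates, so focus
    search is driven mainly by ROI clarity."""
    return w_roi * sharpness(roi_img) + w_non * sharpness(non_roi_img)

sharp_roi = [[0, 200, 0], [200, 0, 200]]    # strong edges: in focus
blurry_bg = [[90, 100, 95], [100, 95, 100]] # weak edges: out of focus
fv = focus_value(sharp_roi, blurry_bg)
```

During focus adjustment, the optical element would be stepped to maximise this weighted value, which peaks when the region of interest is sharp.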
For example, the region of interest may comprise a plurality of sub-regions, and different sub-regions may have different focusing weights, thereby further subdividing the importance of different sub-regions in the region of interest. For example, where the region of interest includes both a tissue region and an instrument region, the tissue region and the instrument region may have different focus weights. For another example, where the region of interest includes a plurality of different types of tissue regions or different types of instrument regions, the different types of tissue regions or different types of instrument regions may have different focusing weights.
In some embodiments, in addition to exposure and focusing based on the region of interest, other image processing, such as magnification, defogging, blood-stain removal, and red reduction (i.e., reducing red saturation), may be performed on the region of interest to further improve its image quality.
Based on the above description, the imaging method 200 for an endoscopic imaging system according to the embodiment of the present invention adjusts the control parameters and/or image parameters of the endoscopic imaging system according to the region of interest in the endoscopic image, and can preferentially ensure that the region of interest has high image quality.
Referring to fig. 5, another aspect of the present invention provides an imaging method 500 for an endoscopic camera system, including the following steps:
in step S510, performing endoscopic imaging through an endoscope to obtain a first endoscopic image;
in step S520, determining a region of interest in the first endoscopic image based on the trained machine learning model or neural network model;
in step S530, control parameters and/or image parameters of the endoscopic camera system are adjusted according to the region of interest to enhance at least image quality of the region of interest.
In one embodiment, determining the region of interest in the first endoscopic image based on the trained machine learning model or neural network model comprises: recognizing a non-tissue region in the first endoscopic image based on the model, obtaining class information of the non-tissue region, and determining the region of interest according to that class information. The machine learning model or neural network model is trained on endoscopic image data samples labeled with non-tissue regions and their category information. The structure and training of the model are as described above and are not repeated here.
Illustratively, the category information of the non-tissue region includes instrument category information and auxiliary item category information, the instrument may include a surgical instrument such as a scalpel, and the auxiliary item may include a surgical auxiliary item such as gauze. The target non-tissue region includes a target instrument region with target instrument category information, that is, the target instrument region in the first endoscopic image may be identified based on the trained machine learning model or neural network model, and the region of interest is determined according to the target instrument region.
Further, a target non-tissue region having target category information is determined, and a region of interest is determined around the target non-tissue region. The region of interest may include a tissue region within a predetermined range around the target non-tissue region. Illustratively, determining a region of interest around a target non-tissue region includes: determining the outline of the region of interest according to the target non-tissue region, and determining the region positioned in the outline as the region of interest. For example, the tip of the executing end of the target instrument region may be determined, the contour of the region of interest may be determined centering on the tip of the executing end, and the region located inside the contour may be determined as the region of interest. The outline of the region of interest may be circular or any other shape.
In other embodiments, determining the region of interest in the first endoscopic image based on the trained machine learning model or neural network model comprises: identifying a tissue region in the first endoscope image based on the trained machine learning model or the trained neural network model, obtaining the category information of the tissue region, and determining the region of interest according to the category information of the tissue region. The machine learning model or the neural network model is obtained by training endoscope image data samples labeled with tissue regions and category information thereof.
The type information of the tissue region may be type information of an organ to which the tissue region belongs. In this embodiment, the target tissue region with the target category information may be determined, the region of interest may be determined around the target tissue region, or the target tissue region with the target category information may be directly determined as the region of interest. In one embodiment, a contour of the region of interest may be determined from the target tissue region, and a region located inside the contour may be determined as the region of interest. The target tissue region may be a region of a tissue belonging to a target organ, and the target organ may be an organ that currently requires surgery.
In some embodiments, the trained machine learning model or neural network model may also directly output the location information of the region of interest in the first endoscopic image, and in this embodiment, the machine learning model or neural network model is trained based on endoscopic image sample data labeled with the region of interest.
In step S530, different photometric weights may be set for the region of interest and the region of non-interest, where the weight of the region of interest is higher; the region of non-interest is the region outside the region of interest in the endoscopic image. The photometric value of the first endoscopic image is determined from the photometric weights of the two regions, and the exposure parameters of the endoscopic imaging are adjusted according to this photometric value and the target photometric value so that the image's photometric value reaches the target. Since the region of interest carries the higher photometric weight, the exposure adjustment preferentially ensures its image brightness.
Alternatively, focus adjustment of the endoscopic imaging may be performed according to the region of interest to focus on the region of interest. For example, different focusing weights may be set for the region of interest and the region of non-interest, wherein the focusing weight of the region of interest is higher than the focusing weight of the region of non-interest; determining a focusing value of the endoscope image according to the focusing weight of the interested region and the focusing weight of the non-interested region; and controlling the endoscope camera system to perform focusing adjustment according to the focusing value. Since the focusing weight of the region of interest is higher, the focusing adjustment can preferentially ensure the definition of the region of interest.
After the control parameters and/or the image parameters of the endoscope camera system are adjusted based on the region of interest in the first endoscope image, endoscope imaging can be performed according to the adjusted control parameters and/or image parameters to obtain a second endoscope image, so that the region of interest in the second endoscope image has high image quality.
In addition, the imaging method 500 for the endoscopic camera system has many contents that are the same as or similar to the above-described imaging method 200 for the endoscopic camera system, and specific reference may be made to the above, which is not described herein again. The imaging method 500 for an endoscopic camera system adjusts control parameters and/or image parameters of the endoscopic camera system according to the region of interest, enabling priority to be given to ensuring image quality of the region of interest.
Referring again to fig. 1, the embodiment of the present invention further provides an endoscopic camera system 100, which includes a light source 110, a light guide bundle 120, an endoscope 130, a cable 140, a camera host 150, and a camera 160. The light source 110 is connected to the endoscope 130 through the light guide bundle 120; one end of the camera 160 is connected to the endoscope 130, and the other end is connected to the camera host 150 through the cable 140. The camera host can be used to execute the imaging method 200 or the imaging method 500 for an endoscopic camera system according to the embodiments of the present invention; the specific structure of the endoscopic camera system 100 and the specific steps of methods 200 and 500 have been described above and are not repeated here. The endoscopic camera system 100 according to the embodiment of the present invention adjusts the control parameters and/or image parameters according to the region of interest, and can preferentially ensure the image quality of the region of interest.
Although the illustrative embodiments have been described herein with reference to the accompanying drawings, it is to be understood that the foregoing illustrative embodiments are merely exemplary and are not intended to limit the scope of the invention thereto. Various changes and modifications may be effected therein by one of ordinary skill in the pertinent art without departing from the scope or spirit of the present invention. All such changes and modifications are intended to be included within the scope of the present invention as set forth in the appended claims.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the several embodiments provided in the present invention, it should be understood that the disclosed apparatus and method may be implemented in other manners. For example, the above-described device embodiments are merely illustrative, and for example, the division of the units is only one type of logical functional division, and other divisions may be realized in practice, for example, multiple units or components may be combined or integrated into another device, or some features may be omitted, or not executed.
In the description provided herein, numerous specific details are set forth. It is understood, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding the understanding of one or more of the various inventive aspects. This method of disclosure, however, is not to be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
It will be understood by those skilled in the art that all of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where such features are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments described herein include some features included in other embodiments but not others, combinations of features of different embodiments are meant to be within the scope of the invention and to form different embodiments. For example, in the claims, any of the claimed embodiments may be used in any combination.
The various component embodiments of the invention may be implemented in hardware, in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that a microprocessor or digital signal processor (DSP) may be used in practice to implement some or all of the functionality of some of the modules according to embodiments of the present invention. The present invention may also be implemented as device programs (e.g., computer programs and computer program products) for performing a part or all of the methods described herein. Such programs implementing the present invention may be stored on a computer-readable medium or may take the form of one or more signals. Such signals may be downloaded from an Internet website, provided on a carrier signal, or provided in any other form.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The invention can be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the words first, second, third, and so on does not indicate any ordering; these words may be interpreted as names.
The above description covers only specific embodiments of the present invention, and the protection scope of the present invention is not limited thereto. Any changes or substitutions that a person skilled in the art can readily conceive within the technical scope disclosed by the present invention shall be covered by its protection scope, which shall be defined by the claims.

Claims (15)

1. An imaging method for an endoscopic camera system, comprising:
performing endoscopic imaging through an endoscope to obtain an endoscopic image;
determining a tissue region and an instrument region in the endoscopic image;
determining a region of interest in the endoscopic image according to the tissue region and the instrument region, wherein the region of interest comprises a portion of the tissue region and/or a portion of the instrument region; and
adjusting control parameters and/or image parameters of the endoscopic camera system according to the region of interest so as to at least enhance the image quality of the region of interest.
2. The method of claim 1, wherein adjusting control parameters and/or image parameters of the endoscopic camera system in accordance with the region of interest to at least enhance image quality of the region of interest comprises:
setting different photometric weights for the region of interest and a non-interest region, wherein the photometric weight of the region of interest is higher than that of the non-interest region, and the non-interest region is the region of the endoscopic image outside the region of interest;
determining a photometric value of the endoscopic image according to the photometric weight of the region of interest and the photometric weight of the non-interest region; and
adjusting exposure parameters of the endoscopic imaging according to the photometric value and a target photometric value.
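For illustration only, the weighted metering of claim 2 can be sketched as follows. The claim does not specify the weights, the target photometric value, or how exposure is corrected; the values below (weights 0.8/0.2, target 118, a per-frame gain clamp) are hypothetical choices, not the patented implementation:

```python
import numpy as np

def weighted_photometric_value(luma, roi_mask, w_roi=0.8, w_bg=0.2):
    """Photometric value as a weighted mean: luminance inside the region
    of interest counts more than the non-interest region (illustrative weights)."""
    roi_mean = float(luma[roi_mask].mean())
    bg_mean = float(luma[~roi_mask].mean())
    return w_roi * roi_mean + w_bg * bg_mean

def exposure_gain(photometric_value, target=118.0, max_step=2.0):
    """Multiplicative exposure correction driving the measured value
    toward the target photometric value, clamped per frame."""
    gain = target / max(photometric_value, 1e-6)
    return float(np.clip(gain, 1.0 / max_step, max_step))
```

With a dark region of interest (mean luminance 60) against a brighter background (mean 100), the weighted photometric value is 0.8·60 + 0.2·100 = 68, so exposure is increased even though a frame-wide mean would call for a smaller correction.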
3. The method of claim 1, wherein adjusting control parameters and/or image parameters of the endoscopic camera system in accordance with the region of interest to at least enhance image quality of the region of interest comprises: performing focus adjustment of the endoscopic imaging according to the region of interest so as to focus on the region of interest.
4. The method of claim 3, wherein performing focus adjustment of the endoscopic imaging according to the region of interest to focus on the region of interest comprises:
setting different focusing weights for the region of interest and a non-interest region, wherein the focusing weight of the region of interest is higher than that of the non-interest region, and the non-interest region is the region of the endoscopic image outside the region of interest;
determining a focus value of the endoscopic image according to the focusing weight of the region of interest and the focusing weight of the non-interest region; and
controlling the endoscopic camera system to perform focus adjustment according to the focus value.
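Claim 4 leaves the sharpness metric unspecified. As a hedged sketch, a Laplacian-energy measure can stand in, with the focus value weighted so the region of interest dominates a contrast-based autofocus loop; the kernel and the 0.9/0.1 weights are illustrative assumptions:

```python
import numpy as np

LAPLACIAN = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], dtype=float)

def laplacian_response(img):
    """3x3 Laplacian by direct convolution over the valid interior."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(3):
        for j in range(3):
            out += LAPLACIAN[i, j] * img[i:i + h - 2, j:j + w - 2]
    return out

def focus_value(img, roi_mask, w_roi=0.9, w_bg=0.1):
    """Weighted focus value: mean Laplacian energy inside the region of
    interest weighted against the non-interest region."""
    energy = laplacian_response(img) ** 2
    inner = roi_mask[1:-1, 1:-1]        # align mask with the valid interior
    roi_sharp = float(energy[inner].mean()) if inner.any() else 0.0
    bg_sharp = float(energy[~inner].mean()) if (~inner).any() else 0.0
    return w_roi * roi_sharp + w_bg * bg_sharp
```

An autofocus loop would then sweep the lens position and keep the position that maximizes `focus_value`.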
5. The method of any of claims 1-4, wherein determining the tissue region and the instrument region in the endoscopic image comprises:
determining tissue and instrument regions in the endoscopic image based on a trained machine learning model or neural network model.
6. The method of claim 5, wherein the training process of the machine learning model or the neural network model comprises at least:
acquiring sample image data in which an instrument region and instrument category information are annotated;
constructing a training data set from the sample image data annotated with the instrument region and the instrument category information; and
training on the training data set to obtain the machine learning model or the neural network model.
7. The method of claim 6, wherein determining a region of interest in the endoscopic image from the tissue region and the instrument region comprises:
determining an instrument region and instrument category information in the endoscopic image according to the machine learning model or the neural network model; determining the end portion of the executing end of a target instrument region according to the instrument category information; and determining the region of interest centered on the end portion of the executing end.
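One way to realize the last step of claim 7 — a region of interest centered on the end portion (tip) of the executing end of the target instrument — is a fixed-size window clipped to the image bounds. The window size is a hypothetical parameter, not taken from the patent:

```python
def roi_around_tip(tip_xy, image_wh, roi_wh=(200, 150)):
    """Rectangle (x0, y0, x1, y1) centered on the detected instrument tip,
    shifted as needed so it stays entirely inside the image."""
    tx, ty = tip_xy
    iw, ih = image_wh
    rw, rh = roi_wh
    x0 = min(max(tx - rw // 2, 0), max(iw - rw, 0))
    y0 = min(max(ty - rh // 2, 0), max(ih - rh, 0))
    return (x0, y0, min(x0 + rw, iw), min(y0 + rh, ih))
```

When the tip sits near an image border, the window slides inward rather than shrinking, so the metering and focusing stages always receive a full-size region.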
8. An imaging method for an endoscopic camera system, comprising:
performing endoscopic imaging through an endoscope to obtain a first endoscopic image;
determining a region of interest in the first endoscopic image based on a trained machine learning model or neural network model;
adjusting control parameters and/or image parameters of the endoscopic camera system in accordance with the region of interest to at least enhance image quality of the region of interest.
9. The method of claim 8, wherein adjusting control parameters and/or image parameters of the endoscopic camera system in accordance with the region of interest to at least enhance image quality of the region of interest comprises:
setting different photometric weights for the region of interest and a non-interest region, wherein the photometric weight of the region of interest is higher than that of the non-interest region, and the non-interest region is the region of the first endoscopic image outside the region of interest;
determining a photometric value of the first endoscopic image according to the photometric weight of the region of interest and the photometric weight of the non-interest region; and
adjusting exposure parameters of the endoscopic imaging according to the photometric value and a target photometric value;
or, adjusting control parameters and/or image parameters of the endoscopic camera system according to the region of interest to at least enhance image quality of the region of interest comprises: performing focus adjustment of the endoscopic imaging according to the region of interest so as to focus on the region of interest.
10. The method of claim 8 or 9, wherein determining the region of interest in the first endoscopic image based on the trained machine learning model or neural network model comprises:
identifying a non-tissue region in the first endoscopic image based on the trained machine learning model or neural network model, and obtaining category information of the non-tissue region, wherein the machine learning model or neural network model is trained on endoscopic image data samples annotated with category information of non-tissue regions; and
determining the region of interest according to the category information of the non-tissue region;
or, determining the region of interest in the first endoscopic image based on the trained machine learning model or neural network model comprises:
identifying a tissue region in the first endoscopic image based on the trained machine learning model or neural network model, and obtaining category information of the tissue region, wherein the machine learning model or neural network model is trained on endoscopic image data samples annotated with category information of tissue regions; and
determining the region of interest according to the category information of the tissue region.
11. The method of claim 10, wherein determining the region of interest according to the category information of the non-tissue region comprises:
determining a target non-tissue region having target category information; and
determining the region of interest around the target non-tissue region;
or, determining the region of interest according to the category information of the tissue region comprises:
determining a target tissue region having target category information; and
determining the region of interest around the target tissue region.
12. The method of claim 11, wherein determining the region of interest around the target non-tissue region comprises:
determining a contour of the region of interest according to the target non-tissue region; and
determining the region located inside the contour as the region of interest;
or, determining the region of interest around the target tissue region comprises:
determining a contour of the region of interest according to the target tissue region; and
determining the region located inside the contour as the region of interest.
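As a minimal numeric sketch of claims 11-12, the region "around the target" can be taken as the padded bounding box of the target region's binary mask. The padding margin is an assumption for illustration; the claims themselves only require a contour-derived interior:

```python
import numpy as np

def roi_from_mask(mask, margin=10):
    """Padded bounding box (x0, y0, x1, y1) of a binary target-region mask;
    returns None when no target region was detected."""
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None
    h, w = mask.shape
    x0 = max(int(xs.min()) - margin, 0)
    y0 = max(int(ys.min()) - margin, 0)
    x1 = min(int(xs.max()) + 1 + margin, w)
    y1 = min(int(ys.max()) + 1 + margin, h)
    return (x0, y0, x1, y1)
```

The same rectangle can then feed the weighted metering of claim 9 or the focus adjustment of claim 3.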
13. The method of claim 12, wherein the category information of the non-tissue region includes instrument category information and accessory item category information, and the target non-tissue region includes a target instrument region having target instrument category information.
14. The method of claim 8, further comprising:
adjusting the control parameters and/or image parameters of the endoscopic camera system according to the region of interest to obtain a second endoscopic image.
15. An endoscopic camera system, comprising a light source, a light guide bundle, an endoscope, a cable, a camera host and a camera head, wherein the light source is connected to the endoscope through the light guide bundle, one end of the camera head is connected to the endoscope, the other end of the camera head is connected to the camera host through the cable, and the camera host is configured to execute the method of any one of claims 1-14.
CN202210689038.9A 2022-06-16 2022-06-16 Imaging method for endoscopic camera system and endoscopic camera system Pending CN115153397A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210689038.9A CN115153397A (en) 2022-06-16 2022-06-16 Imaging method for endoscopic camera system and endoscopic camera system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210689038.9A CN115153397A (en) 2022-06-16 2022-06-16 Imaging method for endoscopic camera system and endoscopic camera system

Publications (1)

Publication Number Publication Date
CN115153397A true CN115153397A (en) 2022-10-11

Family

ID=83485985

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210689038.9A Pending CN115153397A (en) 2022-06-16 2022-06-16 Imaging method for endoscopic camera system and endoscopic camera system

Country Status (1)

Country Link
CN (1) CN115153397A (en)


Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115908349A (en) * 2022-12-01 2023-04-04 北京锐影医疗技术有限公司 Method and equipment for automatically adjusting endoscope parameters based on tissue identification
CN115908349B (en) * 2022-12-01 2024-01-30 北京锐影医疗技术有限公司 Automatic endoscope parameter adjusting method and device based on tissue identification
CN117061841A (en) * 2023-06-12 2023-11-14 深圳市博盛医疗科技有限公司 Dual-wafer endoscope imaging method and imaging device
CN117456000A (en) * 2023-12-20 2024-01-26 杭州海康慧影科技有限公司 Focusing method and device of endoscope, storage medium and electronic equipment
CN117456000B (en) * 2023-12-20 2024-03-29 杭州海康慧影科技有限公司 Focusing method and device of endoscope, storage medium and electronic equipment

Similar Documents

Publication Publication Date Title
CN115153397A (en) Imaging method for endoscopic camera system and endoscopic camera system
EP3776458B1 (en) Augmented reality microscope for pathology with overlay of quantitative biomarker data
US20200364862A1 (en) Wound imaging and analysis
WO2023103467A1 (en) Image processing method, apparatus and device
JP6967602B2 (en) Inspection support device, endoscope device, operation method of endoscope device, and inspection support program
JP2016530917A (en) System and method for optical detection of skin diseases
JP7308258B2 (en) Medical imaging device and method of operating medical imaging device
WO2017150194A1 (en) Image processing device, image processing method, and program
JP6342810B2 (en) Image processing
US20230368379A1 (en) Image processing method and apparatus
CN106793939A (en) For the method and system of the diagnostic mapping of bladder
JPWO2020008834A1 (en) Image processing equipment, methods and endoscopic systems
WO2015199067A1 (en) Image analysis device, imaging system, surgery assistance system, image analysis method, and image analysis program
US10921252B2 (en) Image processing apparatus and method of operating image processing apparatus
KR20200026135A (en) The method for measuring microcirculation in cochlea and the apparatus thereof
WO2019171909A1 (en) Image processing method, image processing device, and program
CN113496475B (en) Imaging method and device in endoscope image pickup system and computer equipment
CN113693724B (en) Irradiation method, device and storage medium suitable for fluorescence image navigation operation
WO2022059668A1 (en) Medical image processing device and method for operating medical image processing device, and program for medical image processing device
CN115444355A (en) Endoscope lesion size information determining method, electronic device and storage medium
CN114449146A (en) Image processing method, image processing apparatus, electronic device, storage medium, and program product
Tran Van et al. Application of multispectral imaging in the human tympanic membrane
US12029386B2 (en) Endoscopy service support device, endoscopy service support system, and method of operating endoscopy service support device
CN112270662A (en) Image processing method, device, equipment and storage medium
US20220378276A1 (en) Endoscopy service support device, endoscopy service support system, and method of operating endoscopy service support device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Country or region after: China

Address after: Building 5, No. 828 Gaoxin Avenue, Donghu New Technology Development Zone, Wuhan City, Hubei Province, 430206

Applicant after: Wuhan Mindray Biomedical Technology Co.,Ltd.

Applicant after: SHENZHEN MINDRAY BIO-MEDICAL ELECTRONICS Co.,Ltd.

Address before: 430223 floor 3, building B1, zone B, high tech medical device Park, No. 818, Gaoxin Avenue, Donghu New Technology Development Zone, Wuhan, Hubei Province (Wuhan area of free trade zone)

Applicant before: Wuhan Mairui Medical Technology Research Institute Co.,Ltd.

Country or region before: China

Applicant before: SHENZHEN MINDRAY BIO-MEDICAL ELECTRONICS Co.,Ltd.