CN112386213A - Endoscopic imaging method, device, equipment and storage medium - Google Patents
Info
- Publication number
- CN112386213A (application CN202110078260.0A)
- Authority
- CN
- China
- Prior art keywords
- image
- light
- white
- imaging probe
- imaging
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B1/00—Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
- A61B1/06—Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor with illuminating arrangements
- A61B1/0638—Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor with illuminating arrangements providing two or more wavelengths
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B1/00—Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
- A61B1/00002—Operational features of endoscopes
- A61B1/00004—Operational features of endoscopes characterised by electronic signal processing
- A61B1/00009—Operational features of endoscopes characterised by electronic signal processing of image signals during a use of endoscope
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B1/00—Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
- A61B1/04—Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor combined with photographic or television appliances
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B1/00—Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
- A61B1/06—Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor with illuminating arrangements
- A61B1/0661—Endoscope light sources
- A61B1/0684—Endoscope light sources using light emitting diodes [LED]
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/50—Constructional details
- H04N23/555—Constructional details for picking-up images in sites, inaccessible due to their dimensions or hazardous conditions, e.g. endoscopes or borescopes
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/56—Cameras or camera modules comprising electronic image sensors; Control thereof provided with illuminating means
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/70—Circuitry for compensating brightness variation in the scene
- H04N23/74—Circuitry for compensating brightness variation in the scene by influencing the scene brightness using illuminating means
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/80—Camera processing pipelines; Components thereof
Landscapes
- Health & Medical Sciences (AREA)
- Life Sciences & Earth Sciences (AREA)
- Engineering & Computer Science (AREA)
- Surgery (AREA)
- Optics & Photonics (AREA)
- Signal Processing (AREA)
- Physics & Mathematics (AREA)
- Radiology & Medical Imaging (AREA)
- Medical Informatics (AREA)
- Pathology (AREA)
- Biophysics (AREA)
- Multimedia (AREA)
- Biomedical Technology (AREA)
- Heart & Thoracic Surgery (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Molecular Biology (AREA)
- Animal Behavior & Ethology (AREA)
- General Health & Medical Sciences (AREA)
- Public Health (AREA)
- Veterinary Medicine (AREA)
- Microelectronics & Electronic Packaging (AREA)
- Endoscopes (AREA)
Abstract
The application discloses an endoscopic imaging method, which comprises the following steps: identifying a target part where an imaging probe is currently located; and controlling the imaging probe to acquire a white-light-like image of the target part in a white-light-like illumination mode, wherein the color tone of the white-light-like image is similar to that of a white-light image of the target part, and, in the same image area, the contrast between the background information and the detail information in the white-light-like image under the blue channel is greater than the contrast between the background information and the detail information in the white-light image under the blue channel. By applying the technical scheme provided by the application, endoscopic imaging can present an imaging tone similar to that of a white-light image, which is easy for the user to interpret, while the contrast between the lesion area and the normal mucous membrane of each part is effectively enhanced, so that the omission ratio is reduced and the accuracy of the inspection result is improved. The application also discloses an endoscopic imaging apparatus, a device and a computer storage medium, which have corresponding technical effects.
Description
Technical Field
The present application relates to the field of endoscope technologies, and in particular, to an endoscopic imaging method, apparatus, device, and storage medium.
Background
With the progress of medical technology, the number of available examination methods has gradually increased. Examination results provide an important basis for subsequent diagnosis, so the accuracy of those results is all the more important.
In endoscopy, since the white light illumination mode is closer to the visual effect of human eyes, doctors usually perform a full-course examination on the digestive tract by using the white light illumination mode, and after suspicious lesions are found, the doctor switches to special light illumination modes such as NBI (Narrow Band Imaging), BLI (Blue Laser Imaging), LCI (Linked Color Imaging) and the like for further fine examination.
However, when endoscopic imaging is performed in the white light illumination mode, the contrast between the lesion area and the normal mucous membrane is low, missed diagnosis is easy to occur, and once a suspected lesion is missed in the white light illumination mode, the suspected lesion cannot be subsequently checked by switching to the special light illumination mode, so that a more accurate inspection result cannot be finally obtained.
Disclosure of Invention
The application aims to provide an endoscopic imaging method, an endoscopic imaging device, endoscopic imaging equipment and a storage medium, so that when endoscopic imaging is carried out, imaging tones similar to white light images can be presented, meanwhile, the contrast of a focus area and a normal mucous membrane is effectively enhanced, the omission ratio is reduced, and the accuracy of an inspection result is improved.
In order to solve the technical problem, the application provides the following technical scheme:
an endoscopic imaging method applied to an endoscopic imaging device, the endoscopic imaging device comprising an imaging probe that can be placed in a body, the endoscopic imaging method comprising:
identifying a target part where the imaging probe is currently located;
controlling the imaging probe to acquire a white light-like image of the target part in a white light-like illumination mode;
the similarity between the color tone of the background information in the white-light-like image and the color tone of the background information in the white-light image collected by the imaging probe at the target part in the white-light illumination mode is greater than or equal to a set similarity threshold;
and,
in the same image area, the contrast between the background information and the detail information in the white-light-like image under the blue channel is greater than the contrast between the background information and the detail information in the white-light image under the blue channel.
In one embodiment of the present application, the set similarity threshold is greater than or equal to 80%.
In one embodiment of the present application, the difference between the R/G of the background information of the white-light-like image and the R/G of the background information of the white-light image is within a first preset deviation range;
and the difference between the B/G of the background information of the white-light-like image and the B/G of the background information of the white-light image is within a second preset deviation range;
the R/G is the ratio of the red channel to the green channel; and the B/G is the ratio of the blue channel to the green channel.
In one embodiment of the present application, the first preset deviation range is determined according to 10% of R/G of the white light image, and the second preset deviation range is determined according to 10% of B/G of the white light image.
In one embodiment of the present application, the controlling the imaging probe to acquire the white-light-like image of the target portion in the white-light-like illumination mode includes:
determining a first spectrum corresponding to the target site in a white-like light illumination mode;
and controlling the imaging probe to acquire a white light-like image of the target part by using the first spectrum illumination.
In one embodiment of the present application, the determining a first spectrum corresponding to the target site in the white-like illumination mode includes:
and inquiring and determining a first spectrum corresponding to the target part in the white light-like illumination mode in the part spectrum corresponding table.
In an embodiment of the present application, the identifying a target site where the imaging probe is currently located includes:
acquiring an image currently acquired by the imaging probe;
and identifying a target part where the imaging probe is currently located based on the currently acquired image.
In a specific embodiment of the present application, the identifying, based on the currently acquired image, a target site where the imaging probe is currently located includes:
inputting the currently acquired image into a part recognition network obtained by pre-training;
and determining the current target position of the imaging probe according to the output result of the position identification network.
In an embodiment of the present application, before the inputting the currently acquired image into the pre-trained part recognition network, the method further includes:
and preprocessing the currently acquired image according to a set processing mode, wherein the processing mode comprises at least one mode of demosaicing, black level correction, denoising, enhancement and compression.
In a specific embodiment of the present application, the determining, according to the output result of the part identification network, a target part where the imaging probe is currently located includes:
and if the output results of the part identification network for the continuous N frames of images in the currently acquired image are the same, determining the part corresponding to the output result as the current target part of the imaging probe, wherein N is greater than or equal to 3.
In one embodiment of the present application, the determining the target site where the imaging probe is currently located according to the output result of the part identification network includes:
obtaining an output result for each recognition sub-network;
and determining the part corresponding to the output result with the largest proportion as the current target part of the imaging probe.
In one embodiment of the present application, the endoscopic imaging method further comprises:
if suspicious lesions exist in the white-light-like image, switching to other special light illumination modes;
determining a second spectrum corresponding to the target site in the other special light illumination modes;
and controlling the imaging probe to acquire a special light image of the target part by using the second spectrum illumination.
An endoscopic imaging apparatus that operates in an endoscopic imaging device, the endoscopic imaging device including an imaging probe that can be placed in the body, the endoscopic imaging apparatus comprising:
the part identification module is used for identifying a target part where the imaging probe is located currently;
the image acquisition module is used for controlling the imaging probe to acquire a white light-like image of the target part in a white light-like illumination mode;
the similarity between the color tone of the background information in the white-light-like image and the color tone of the background information in the white-light image collected by the imaging probe at the target part in the white-light illumination mode is greater than or equal to a set similarity threshold;
and,
in the same image area, the contrast between the background information and the detail information in the white-light-like image under the blue channel is greater than the contrast between the background information and the detail information in the white-light image under the blue channel.
An endoscopic imaging device comprising:
an imaging probe that can be placed in the body to acquire images;
an illumination light source for providing illumination light to the imaging probe;
a memory for storing a computer program;
a processor communicatively connected to the imaging probe, the illumination source and the memory, respectively, for implementing the endoscopic imaging method as described above when executing the computer program.
A computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements an endoscopic imaging method as described above.
By applying the technical scheme provided by the embodiment of the application, after the target part where the imaging probe placed in the body is currently located is identified, the imaging probe is controlled to acquire a white-light-like image of the target part in the white-light-like illumination mode. The color tone of the white-light-like image is similar to that of the white-light image, and, in the same image area, the contrast between the background information and the detail information in the white-light-like image under the blue channel is greater than the contrast between the background information and the detail information in the white-light image under the blue channel. Therefore, during endoscopic imaging, an imaging tone similar to that of a white-light image can be presented, which matches the doctor's viewing habits; at the same time, the contrast between mucosa and blood vessels is improved, the display of congested areas and fine capillaries is enhanced, and the lesion area is made more prominent, so that the contrast between the lesion area and the normal mucosa of each part is effectively enhanced, the omission ratio is reduced, and the accuracy of the inspection result is improved.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly described below. It is obvious that the drawings in the following description are only some embodiments of the present application, and that those skilled in the art can obtain other drawings from these drawings without creative effort.
FIG. 1 is a schematic diagram of a fixed spectrum used in the related art;
fig. 2 is a flow chart of an endoscopic imaging method according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of an adaptive endoscopic imaging system according to an embodiment of the present application;
FIG. 4 is a schematic diagram of a comparison between a white light image (left) and a white light-like image (right) of a human mouth according to an embodiment of the present application;
FIG. 5 is a schematic comparison of a white light image (left) and a white light-like image (right) of a pig stomach according to an embodiment of the present disclosure;
fig. 6 is a schematic structural view of an endoscopic imaging apparatus according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of an endoscopic imaging apparatus according to an embodiment of the present application.
Detailed Description
The core of the application is to provide an endoscopic imaging method, which can be applied to an endoscopic imaging device, wherein the endoscopic imaging device comprises an imaging probe which can be placed in a body. The method can be applied to any endoscopic scene, such as endoscopic examination of the digestive tract and the like. For convenience of description, the embodiments of the present application are described mainly in the context of endoscopy of the digestive tract.
In the related art, when endoscopic imaging is performed in the white light illumination mode, the contrast between a lesion area and the normal mucous membrane is low, and missed diagnosis easily occurs. In the special light illumination mode, imaging uses a fixed illumination spectrum that does not take into account the differences in the optical characteristics of the tissues at each anatomical part of the digestive tract, so the contrast between the lesion area and the normal mucosa differs greatly between parts. If the spectral illumination shown in fig. 1 is used, then in the resulting esophagus image and antrum image, the antrum image shows alternating red and yellow colors and the lesion is effectively highlighted, while the esophagus image appears blue and the blood vessels on the surface of the mucous membrane are difficult to distinguish. That is, even with the same spectral illumination shown in fig. 1, the contrast between the lesion area and the normal mucosa differs greatly between parts, which leads to missed diagnoses and therefore to inaccurate examination results.
According to the embodiment of the application, the target part where the imaging probe placed in the body is located at present is identified, and then the imaging probe is controlled to acquire the white-light-like image of the target part in the white-light-like illumination mode. The tone of the white-light-like image is similar to that of the white-light image, so that the identification by a user is facilitated, and in the same image area, the contrast ratio of the background information and the detail information in the white-light-like image under a blue channel is greater than that of the background information and the detail information in the white-light image under the blue channel. Therefore, when endoscopic imaging is carried out, imaging tones similar to white light images can be presented, meanwhile, the contrast of the focus area and the normal mucous membrane of each part can be effectively enhanced, the omission ratio is reduced, and the accuracy of the inspection result is improved.
In order that those skilled in the art will better understand the disclosure, the following detailed description will be given with reference to the accompanying drawings. It is to be understood that the embodiments described are only a few embodiments of the present application and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Referring to fig. 2, there is shown a flowchart for implementing an endoscopic imaging method according to an embodiment of the present application, where the method may include the following steps:
s210: and identifying the target part where the imaging probe is currently located.
In the embodiment of the present application, the imaging probe may specifically be an image sensor provided at the head end portion of the endoscope body. The imaging probe may be extended into the body to acquire images. Such as by extending into the lumen of the alimentary tract, and imaging the alimentary tract tissue.
When endoscopic imaging is needed, a target part where an imaging probe placed in a body is located can be identified.
Since the imaging probe is usually arranged at the head end of the endoscope body, the position of the head end is the position of the imaging probe. Therefore, in some embodiments, the scale marks on the endoscope body can be read in real time by a sensor to obtain the length of the endoscope body that has entered the body, and the target part where the imaging probe is currently located can then be determined based on this length and a pre-obtained comparison table of lengths and internal parts.
S220: and controlling the imaging probe to acquire a white light-like image of the target part in a white light-like illumination mode.
In the embodiment of the application, after the target part where the imaging probe is located at present is identified, the imaging probe is controlled to acquire the white-light-like image of the target part by applying the white-light-like illumination mode.
The white light-like illumination mode is an illumination mode similar to the white light illumination mode, and particularly refers to illumination by using a white light-like spectrum.
The white-light-like image of the target part acquired by the imaging probe in the white-light-like illumination mode has the following characteristics compared with the white-light image of the target part acquired by the imaging probe in the white-light illumination mode (i.e. the white-light-like image is compared with the white-light image acquired at the same position):
the similarity between the color tone of the background information in the white-light-like image and the color tone of the background information in the white-light-like image is greater than or equal to a set similarity threshold, and the contrast between the background information and the detail information in the white-light-like image under a blue channel is greater than the contrast between the background information and the detail information in the white-light image under the blue channel in the same image area.
The set similarity threshold can be set and adjusted according to actual conditions. Alternatively, the set similarity threshold may be greater than or equal to 80%.
Specifically, determining the similarity between the color tone of the background information in the white-light-like image and the color tone of the background information in the white-light image may include: converting the white-light-like image and the white-light image into an image space that expresses hue values, such as HSV or HSI; obtaining the hue values corresponding to the background information in that image space; and then comparing the similarity of the hue values of the background information in the two images.
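As a minimal sketch of this hue-based comparison (assuming OpenCV/NumPy, BGR input frames, and a precomputed boolean mask marking the background/mucosa pixels; the function name and the mapping from hue distance to a similarity score are illustrative, since the patent does not specify them):

```python
import cv2
import numpy as np

def background_hue_similarity(white_like_bgr, white_bgr, background_mask):
    """Compare the hue of the background (mucosa) regions of a white-light-like
    image and a white-light image of the same part; returns a value in [0, 1]."""
    hue_like = cv2.cvtColor(white_like_bgr, cv2.COLOR_BGR2HSV)[..., 0].astype(np.float32)
    hue_white = cv2.cvtColor(white_bgr, cv2.COLOR_BGR2HSV)[..., 0].astype(np.float32)
    mean_like = hue_like[background_mask].mean()
    mean_white = hue_white[background_mask].mean()
    # OpenCV stores hue on a 0-179 circle, so take the wrapped distance
    diff = abs(mean_like - mean_white)
    diff = min(diff, 180.0 - diff)
    return 1.0 - diff / 90.0  # identical hue -> 1.0, maximally distant hue -> 0.0
```

The resulting score could then be compared against the set similarity threshold (e.g. 80%).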
Or, in another embodiment, the R/G and B/G of the background information in the white-light-like image and in the white-light image may be extracted respectively, and then the difference in R/G and the difference in B/G of the background information in the two images may be compared. If the difference between the R/G of the background information of the white-light-like image and the R/G of the background information of the white-light image is within a first preset deviation range, and the difference between the B/G of the background information of the white-light-like image and the B/G of the background information of the white-light image is within a second preset deviation range, the similarity between the color tone of the background information of the white-light-like image and the color tone of the background information of the white-light image can be considered to be greater than or equal to the set similarity threshold.
Specifically, the first preset deviation range may be determined according to 10% of the R/G of the white light image; for example, the absolute value of the difference between the R/G of the white-light-like image and the R/G of the white light image at the same position is less than or equal to 10% of the R/G of that white light image. Similarly, the second preset deviation range may be determined according to 10% of the B/G of the white light image; for example, the absolute value of the difference between the B/G of the white-light-like image and the B/G of the white light image at the same position is less than or equal to 10% of the B/G of that white light image. Of course, other suitable first and/or second preset deviation ranges may also be obtained from experimental data.
It can be understood that, for an image, R/G is the ratio of the red channel to the green channel, and B/G is the ratio of the blue channel to the green channel. Specifically, R/G and B/G may be the ratios of the corresponding color channels within the effective imaging area of the image. The effective imaging area is the region of the image that includes tissue information.
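A minimal NumPy sketch of this ratio-based check, assuming RGB image arrays, a boolean mask for the effective imaging area, and the 10% deviation ranges described above; the function names are illustrative:

```python
import numpy as np

def channel_ratios(rgb_image, effective_mask):
    """Mean R/G and B/G inside the effective imaging area (the region of the
    frame that actually contains tissue information)."""
    r = rgb_image[..., 0][effective_mask].mean()
    g = rgb_image[..., 1][effective_mask].mean()
    b = rgb_image[..., 2][effective_mask].mean()
    return r / g, b / g

def tone_matches(white_like_rgb, white_rgb, effective_mask, tolerance=0.10):
    """Treat the tones as matching when the white-light-like ratios deviate from
    the white-light ratios by no more than 10% of the white-light values."""
    rg_like, bg_like = channel_ratios(white_like_rgb, effective_mask)
    rg_white, bg_white = channel_ratios(white_rgb, effective_mask)
    return (abs(rg_like - rg_white) <= tolerance * rg_white
            and abs(bg_like - bg_white) <= tolerance * bg_white)
```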
In this embodiment, the background information in an image collected by the endoscope generally corresponds to the mucosa, and the detail information generally corresponds to the blood vessels. The color tone of the mucosa is quite consistent across images of the same part, whereas the blood vessels show different tones depending on their depth in the mucosa and their size. By making the similarity between the color tone of the background information in the white-light-like image and the color tone of the background information in the white-light image greater than or equal to the set similarity threshold, the color tone of the white-light-like image is kept close to that of the white-light image, which is convenient for the user to view and reduces the learning cost.
In addition, among the red channel R, green channel G and blue channel B of an image, the blue channel B best reflects the contrast between the mucosa and the blood vessels in the tissue. Therefore, in this embodiment, by making the contrast between the background information and the detail information in the white-light-like image under the blue channel greater than the contrast between the background information and the detail information in the white-light image under the blue channel in the same image region, the contrast between the mucosa and the blood vessels can be improved, the display of congested areas and fine capillaries can be enhanced, the lesion area can be made more prominent, and the contrast between the lesion area and the normal mucosa can be effectively enhanced.
The white light image collected at the target site by the imaging probe of the embodiment in the white light illumination mode is only used for comparing image parameters/imaging effects with the white light-like image collected at the target site by the imaging probe in the white light-like illumination mode, so as to illustrate the imaging effects of the white light-like image of the embodiment; the present embodiment does not limit whether or when the white light image is collected during the endoscopic imaging operation.
Specifically, if the imaging effects of a white light image and a white-light-like image acquired by the imaging probe at the same position, for example the target part where the imaging probe is currently located, need to be compared, then during the endoscopic imaging operation, before or after the imaging probe is controlled to acquire the white-light-like image of the target part in the white-light-like illumination mode, the illumination mode may be switched to the white light illumination mode to acquire a white light image of the target part, and the two images are then compared. Of course, after the white-light-like image of the target part is acquired, a white light image of the target part acquired before the present endoscopic imaging operation may also be retrieved and compared with the white-light-like image.
In addition, it is understood that the "white light illumination mode" in the present embodiment may refer to a broadband white light spectrum illumination using a xenon lamp, a white light LED lamp, or the like, or may also refer to a white light spectrum illumination synthesized by using a narrow-band light group emitted from a multi-color LED lamp, a laser lamp, or the like, as long as the acquired white light image meets the clinical application requirements (for example, the average color rendering index is greater than 90).
By applying the method provided by the embodiment of the application, the target part where the imaging probe arranged in the body is located at present is identified, and then the imaging probe is controlled to acquire the white-light-like image of the target part in the white-light-like illumination mode. The tone of the white-light-like image is similar to that of the white-light image, so that the identification by a user is facilitated, and in the same image area, the contrast ratio of the background information and the detail information in the white-light-like image under a blue channel is greater than that of the background information and the detail information in the white-light image under the blue channel. Therefore, when endoscopic imaging is carried out, the imaging tone similar to a white light image can be presented, meanwhile, the contrast ratio of mucosa and blood vessels can be improved, the display of congestion areas and fine capillary vessels is enhanced, and the focus areas are more prominent, so that the contrast ratio of the focus areas of all parts and normal mucosa is effectively enhanced, the omission ratio is reduced, and the accuracy of an inspection result is improved.
In one embodiment of the present application, step S220 may include the steps of:
the method comprises the following steps: determining a first spectrum corresponding to the target part in a white-light-like illumination mode;
step two: and controlling the imaging probe to acquire a white light-like image of the target part by using the first spectrum illumination.
For convenience of description, the above two steps are combined for illustration.
Because the cell composition differs between parts, the optical characteristics of the different cell types also differ to some extent. Therefore, in order to obtain a better white-light-like image effect at each part, different white-light-like illumination spectra can be set for different parts in this embodiment. Taking the digestive tract as an example, the mucous membranes of the oral cavity and the esophagus are mainly composed of relatively thick squamous epithelial cells, which strongly reflect short-wavelength light; to achieve a better imaging effect, the illumination intensity of blue and blue-violet light can be reduced. The stomach is mainly composed of columnar epithelial cells, which strongly absorb short-wavelength light, so the illumination intensity of blue-violet light can be increased specifically when imaging the stomach. The duodenum contains a large amount of bilirubin, which absorbs violet light extremely strongly, so its white-light-like illumination spectrum can greatly increase the blue and blue-violet light to supplement the blue-channel information during imaging.
Therefore, in the embodiment, after the target part where the imaging probe placed in the body is currently located is identified, the first spectrum corresponding to the target part in the white-light-like illumination mode can be further determined. The first spectrum is a white light illumination spectrum corresponding to the target part.
Specifically, in the embodiment of the present application, the correspondence between each part and the spectrum in the white-light-like illumination mode may be obtained through historical data, experimental data, and the like, and a part spectrum correspondence table may be generated. Different sites have respective adapted spectra. After the target part where the imaging probe is located at present is identified, a first spectrum corresponding to the target part under the white-light-like illumination mode can be inquired and determined in the part spectrum corresponding table.
For example, if the identified target part is the oral cavity or the esophagus, the corresponding first spectrum is a spectrum referenced to the G light, with R/G in the range 0.8-1.2 and UV/G in the range 0.9-1.3; if the identified target part is the stomach, the corresponding first spectrum is a spectrum referenced to the G light, with R/G in the range 0.8-1.2 and UV/G in the range 2.5-3.5; if the identified target part is the duodenum, the corresponding first spectrum is a spectrum referenced to the G light, with R/G in the range 0.8-1.2, B/G in the range 0.3-0.7 and UV/G in the range 2.7-3.5. It is understood that here (i.e. for the illumination spectrum) R stands for red light, G for green light, B for blue light, and UV for ultraviolet light.
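A minimal sketch of what the part-spectrum correspondence table for the white-light-like illumination mode could look like, using the ratio ranges listed above; the dictionary layout, key names, and the choice of the mid-point of each range are assumptions:

```python
# Ratio ranges relative to the G (green) reference light, as listed above.
WHITE_LIGHT_LIKE_SPECTRA = {
    "oral_cavity": {"R/G": (0.8, 1.2), "UV/G": (0.9, 1.3)},
    "esophagus":   {"R/G": (0.8, 1.2), "UV/G": (0.9, 1.3)},
    "stomach":     {"R/G": (0.8, 1.2), "UV/G": (2.5, 3.5)},
    "duodenum":    {"R/G": (0.8, 1.2), "B/G": (0.3, 0.7), "UV/G": (2.7, 3.5)},
}

def first_spectrum_for(site: str) -> dict:
    """Query the correspondence table and pick the middle of each allowed range
    as the target ratio for the first spectrum."""
    ranges = WHITE_LIGHT_LIKE_SPECTRA[site]
    return {band: (low + high) / 2.0 for band, (low, high) in ranges.items()}
```

The looked-up ratios would then be handed to the light source control so that the first spectrum can be produced for the identified part.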
After the target part where the imaging probe placed in the body is located at present is identified, and the first spectrum corresponding to the target part in the white-light-like illumination mode is determined, the first spectrum can be used for illumination, and meanwhile, the imaging probe is controlled to acquire the white-light-like image of the target part. Namely, the white light-like image is the image acquired by the imaging probe under the first spectrum illumination.
The first spectrum is a spectrum which is adaptive to the target part in a white-light-like illumination mode, and the first spectrum is used for illumination, so that the imaging tone of a white-light-like image of the target part acquired by the imaging probe is close to that of the white-light image, the difference between a mucous membrane and a blood vessel can be highlighted, a focus area can be highlighted, and the omission ratio is reduced.
When endoscopic imaging is carried out, the adaptive white-light-like spectrum illumination is used for different parts, the optical characteristics of each part can be adapted, and the optimal white-light-like imaging effect can be achieved at each part.
In an embodiment of the present application, in order to overcome the problem of structural difference in the body and improve the accuracy of the site identification, the step S210 of identifying the target site where the imaging probe is currently located may include the following steps:
the method comprises the following steps: acquiring an image currently acquired by an imaging probe;
step two: and identifying the current target position of the imaging probe based on the currently acquired image.
For convenience of description, the above two steps are combined for illustration.
In the embodiment of the application, when the endoscopy needs to be performed, the imaging probe can be firstly extended into the body to perform image acquisition. Any spectral illumination can be used during the examination, and the imaging probe is controlled to acquire the image of the current position. Specifically, when the endoscopy is just started, the white light spectrum can be used for illumination by default, namely, the imaging probe is controlled to acquire the image of the current position in the white light illumination mode, and the currently acquired image is a white light image; during the endoscopic procedure, different illumination modes may be switched, for example, if the current mode is switched to the special light illumination mode, the acquired image is a special light image. Either a white light image or a special light image may be used to perform subsequent site recognition.
After the image currently acquired by the imaging probe is acquired, the target part where the imaging probe is currently located can be identified based on the currently acquired image.
Before the target part is identified based on the currently acquired image, the currently acquired image may be preprocessed according to a set processing mode, where the processing mode may include at least one of demosaicing, black level correction, denoising, enhancement, and compression. Preprocessing the currently acquired image and then identifying the target part where the imaging probe is currently located based on the preprocessed image can improve recognition accuracy and efficiency. For the compression step, the image can be compressed to a resolution of 192 × 192 or less using a resize method, so as to improve subsequent recognition efficiency.
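A rough sketch of such a preprocessing chain with OpenCV; the Bayer pattern, black level value, and denoising parameters are assumptions chosen only to make the example runnable, not values from the patent:

```python
import cv2
import numpy as np

def preprocess(raw_bayer_frame, black_level=64, target_size=(192, 192)):
    """Black level correction, demosaicing, denoising, and compression of the
    currently acquired frame to 192x192 or less before part recognition."""
    corrected = np.clip(raw_bayer_frame.astype(np.int32) - black_level,
                        0, 255).astype(np.uint8)
    rgb = cv2.cvtColor(corrected, cv2.COLOR_BayerRG2RGB)        # demosaic
    denoised = cv2.fastNlMeansDenoisingColored(rgb, None, 3, 3, 7, 21)
    return cv2.resize(denoised, target_size, interpolation=cv2.INTER_AREA)
```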
In a specific embodiment of the present application, the process of identifying the target site where the imaging probe is currently located based on the currently acquired image may specifically be performed according to the following steps:
the first step is as follows: inputting the currently acquired image into a part recognition network obtained by pre-training;
the second step is that: and determining the current target position of the imaging probe according to the output result of the position identification network.
In the embodiment of the application, training data can be used to train the part recognition network. For example, for the digestive tract, a digestive tract part recognition network can be trained on a clinical digestive tract database. Specifically, general models such as VGG (Visual Geometry Group network), ResNet (deep residual network), GoogLeNet (a deep convolutional neural network) and MobileNetV2 (a lightweight neural network) can be used. The training database can comprise a plurality of training images; the features of each training image are extracted, the part corresponding to each training image is labeled, and training and inference of the part recognition network are carried out on that basis. A lightweight neural network model can reduce model complexity and improve inference speed.
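For illustration only, a sketch of a lightweight part recognition backbone along the lines described above, using torchvision's MobileNetV2 (torchvision ≥ 0.13 API); the number of classes and the label set are assumptions:

```python
import torch
from torchvision import models

SITES = ["oral_cavity", "esophagus", "stomach", "duodenum"]  # assumed label set

def build_site_classifier(num_sites: int = len(SITES)) -> torch.nn.Module:
    """MobileNetV2 with its final classifier layer replaced so that it outputs
    one score per digestive-tract part."""
    net = models.mobilenet_v2(weights=None)   # weights to be learned from the clinical database
    in_features = net.classifier[1].in_features
    net.classifier[1] = torch.nn.Linear(in_features, num_sites)
    return net

# Inference on one preprocessed frame, assumed to be a (1, 3, 192, 192) tensor:
# logits = build_site_classifier()(frame)
# site = SITES[logits.argmax(dim=1).item()]
```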
After the image currently acquired by the imaging probe placed in the body is acquired, the currently acquired image can be input into a part recognition network obtained by pre-training. Or after the current acquired image is preprocessed, the preprocessed image is input into the part recognition network.
The output result of the part recognition network may be a name of a part corresponding to the input image or a probability of each part corresponding to the input image. According to the output result of the part identification network, the target part where the imaging probe is located at present can be determined. For example, the part corresponding to the output result of the part identification network can be directly determined as the target part where the imaging probe is currently located.
It should be noted that the part recognition network may be obtained by training, as training data, images acquired by the imaging probe in multiple illumination modes, so that no matter which illumination mode the image acquired by the imaging probe is currently acquired in, accurate part recognition may be performed on the image. Or, the images acquired by the imaging probe in different illumination modes can be used as training data to respectively train and acquire a plurality of part recognition networks, and after the image currently acquired by the imaging probe is acquired, the corresponding part recognition network is selected according to the illumination mode used by the imaging probe to perform part recognition.
In a specific embodiment of the present application, the number of currently acquired images is greater than or equal to 3; if the output results of the part identification network for N consecutive frames of the currently acquired images are all the same, the part corresponding to that output result is determined as the target part where the imaging probe is currently located, where N is greater than or equal to 3.
In order to improve the accuracy of part identification, improve the stability of the spectral adjustment, and prevent dimming flicker, in the embodiment of the present application the images currently acquired by the imaging probe may include multiple consecutive frames; for example, the number of acquired images may be greater than or equal to 3. Each currently acquired frame is input into the part identification network, yielding an output result for each frame. If the output results of the part identification network for N consecutive frames of the currently acquired images are all the same, the part corresponding to that output result can be determined as the target part where the imaging probe is currently located, where N is a positive integer greater than or equal to 3. In other words, the part identification network performs part recognition on multiple currently acquired frames, and the current recognition result is considered accurate only when the same part is recognized several times in a row.
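A minimal sketch of this N-consecutive-frame confirmation; the class and method names are ours, not from the patent:

```python
from collections import deque

class SiteConsensus:
    """Only confirm a part after the recognition network has returned the same
    result for N consecutive frames (N >= 3), which keeps the spectrum
    adjustment stable and avoids dimming flicker."""

    def __init__(self, n: int = 3):
        self.n = n
        self.recent = deque(maxlen=n)

    def update(self, predicted_site):
        self.recent.append(predicted_site)
        if len(self.recent) == self.n and len(set(self.recent)) == 1:
            return predicted_site   # stable result: safe to switch the spectrum
        return None                 # not yet confirmed: keep the current spectrum
```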
In another embodiment of the present application, the site recognition network may include a plurality of recognition subnetworks, and different recognition subnetworks may be trained based on different types of models. When the target position where the imaging probe is currently located is determined according to the output result of the position identification network, the output result of each identification subnetwork can be obtained, and then the position corresponding to the output result with the largest proportion is determined as the target position where the imaging probe is currently located.
The recognition accuracy of recognition sub-networks trained on different types of models differs to some extent. After the image currently acquired by the imaging probe placed in the body is obtained, or after the currently acquired image has been preprocessed, it can be input into each recognition sub-network separately to obtain the output result of each recognition sub-network. The output results of different recognition sub-networks may be the same or different; if a particular output result accounts for a larger proportion, it is endorsed by more recognition sub-networks and is therefore more likely to be accurate, so the part corresponding to the output result with the largest proportion can be determined as the target part where the imaging probe is currently located.
Of course, if there are a plurality of output results with the largest ratio, corresponding to a plurality of parts, the plurality of parts may be determined as the target parts where the imaging probe is currently located, and the operations of the subsequent steps may be performed respectively to obtain a plurality of white light-like images.
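A short sketch of this majority vote over the recognition sub-networks, including the tie case described above; the function name is illustrative:

```python
from collections import Counter

def vote_on_site(subnetwork_outputs):
    """Return every part that ties for the largest share of sub-network votes.
    Usually this is a single part; on a tie, all tied parts are returned and
    each can be imaged in the white-light-like mode as described above."""
    counts = Counter(subnetwork_outputs)
    top = max(counts.values())
    return [site for site, votes in counts.items() if votes == top]

# vote_on_site(["stomach", "stomach", "duodenum"])  -> ["stomach"]
# vote_on_site(["stomach", "duodenum"])             -> ["stomach", "duodenum"]
```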
In one embodiment of the present application, the method may further comprise the steps of:
the method comprises the following steps: if suspicious lesions exist in the white-light-like image, switching to other special light illumination modes;
step two: determining second spectrums corresponding to the target part under other special light illumination modes;
step three: and controlling the imaging probe to acquire a special light image of the target part by using the second spectrum illumination.
For convenience of description, the above three steps are combined for illustration.
In the embodiment of the application, the imaging probe is controlled to acquire the white-light-like image of the target part under the first spectrum illumination; the imaging tone of the white-light-like image is close to that of the white-light image, and the difference between the mucous membrane and the blood vessels is highlighted, which makes it easy to determine whether a suspicious lesion exists in the white-light-like image. Therefore, in this embodiment, lesion identification can be further performed based on the collected white-light-like image, and if a suspicious lesion exists in the white-light-like image, the system can switch to another special light illumination mode, such as NBI, BLI or LCI, to finely screen the lesion area.
After switching to the other special light illumination mode, a second spectrum corresponding to the target site in the other special light illumination mode can be determined. The corresponding relation between each part and the spectrum under other special light illumination modes can be obtained through historical data, test data and the like, and recorded in the part spectrum corresponding table. The absorption and reflection properties of different parts for the spectrum have difference, and different parts have respectively adaptive spectra. After switching to other special light illumination modes, the second spectrum corresponding to the target part under other special light illumination modes can be inquired and determined in the part spectrum correspondence table.
And controlling the imaging probe to acquire a special light image of the target part by using the second spectrum illumination so as to further screen the focus area.
When the endoscope performs endoscopic imaging, adaptive spectral illumination is used for different parts, so that each part can show the best imaging effect. In the white-light-like illumination mode, the overall imaging tone is close to that of the white light illumination mode, while the display of congested areas and fine capillaries is enhanced, making the lesion area more prominent; this effectively reduces the omission ratio and improves the disease screening capability. In the special light illumination mode, the contrast between the mucosa and the blood vessels is enhanced, making it easier to diagnose the nature of the lesion.
For the sake of understanding, the present application will be described again by taking the adaptive endoscopic imaging system shown in fig. 3 as an example of the imaging process of the digestive tract.
The adaptive endoscopic imaging system may include an imaging probe, an image processor, an artificial intelligence analysis subsystem, a spectral adjustment subsystem, a light source master control subsystem, an illumination light source, and a display. The display is not shown in fig. 3.
After the imaging probe extends into the alimentary canal cavity, image acquisition can be carried out, so that the image currently acquired by the imaging probe can be acquired. Such as an imaging probe, may acquire images in a white light illumination mode.
The image processor performs pre-processing on the currently acquired image, such as a series of image processing including demosaicing, black level correction, denoising, enhancement, etc., and then compresses the image to a resolution of 192 × 192 or less by using a resize method. The purpose of image compression is to reduce the calculation parameters of the artificial intelligence model and improve the model training and reasoning efficiency.
The artificial intelligence analysis subsystem performs part recognition on the image processed by the image processor, for example through a part recognition network; after the part is recognized, a command can be sent to the spectrum adjustment subsystem through a serial port communication protocol to complete one light adjustment.
And the spectrum adjusting subsystem determines a corresponding spectrum according to the identified part and performs spectrum adjustment. Specifically, the lighting spectrum corresponding to the identified portion may be called in a form of table lookup or the like, and the light source main control subsystem is notified to control the lighting source to complete the switching of the lighting spectrum.
The illumination source may be composed of a multi-wavelength LED, for example, a four-wavelength LED, whose wavelength ranges are: 400nm to 435nm, 440nm to 485nm, 520nm to 580nm, and 600nm to 660 nm. The driving current of each LED in the lighting source can be independently controlled by the light source main control subsystem.
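A hedged sketch of how the spectrum adjustment could map the target ratios onto LED drive currents; `set_led_current` stands in for whatever interface the light source master control subsystem actually exposes, and the band-to-ratio mapping (e.g. treating the 400 nm-435 nm band as the "UV" component) and the linear current-to-output relation are assumptions:

```python
# Four-wavelength illumination source, with the band names assumed here.
LED_BANDS_NM = {
    "blue_violet": (400, 435),
    "blue":        (440, 485),
    "green":       (520, 580),
    "red":         (600, 660),
}

def apply_spectrum(set_led_current, target_ratios, green_current_ma=100.0):
    """Drive each LED so its output tracks the requested ratio relative to the
    G reference channel."""
    currents = {
        "green":       green_current_ma,
        "red":         target_ratios.get("R/G", 1.0) * green_current_ma,
        "blue":        target_ratios.get("B/G", 1.0) * green_current_ma,
        "blue_violet": target_ratios.get("UV/G", 1.0) * green_current_ma,
    }
    for band, milliamps in currents.items():
        set_led_current(band, milliamps)   # per-LED drive current, set independently
```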
Fig. 4 is a schematic comparison diagram of a white light image (left) and a white-like light image (right) of a human oral cavity, and it can be found that the overall color tone of the white-like light image collected in the white-like light illumination mode is close to the color tone of the white light image collected in the white light illumination mode, and the white-like light image has a better effect on the highlighting of the capillary vessels, as shown in a rectangular frame. Fig. 5 is a schematic diagram showing a comparison between a white light image (left) and a white-like light image (right) of a pig stomach, and it can be found that the white-like light image acquired in the white-like light illumination mode has a better effect on highlighting a congestion lesion, as shown in a rectangular frame.
In accordance with the above method embodiments, the present application further provides an endoscopic imaging apparatus, which is operated in an endoscopic imaging device, the endoscopic imaging device includes an imaging probe that can be placed in a body, and the endoscopic imaging apparatus described below and the endoscopic imaging method described above are referred to in correspondence.
Referring to fig. 6, the apparatus may include the following modules:
the part identification module 610 is used for identifying a target part where the imaging probe is located currently;
the image acquisition module 620 is used for controlling the imaging probe to acquire a white-light-like image of the target part in a white-light-like illumination mode;
the similarity between the color tone of the background information in the white-light-like image and the color tone of the background information in the white-light image collected by the imaging probe at the target part in the white-light illumination mode is greater than or equal to a set similarity threshold;
and,
in the same image area, the contrast between the background information and the detail information in the white-light-like image under the blue channel is greater than the contrast between the background information and the detail information in the white-light image under the blue channel.
By applying the device provided by the embodiment of the application, the target part where the imaging probe arranged in the body is located at present is identified, and then the imaging probe is controlled to acquire the white-light-like image of the target part in the white-light-like illumination mode. The tone of the white-light-like image is similar to that of the white-light image, so that the identification by a user is facilitated, and in the same image area, the contrast ratio of the background information and the detail information in the white-light-like image under a blue channel is greater than that of the background information and the detail information in the white-light image under the blue channel. Therefore, when endoscopic imaging is carried out, imaging tones similar to white light images can be presented, meanwhile, the contrast of the focus area and the normal mucous membrane of each part can be effectively enhanced, the omission ratio is reduced, and the accuracy of the inspection result is improved.
In one embodiment of the present application, the similarity threshold is set to be greater than or equal to 80%.
In one embodiment of the present application, the difference between the R/G of the white-light-like image and the R/G of the white-light image is within a first preset deviation range, and the difference between the B/G of the white-light-like image and the B/G of the white-light image is within a second preset deviation range; R/G is the ratio of the red channel to the green channel; B/G is the ratio of the blue channel to the green channel.
In one embodiment of the present application, the first preset deviation range is determined according to 10% of R/G of the white light image, and the second preset deviation range is determined according to 10% of B/G of the white light image.
In one embodiment of the present application, the image capturing module 620 includes:
the spectrum determining unit is used for determining a first spectrum corresponding to the target part in the white-light-like illumination mode;
and the image acquisition unit is used for controlling the imaging probe to acquire a white light-like image of the target part by using the first spectrum illumination.
In an embodiment of the application, the spectral determination unit is configured to:
and inquiring and determining a first spectrum corresponding to the target part in the white light-like illumination mode in the part spectrum corresponding table.
In one embodiment of the present application, the location identification module 610 is configured to:
acquiring an image currently acquired by an imaging probe;
and identifying the current target position of the imaging probe based on the currently acquired image.
In a specific embodiment of the present application, the identifying the target region where the imaging probe is currently located by the region identifying module 610 based on the currently acquired image specifically includes:
inputting the currently acquired image into a part recognition network obtained by pre-training;
and determining the current target position of the imaging probe according to the output result of the position identification network.
In one embodiment of the present application, the identifying the target region where the imaging probe is currently located by the region identifying module 610 based on the currently acquired image further includes:
before inputting the currently acquired image into a pre-trained part recognition network, preprocessing the currently acquired image according to a set processing mode, wherein the processing mode comprises at least one of demosaicing, black level correction, denoising, enhancement and compression.
In a specific embodiment of the present application, the number of the currently acquired images is greater than or equal to 3, and the determining, by the part identification module 610, the current target part of the imaging probe according to the output result of the part identification network specifically includes:
under the condition that the output results of the part identification network for the continuous N frames of images in the currently acquired images are the same, the part corresponding to the output result is determined as the current target part of the imaging probe, and N is greater than or equal to 3.
In an embodiment of the present application, the part recognition network includes a plurality of sub-recognition networks, and the determining, by the part identification module 610, of the target part where the imaging probe is currently located according to the output result of the part recognition network specifically includes:
obtaining the output result of each sub-recognition network;
and determining the part corresponding to the output result that accounts for the largest proportion as the target part where the imaging probe is currently located.
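This amounts to a majority vote over the sub-recognition networks, which might look roughly like the following sketch (function name assumed):

```python
from collections import Counter

def fuse_sub_network_results(results):
    """Take the part predicted by the largest share of sub-recognition
    networks as the current target part; also return that share."""
    label, votes = Counter(results).most_common(1)[0]
    return label, votes / len(results)

# e.g. three sub-networks, two of which output "stomach"
part, share = fuse_sub_network_results(["stomach", "stomach", "esophagus"])
```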
In an embodiment of the present application, the image acquisition module 620 is further configured to:
switch to another special-light illumination mode if a suspicious lesion exists in the white-light-like image;
determine a second spectrum corresponding to the target part in the other special-light illumination mode;
and control the imaging probe to acquire a special-light image of the target part under illumination with the second spectrum.
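A minimal control-flow sketch of this mode switch is shown below; `probe`, `spectrum_table`, their methods, and the "special_light" key are hypothetical interfaces used only for illustration.

```python
def follow_up_on_suspicious_lesion(probe, target_part, lesion_suspected, spectrum_table):
    """If a suspicious lesion is seen in the white-light-like image, look up the
    second spectrum of the target part for a special-light mode, switch the
    illumination, and acquire a special-light image."""
    if not lesion_suspected:
        return None
    second_spectrum = spectrum_table[target_part]["special_light"]  # second spectrum
    probe.set_illumination(second_spectrum)                         # switch illumination mode
    return probe.capture()                                          # special-light image
```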
It is to be understood that the above-described embodiments of the apparatus are merely illustrative, and the modules illustrated as separate components may or may not be physically separate, may be located in one place, or may be distributed over a plurality of network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment.
Corresponding to the above method embodiments, the present application further provides an endoscopic imaging apparatus, including:
an imaging probe 14 that is implantable in the body to acquire images;
an illumination light source 15 for providing illumination light to the imaging probe;
a memory 11 for storing a computer program;
and the processor 10, communicatively connected to the imaging probe 14, the illumination light source 15 and the memory 11, respectively, and configured, when executing the computer program, to implement the endoscopic imaging method of any of the above embodiments, for example the endoscopic imaging method shown in fig. 2.
Specifically, as shown in fig. 7, which is a schematic view of a constituent structure of an endoscopic imaging apparatus, the endoscopic imaging apparatus may include: a processor 10, a memory 11, a communication interface 12, a communication bus 13, an imaging probe 14, and an illumination source 15. The processor 10, the memory 11, the communication interface 12, the imaging probe 14 and the illumination source 15 all communicate with each other via a communication bus 13.
In the embodiment of the present application, the processor 10 may be a Central Processing Unit (CPU), an application specific integrated circuit, a digital signal processor, a field programmable gate array or other programmable logic device, etc.
The processor 10 may invoke a program stored in the memory 11, and in particular, the processor 10 may perform operations in embodiments of the endoscopic imaging method.
The memory 11 is used for storing one or more computer programs. A computer program may include program code, and the program code includes computer operation instructions. In this embodiment of the application, the memory 11 stores at least a computer program for realizing the following functions:
identifying a target part where an imaging probe is currently located;
controlling an imaging probe to acquire a white-light-like image of a target part in a white-light-like illumination mode;
the difference between the R/G of the white-light-like image and the R/G of the white-light image collected by the imaging probe at the target part in the white-light illumination mode is within a first preset deviation range, and the difference between the B/G of the white-light-like image and the B/G of the white-light image is within a second preset deviation range, so that the similarity between the tone of the white-light-like image and the tone of the white-light image is greater than a set similarity threshold, where R/G is the ratio of the red channel to the green channel and B/G is the ratio of the blue channel to the green channel;
and,
in the same image area, the contrast between the background information and the detail information in the white-light-like image under the blue channel is greater than the contrast between the background information and the detail information in the white-light image under the blue channel.
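The blue-channel contrast comparison just listed could be quantified roughly as in the following sketch; the Michelson-style contrast measure and the availability of a `detail_mask` marking detail pixels within the analysed image area are assumptions of this sketch.

```python
import numpy as np

def blue_channel_contrast(image, detail_mask):
    """Michelson-style contrast between detail pixels (e.g. vessel or lesion
    texture, marked by detail_mask) and background pixels in the blue channel."""
    blue = image[..., 2].astype(np.float64)
    detail = blue[detail_mask].mean()
    background = blue[~detail_mask].mean()
    return abs(background - detail) / max(background + detail, 1e-6)

def white_light_like_improves_contrast(white_like_img, white_img, detail_mask):
    """Over the same image area, the white-light-like image should show higher
    blue-channel contrast than the white-light image."""
    return (blue_channel_contrast(white_like_img, detail_mask)
            > blue_channel_contrast(white_img, detail_mask))
```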
In one possible implementation, the memory 11 may include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required for at least one function (such as an image recognition function and a spectrum adjustment function), and the like; the storage data area may store data created during use, such as image data, spectral data, and the like.
Further, the memory 11 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device or other non-volatile solid-state storage device.
The communication interface 12 may be an interface of a communication module for connecting with other devices or systems.
It should of course be noted that the structure shown in fig. 7 does not constitute a limitation of the endoscopic imaging device in the embodiments of the present application; in practical applications, the endoscopic imaging device may include more or fewer components than those shown in fig. 7, or certain components may be combined.
In accordance with the above method embodiments, the present application also provides a computer readable storage medium, on which a computer program is stored, which when executed by a processor implements the above endoscopic imaging method.
The embodiments are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same or similar parts among the embodiments are referred to each other.
Those of skill in the art would further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or a combination of both. To clearly illustrate this interchangeability of hardware and software, the various illustrative components and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and the design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in random access memory (RAM), flash memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The principles and implementations of the present application have been explained herein by using specific examples; the above description of the embodiments is only intended to help understand the technical solution and the core idea of the present application. It should be noted that those skilled in the art can make several improvements and modifications to the present application without departing from the principles of the present application, and such improvements and modifications also fall within the scope of the claims of the present application.
Claims (15)
1. An endoscopic imaging method applied to an endoscopic imaging apparatus including an imaging probe that is implantable in a body, the endoscopic imaging method comprising:
identifying a target part where the imaging probe is currently located;
controlling the imaging probe to acquire a white-light-like image of the target part in a white-light-like illumination mode;
wherein the similarity between the color tone of the background information in the white-light-like image and the color tone of the background information in the white-light image collected by the imaging probe at the target part in the white-light illumination mode is greater than or equal to a set similarity threshold;
and,
in the same image area, the contrast of the background information and the detail information in the white-light-like image under a blue channel is greater than the contrast of the background information and the detail information in the white-light image under the blue channel.
2. The endoscopic imaging method according to claim 1, wherein said set similarity threshold is greater than or equal to 80%.
3. The endoscopic imaging method according to claim 2, wherein a difference between R/G of background information of the white-light-like image and R/G of background information of the white-light image is within a first preset deviation range;
the difference value between the B/G of the background information of the white-light-like image and the B/G of the background information of the white-light image is within a second preset deviation range;
the R/G is the ratio of the red channel to the green channel; and the B/G is the ratio of the blue channel to the green channel.
4. The endoscopic imaging method according to claim 3, wherein said first preset deviation range is determined from 10% of the R/G of said white-light image and said second preset deviation range is determined from 10% of the B/G of said white-light image.
5. The endoscopic imaging method according to claim 1, wherein said controlling the imaging probe to acquire a white-light-like image of the target part in a white-light-like illumination mode comprises:
determining a first spectrum corresponding to the target part in the white-light-like illumination mode;
and controlling the imaging probe to acquire a white light-like image of the target part by using the first spectrum illumination.
6. The endoscopic imaging method according to claim 5, wherein said determining a first spectrum corresponding to the target part in the white-light-like illumination mode comprises:
querying a part-spectrum correspondence table to determine the first spectrum corresponding to the target part in the white-light-like illumination mode.
7. The endoscopic imaging method according to claim 1, wherein said identifying a target part where the imaging probe is currently located comprises:
acquiring an image currently acquired by the imaging probe;
and identifying a target part where the imaging probe is currently located based on the currently acquired image.
8. The endoscopic imaging method according to claim 7, wherein said identifying a target part where the imaging probe is currently located based on the currently acquired image comprises:
inputting the currently acquired image into a part recognition network obtained by pre-training;
and determining the target part where the imaging probe is currently located according to an output result of the part recognition network.
9. The endoscopic imaging method according to claim 8, further comprising, prior to said inputting the currently acquired image into the pre-trained part recognition network:
and preprocessing the currently acquired image according to a set processing mode, wherein the processing mode comprises at least one mode of demosaicing, black level correction, denoising, enhancement and compression.
10. The endoscopic imaging method according to claim 8, wherein the number of currently acquired images is greater than or equal to 3, and said determining the target part where the imaging probe is currently located according to the output result of the part recognition network comprises:
and if the output results of the part recognition network for N consecutive frames of images in the currently acquired images are the same, determining the part corresponding to the output result as the target part where the imaging probe is currently located, wherein N is greater than or equal to 3.
11. The endoscopic imaging method according to claim 8, wherein said part recognition network comprises a plurality of sub-recognition networks, and said determining the target part where the imaging probe is currently located according to the output result of the part recognition network comprises:
obtaining the output result of each sub-recognition network;
and determining the part corresponding to the output result that accounts for the largest proportion as the target part where the imaging probe is currently located.
12. The endoscopic imaging method according to any one of claims 1 to 11, further comprising:
if a suspicious lesion exists in the white-light-like image, switching to another special-light illumination mode;
determining a second spectrum corresponding to the target part in the other special-light illumination mode;
and controlling the imaging probe to acquire a special light image of the target part by using the second spectrum illumination.
13. An endoscopic imaging apparatus for use in an endoscopic imaging device including an imaging probe that is implantable in a body, the endoscopic imaging apparatus comprising:
the part identification module is used for identifying a target part where the imaging probe is located currently;
the image acquisition module is used for controlling the imaging probe to acquire a white-light-like image of the target part in a white-light-like illumination mode;
wherein the similarity between the color tone of the background information in the white-light-like image and the color tone of the background information in the white-light image collected by the imaging probe at the target part in the white-light illumination mode is greater than or equal to a set similarity threshold;
and,
in the same image area, the contrast of the background information and the detail information in the white-light-like image under a blue channel is greater than the contrast of the background information and the detail information in the white-light image under the blue channel.
14. An endoscopic imaging apparatus, comprising:
an imaging probe that is implantable in a body to acquire an image;
an illumination light source for providing illumination light to the imaging probe;
a memory for storing a computer program;
a processor communicatively connected to the imaging probe, the illumination light source and the memory, respectively, for implementing the endoscopic imaging method according to any one of claims 1 to 12 when executing the computer program.
15. A computer-readable storage medium, having stored thereon a computer program which, when executed by a processor, implements the endoscopic imaging method according to any one of claims 1 to 12.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110078260.0A CN112386213B (en) | 2021-01-20 | 2021-01-20 | Endoscopic imaging method, device, equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112386213A true CN112386213A (en) | 2021-02-23 |
CN112386213B CN112386213B (en) | 2021-05-14 |
Family
ID=74624959
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110078260.0A Active CN112386213B (en) | 2021-01-20 | 2021-01-20 | Endoscopic imaging method, device, equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112386213B (en) |
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106999028A (en) * | 2014-12-09 | 2017-08-01 | 索尼公司 | Lighting device, the method and image-taking system for controlling lighting device |
US20160223807A1 (en) * | 2015-01-29 | 2016-08-04 | Fujifilm Corporation | Light source device for endoscope, endoscope system, and method for operating light source device for endoscope |
CN106714660A (en) * | 2015-02-04 | 2017-05-24 | 奥林巴斯株式会社 | Endoscope device |
CN109497955A (en) * | 2018-12-18 | 2019-03-22 | 聚品(上海)生物科技有限公司 | Human body spontaneous fluorescent illumination excitation and image processing system and method |
CN211355358U (en) * | 2019-12-24 | 2020-08-28 | 广东欧谱曼迪科技有限公司 | Electronic soft lens with fluorescence and narrow-band spectral imaging technology |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113388500A (en) * | 2021-06-01 | 2021-09-14 | 南京大学 | Cell culture monitoring system and method capable of being used under microgravity |
CN113388500B (en) * | 2021-06-01 | 2024-03-12 | 南京大学 | Cell culture monitoring system and method applicable to microgravity |
CN117058435A (en) * | 2022-06-30 | 2023-11-14 | 深圳开立生物医疗科技股份有限公司 | Inspection part identification method and device, electronic equipment and storage medium |
CN117058435B (en) * | 2022-06-30 | 2024-05-17 | 深圳开立生物医疗科技股份有限公司 | Inspection part identification method and device, electronic equipment and storage medium |
CN116681681A (en) * | 2023-06-13 | 2023-09-01 | 富士胶片(中国)投资有限公司 | Endoscopic image processing method, device, user equipment and medium |
CN116681681B (en) * | 2023-06-13 | 2024-04-02 | 富士胶片(中国)投资有限公司 | Endoscopic image processing method, device, user equipment and medium |
Also Published As
Publication number | Publication date |
---|---|
CN112386213B (en) | 2021-05-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112386213B (en) | Endoscopic imaging method, device, equipment and storage medium | |
JP5271062B2 (en) | Endoscope apparatus and method of operating the same | |
EP3106079B1 (en) | Image capturing system and electronic endoscope system | |
US8657737B2 (en) | Electronic endoscope system, an electronic endoscope processor, and a method of acquiring blood vessel information | |
CN112218570B (en) | Image processing device, endoscope system, and image processing method | |
US10251538B2 (en) | Endoscope system and method for controlling the same | |
EP1576920B1 (en) | Imaging device | |
JP3586157B2 (en) | Subject observation device | |
WO2006025334A1 (en) | Endoscope | |
CN112105284B (en) | Image processing device, endoscope system, and image processing method | |
JP2012213612A (en) | Electronic endoscope system, and calibration method of the same | |
JP2002336196A (en) | Endoscopic equipment | |
CN113498323A (en) | Medical image processing device, processor device, endoscope system, medical image processing method, and program | |
JP7350954B2 (en) | Endoscopic image processing device, endoscope system, operating method of endoscopic image processing device, endoscopic image processing program, and storage medium | |
US11450079B2 (en) | Endoscopic image learning device, endoscopic image learning method, endoscopic image learning program, and endoscopic image recognition device | |
JP5766773B2 (en) | Endoscope system and method for operating endoscope system | |
CN110731748B (en) | Electronic endoscope | |
JP5631757B2 (en) | Electronic endoscope system | |
JP6926242B2 (en) | Electronic Endoscope Processor and Electronic Endoscope System | |
WO2020017211A1 (en) | Medical image learning device, medical image learning method, and program | |
CN118021243B (en) | Single-path double-spectrum real-time endoscope device based on depth network reconstruction | |
CN117338223B (en) | Endoscope device | |
CN118697263A (en) | Endoscope apparatus | |
WO2019220583A1 (en) | Endoscope device, endoscope device operation method, and program | |
CN117338223A (en) | Endoscope device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |