CN117689591A - White balance correction method, device and storage medium - Google Patents


Info

Publication number: CN117689591A
Application number: CN202211040871.7A
Authority: CN (China)
Prior art keywords: target, image, white balance, pixel point, gain
Legal status: Pending (the listed status is an assumption, not a legal conclusion)
Other languages: Chinese (zh)
Inventor: 贺光琳
Current and original assignee: Hangzhou Haikang Huiying Technology Co., Ltd.
Application filed by Hangzhou Haikang Huiying Technology Co., Ltd.

Landscapes

  • Endoscopes (AREA)

Abstract

The application discloses a white balance correction method, device, and storage medium, relating to the technical field of image processing and used for performing white balance correction on images captured by an endoscope and improving the color accuracy of the images. The method comprises: acquiring a target image, where the target image is an image in an image sequence captured by the endoscope; identifying a target object in the target image, where the target object is a medical auxiliary tool whose color is white or gray; determining a target white balance parameter according to pixel information of the target area where the target object is located; and performing white balance correction on images in the image sequence captured by the endoscope according to the target white balance parameter.

Description

White balance correction method, device and storage medium
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to a white balance correction method, apparatus, and storage medium.
Background
So far, the endoscope is the most convenient, direct, and effective medical instrument for medical staff to observe pathological tissue inside the human body, offering high image definition, vivid color, and ease of operation.
Image definition and color fidelity depend largely on the image sensor, and whether the sensor renders color correctly is determined by factors such as the light source and the RGB gain settings. Because of the processing mechanisms of the human brain, a white target appears nearly white to a human observer under almost any light source; for a camera to achieve the same effect, i.e., for the captured image to match what the human eye actually sees, white balance correction is needed.
In the related art, the white balance mode adopted by endoscopes is in-vitro manual white balance: before an operation, the lens of the endoscope is first aimed at a white card (or another white object), white balance correction is performed, and the resulting white balance parameters are stored before the endoscope is used inside the body. On one hand, in-vitro manual white balance correction requires the user to operate keys manually, which is inconvenient, and the user can easily skip this step; on the other hand, since the in-vivo environment differs from the in-vitro environment (the in-vivo environment is reddish), white balance parameters determined by performing correction in the in-vitro environment are not suitable for the in-vivo environment.
Therefore, a white balance correction method for endoscopes that improves the color accuracy of images is needed.
Disclosure of Invention
The present application provides a white balance correction method, device, and storage medium for performing white balance correction on images captured by an endoscope and improving the color accuracy of those images.
In a first aspect, the present application provides a white balance correction method applied to an endoscope. The method comprises: acquiring a target image, where the target image is an image in an image sequence captured by the endoscope; identifying a target object in the target image, where the target object is a medical auxiliary tool whose own color is white or gray; determining a target white balance parameter according to pixel information of the target area where the target object is located; and performing white balance correction on images in the image sequence captured by the endoscope according to the target white balance parameter.
The technical scheme provided by the present application yields at least the following beneficial effects. A target image (an image in an image sequence captured by the endoscope) is acquired, and a target object is identified in it; the target object is white or gray in color and may be, for example, a medical auxiliary tool such as gauze, a bandage, or a scalpel. A target white balance parameter is then determined according to the pixel information of the target object in the target image, and white balance correction is performed on images in the image sequence captured by the endoscope according to that parameter. Compared with the related art, in which the user performs white balance correction in vitro, the embodiments of the present application identify a white or gray target object (such as gauze, a bandage, or a scalpel) in an image captured in vivo by the endoscope, determine the white balance parameters according to the pixel information of that object, and perform white balance correction. On one hand, the whole correction process is automatic and requires no human intervention, which reduces the user's workload; on the other hand, it avoids the inaccurate correction caused by the change of environment from outside the body to inside it (for example, the body cavity has a color cast, so white balance parameters determined in vitro are not applicable in vivo), effectively improving the color accuracy of images captured by the endoscope inside the body.
In one possible implementation, the pixel information includes the gains of the color channels of the pixel points, and determining the target white balance parameter according to the pixel information of the target area where the target object is located comprises: determining the target white balance parameter according to the gains of the color channels of the pixel points in the target area.
In another possible implementation, the pixel points in the target area include a target pixel point, where the target pixel point is any pixel point in the target area. The method further comprises: determining the saturation of the target pixel point according to the color component values of its color channels; determining a weight value of the target pixel point according to its saturation, where the weight value reflects the probability that the saturation of the target pixel point occurs in the target area; and determining the gains of the color channels of the pixel points in the target area according to the color component values of the color channels of the target pixel point and the weight value of the target pixel point.
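The saturation-weighting steps above can be sketched as follows. This is a minimal illustration, not the patent's implementation: it assumes an HSV-style saturation definition, a histogram as the estimate of how probable each saturation value is within the target area, and gray-world-style gains referenced to the G channel; the helper names `saturation`, `saturation_weights`, and `region_gains` are hypothetical.

```python
import numpy as np

def saturation(pixels):
    # HSV-style saturation per pixel: (max - min) / max over the channels.
    mx = pixels.max(axis=-1).astype(np.float64)
    mn = pixels.min(axis=-1).astype(np.float64)
    return np.where(mx > 0, (mx - mn) / mx, 0.0)

def saturation_weights(sat, bins=32):
    # Weight = relative frequency of each pixel's saturation within the
    # region, estimated with a histogram (one plausible reading of
    # "probability that the saturation occurs in the target area").
    hist, edges = np.histogram(sat, bins=bins, range=(0.0, 1.0))
    prob = hist / hist.sum()
    idx = np.clip(np.digitize(sat, edges[1:-1]), 0, bins - 1)
    return prob[idx]

def region_gains(region):
    # region: (N, 3) float array of RGB pixels from the target area.
    sat = saturation(region)
    w = saturation_weights(sat)
    r, g, b = (np.average(region[:, c], weights=w) for c in range(3))
    # Gains referenced to the G channel -- an assumption here, since the
    # text does not state the exact formula.
    return g / r, g / b  # (gain for R channel, gain for B channel)
```

For a uniformly reddish region the weighted averages reduce to plain channel means, and the R gain comes out below 1, pulling the red cast down.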
In another possible implementation, the color channels of a pixel point include an R channel and a B channel, and the target white balance parameters include a first gain and a second gain; the first gain is used for correcting the color components of the R channel of an image, and the second gain is used for correcting the color components of the B channel. Determining the target white balance parameter according to the gains of the color channels of the pixel points in the target area comprises: taking the gain of the R channel of the pixel points in the target area as the first gain, and taking the gain of the B channel of the pixel points in the target area as the second gain.
In another possible implementation, performing white balance correction on the images in the image sequence captured by the endoscope according to the target white balance parameter comprises: multiplying the color component value of the R channel of each pixel point in the image to be corrected by the first gain and the color component value of its B channel by the second gain, thereby performing white balance correction on the image to be corrected; the image to be corrected is any image in the image sequence captured by the endoscope.
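Applying the two gains as described is a per-pixel multiply on the R and B channels; a minimal sketch, assuming an 8-bit RGB image with channel order R, G, B (the function name and the clipping behavior are choices made here, not taken from the text):

```python
import numpy as np

def apply_white_balance(img, first_gain, second_gain):
    # img: (H, W, 3) uint8 RGB image. Multiply the R channel by the first
    # gain and the B channel by the second gain, then clip back to 8 bits.
    out = img.astype(np.float64)
    out[..., 0] *= first_gain   # R channel
    out[..., 2] *= second_gain  # B channel
    return np.clip(np.rint(out), 0, 255).astype(np.uint8)
```

The same call would be repeated for every image to be corrected in the sequence.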
In another possible implementation, identifying the target object in the target image comprises: identifying an undyed object from the target image as the target object.
In another possible implementation, identifying the target object in the target image comprises: performing an image signal processing (ISP) operation on the target image to obtain a processed target image, and identifying the target object from the processed target image.
In a second aspect, the present application provides a white balance correction device, for use with an endoscope, comprising: the acquisition module is used for acquiring a target image; the target image is an image in an image sequence shot by the endoscope; the identification module is used for identifying a target object in the target image; the target object is a medical auxiliary tool, and the color of the target object is white or gray; the determining module is used for determining a target white balance parameter according to the pixel information of the target area where the target object is located; and the correction module is used for carrying out white balance correction on images in the image sequence shot by the endoscope according to the target white balance parameter.
In one possible implementation manner, the pixel information includes a gain of a color channel of the pixel point, and the determining module is specifically configured to determine the target white balance parameter according to the gain of the color channel of the pixel point in the target area.
In another possible implementation, the pixel points in the target area include: a target pixel point; the target pixel point is any pixel point in the target area; the determining module is further configured to determine saturation of the target pixel according to a color component value of a color channel of the target pixel; determining a weight value of the target pixel point according to the saturation of the target pixel point; the weight value is used for reflecting the probability that the saturation of the target pixel point occurs in the target area; and determining the gain of the color channel of the pixel point in the target area according to the color component value of the color channel of the target pixel point and the weight value of the target pixel point.
In another possible implementation manner, the color channel of the pixel includes: an R channel and a B channel; the target white balance parameters include: a first gain and a second gain; the first gain is used for correcting the color component of the R channel of the image; the second gain is used for correcting the color component of the B channel of the image; the determining module is specifically configured to take a gain of an R channel of a pixel point in the target area as a first gain; the gain of the B channel of the pixel point in the target area is taken as a second gain.
In another possible implementation, the correction module is specifically configured to multiply the color component value of the R channel of each pixel point in the image to be corrected by the first gain and the color component value of its B channel by the second gain, thereby performing white balance correction on the image to be corrected; the image to be corrected is any image in the image sequence captured by the endoscope.
In another possible implementation manner, the identifying module is specifically configured to identify an undyed object from the target image as the target object.
In another possible implementation manner, the identification module is specifically configured to perform an ISP operation on the target image to obtain a processed target image; and identifying the target object from the processed target image.
In a third aspect, the present application provides a white balance correction device, including: one or more processors; one or more memories; wherein the one or more memories are configured to store computer program code comprising computer instructions that, when executed by the one or more processors, cause the white balance correction apparatus to perform any of the white balance correction methods provided in the first aspect above.
In a fourth aspect, the present application provides a computer-readable storage medium storing computer-executable instructions that, when run on a computer, cause the computer to perform any one of the white balance correction methods provided in the first aspect above.
For the descriptions of the second to fourth aspects, refer to the detailed description of the first aspect; likewise, for their advantageous effects, refer to the analysis of the advantageous effects of the first aspect, which is not repeated here.
Drawings
FIG. 1 is a schematic view of an endoscope system according to an embodiment of the present application;
FIG. 2 is a first flowchart of a white balance correction method according to an embodiment of the present application;
FIG. 3 is a schematic diagram of an image processing flow according to an embodiment of the present application;
FIG. 4 is a second flowchart of a white balance correction method according to an embodiment of the present application;
FIG. 5 is a schematic diagram of a target object recognition model according to an embodiment of the present application;
FIG. 6 is a schematic diagram of a target area according to an embodiment of the present application;
FIG. 7 is a schematic diagram of an image block according to an embodiment of the present application;
FIG. 8 is a first flowchart of a training method of a target object recognition model according to an embodiment of the present application;
FIG. 9 is a schematic diagram of sample image calibration according to an embodiment of the present application;
FIG. 10 is a second flowchart of a training method of a target object recognition model according to an embodiment of the present application;
FIG. 11 is a third flowchart of a training method of a target object recognition model according to an embodiment of the present application;
FIG. 12 is a first schematic structural diagram of a white balance correction device according to an embodiment of the present application;
FIG. 13 is a second schematic structural diagram of a white balance correction device according to an embodiment of the present application.
Detailed Description
The term "and/or" herein merely describes an association relationship between associated objects, indicating that three relationships may exist; for example, A and/or B may represent: A exists alone, A and B exist together, or B exists alone.
The terms "first" and "second" and the like in the description and in the drawings are used for distinguishing between different objects, or between different processes of the same object, and not for describing a particular sequential order of objects.
Furthermore, the terms "comprising" and "having" and any variations thereof in the description of the present application are intended to cover a non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to those listed, but may optionally include other steps or elements that are not listed or that are inherent to such a process, method, article, or apparatus.
It should be noted that, in the embodiments of the present application, words such as "exemplary" or "such as" are used to mean serving as an example, instance, or illustration. Any embodiment or design described herein as "exemplary" or "for example" should not be construed as preferred or advantageous over other embodiments or designs. Rather, the use of words such as "exemplary" or "such as" is intended to present related concepts in a concrete fashion.
In the description of the present application, unless otherwise indicated, the meaning of "a plurality" means two or more.
As described in the background, the endoscope is so far the most convenient, direct, and effective medical instrument for medical staff to observe pathological tissue inside the human body, offering high image definition, vivid color, and ease of operation.
Image definition and color fidelity depend largely on the image sensor, and whether the sensor renders color correctly is determined by factors such as the light source and the RGB gain settings. Because of the processing mechanisms of the human brain, a white target appears nearly white to a human observer under almost any light source; for a camera to achieve the same effect, i.e., for the captured image to match what the human eye actually sees, white balance correction is needed.
In the related art, the white balance mode adopted by endoscopes is in-vitro manual white balance: before an operation, the lens of the endoscope is first aimed at a white card (or another white object), white balance correction is performed, and the resulting white balance parameters are stored before the endoscope is used inside the body.
However, the related art has the following problems. On one hand, in-vitro manual white balance correction requires the user to operate keys manually, which is inconvenient, and the user can easily skip this step; on the other hand, since the in-vivo environment differs from the in-vitro environment (the in-vivo environment is reddish), white balance parameters determined by performing correction in the in-vitro environment are not suitable for the in-vivo environment.
Therefore, a white balance correction method for endoscopes that improves the color accuracy of images is needed.
In view of these technical problems, an embodiment of the present application provides a white balance correction method. A target image (an image in an image sequence captured by the endoscope) is acquired, and a target object is identified in it; the target object is white or gray in color and may be, for example, a medical auxiliary tool such as gauze, a bandage, or a scalpel. A target white balance parameter is then determined according to the pixel information of the target object in the target image, and white balance correction is performed on images in the image sequence captured by the endoscope according to that parameter. Compared with the related art, in which the user determines the white balance parameters by performing white balance correction in the in-vitro environment, in the embodiments of the present application the endoscope identifies a white or gray target object (such as gauze, a bandage, or a scalpel) in an image captured in vivo, determines the white balance parameters according to the pixel information of that object, and performs white balance correction. On one hand, the whole correction process is automatic and requires no human intervention, which reduces the user's workload; on the other hand, it avoids the inaccurate correction caused by the change between in-vivo and in-vitro environments (for example, the body cavity has a color cast, so white balance parameters determined in vitro are not applicable in vivo), effectively improving the color accuracy of images captured by the endoscope inside the body.
The embodiments provided in the present application are specifically described below with reference to the drawings attached to the specification.
Referring to fig. 1, an endoscope system 100 includes: an endoscope 101, a light source device 102, an imaging system main unit 103, and a display device 104.
An endoscope 101 for acquiring image information of a subject.
Optionally, the endoscope 101 may be inserted into a living body to photograph a site to be observed inside the body. The endoscope 101 may transmit the captured image information to the display device 104 so that the subject's image information is displayed in real time, and may also transmit the captured images to the camera system host 103 for further processing.
The light source device 102 is connected to the endoscope 101 for emitting illumination light so that the endoscope 101 captures a clear image. Alternatively, the light source device 102 may emit white light for visible light imaging. Alternatively, the light source device 102 may emit excitation light such that when an excitation light reagent (e.g., a fluorescent developing reagent) is dispersed or injected into a target site of a living body, the target site generates fluorescence.
The camera system host 103 is configured to receive images transmitted by the endoscope 101, process them, and transmit the processed images to the display device 104. In addition, the camera system host 103 executes the white balance correction method provided in the present application, thereby resolving the color cast of images of the body cavity and improving the color accuracy of the images.
Optionally, the camera system host 103 also controls the entire endoscope system 100; for example, it controls the endoscope 101 to transmit the acquired images to the camera system host 103, or controls the light source device 102 to turn its light source on or off. It should be appreciated that the camera system host 103 may generate operation control signals based on instruction operation codes and timing signals, so as to control the endoscope system 100 to execute the corresponding instructions. The camera system host 103 may itself have image processing functionality, or it may be integrated with other devices that do.
It should be understood that the camera system host 103 shown in fig. 1 is merely an example, and the specific form of the camera system host 103 is not limited in the present application. Illustratively, the camera system host 103 may be a server; alternatively, it may be a central processing unit (CPU), a graphics processing unit (GPU), a general-purpose processor, a network processor (NP), a digital signal processor (DSP), a microprocessor, a microcontroller, a programmable logic device (PLD), or any combination thereof. The camera system host 103 may also be another device with processing functions, such as a circuit, a device, or a software module, which is not limited in this application.
The display device 104 is configured to receive the processed image information transmitted by the camera system host 103 and display it.
Optionally, the display device 104 may also directly display the image information of the subject acquired by the endoscope 101.
The display device 104 may be, for example, a liquid crystal display or an organic light-emitting diode (OLED) display. The specific type, size, and resolution of the display device 104 are not limited; those skilled in the art will appreciate that its performance and configuration may be adjusted as needed.
Although not shown in fig. 1, the endoscope system 100 may further include a power supply device (such as a battery and a power management chip) that supplies power to the respective components, and the battery may be logically connected to the image pickup system host 103 through the power management chip, thereby performing functions such as power consumption management of the endoscope system 100 through the power supply device.
The white balance correction method provided by the embodiments of the present application applies to scenes in which images are acquired with an endoscope system, for example, performing white balance correction in real time during image acquisition. The method may also be applied to other white balance correction scenes, which are not enumerated here.
The execution body of the white balance correction method provided in the present application is not limited; for example, the method may be executed by the endoscope system itself, by the camera system host, by an external device, or by any device having image processing and instruction control functions. The following embodiments describe the method as performed by a computer device.
The white balance correction method provided in the present application is specifically described below with reference to the accompanying drawings.
The embodiment of the application provides a white balance correction method which can be executed by an image pickup system host as shown in fig. 1. As shown in fig. 2, the method comprises the steps of:
s101, acquiring a target image.
In some embodiments, the target image is an image in a sequence of images captured by the endoscope (for example, an image in a sequence captured by the endoscope inside the body, or an image in a sequence captured while the endoscope passes from outside the body into the body). Here, the image sequence captured by the endoscope refers to the sequence of images in a video stream captured by the endoscope (the video stream may consist of one or more video sequences).
In some embodiments, the camera system host acquires the target image in real time. For example, during an operation or a physical examination, if a doctor needs to observe an ulcer or tumor of the subject's internal tissue in real time by means of the endoscope system, the camera system host acquires the video stream captured by the endoscope inside the body in real time and extracts the target image from it.
In other embodiments, the camera system host acquires the target image from a storage device of the endoscope system. For example, outside of surgery, if a doctor needs to repeatedly watch or study a video stream captured by the endoscope inside the body, so as to understand the pathological condition of the subject's internal tissue more clearly and formulate a corresponding surgical plan, the camera system host obtains the video stream from the storage device and extracts the target image from it.
S102, identifying a target object in the target image.
Wherein the target object is a medical aid and the color of the target object itself is white or gray. Illustratively, the target object includes, but is not limited to, gauze, bandage, cotton ball, surgical knife, forceps, scissors or needle, and the specific type of target object may be set according to the actual detection scenario.
It should be noted that, in the embodiments of the present application, the target object is an object whose own color is known in advance to be white or gray (a specific medical auxiliary tool), not an arbitrary object that merely appears white in the image.
In some embodiments, the step S102 may be implemented as: a target object is identified from the target image. I.e. the identification of the target object based on the target image is performed directly.
In other embodiments, the step S102 may be further implemented as: performing an image signal processing (ISP) operation on the target image to obtain a processed target image, and identifying the target object from the processed target image.
Exemplarily, as shown in fig. 3, ISP operations include black-level correction, dead-pixel correction, white balance correction, color interpolation (demosaicing), gamma correction, color correction, RGB-to-YUV conversion, noise reduction, sharpening, and the like.
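The listed stages can be pictured as a chain of per-stage functions applied in order. The sketch below fills in only two stages, black-level subtraction and gamma correction, with placeholder parameter values (offset 16, gamma 2.2) that are illustrative assumptions, not values from the text:

```python
import numpy as np

def black_level(img, offset=16.0):
    # Subtract the sensor black level; the offset is a placeholder value.
    return np.clip(img - offset, 0.0, None)

def gamma_correct(img, gamma=2.2, max_val=255.0):
    # Standard power-law gamma encoding.
    return max_val * (img / max_val) ** (1.0 / gamma)

def isp_pipeline(raw, stages):
    # Apply the stages of fig. 3 in order; dead-pixel correction, white
    # balance, demosaicing, color correction, RGB-to-YUV conversion, noise
    # reduction, and sharpening would slot into the same chain.
    out = raw.astype(np.float64)
    for stage in stages:
        out = stage(out)
    return out
```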
It can be understood that performing an ISP operation on the target image enhances image quality and makes the processed target image clearer; detecting the target object from the processed target image with the target object recognition model therefore improves detection precision, and the detected position information of the target object is more accurate.
In some embodiments, as shown in fig. 4, the step S102 may be implemented as follows:
s1021, detecting whether a target object exists in the target image based on the target image and the target object identification model.
The target object recognition model is used for recognizing a target object from the target image and determining position information of the target object in the target image.
In some embodiments, the target object recognition model may be any type of target recognition network based on a deep learning algorithm. In a possible implementation manner, the target object is gauze, and the target object identification model is a gauze identification model; the inputting the target image into the target object recognition model, detecting whether the target object exists in the target image, includes: inputting the target image into a gauze recognition model, and detecting whether gauze exists in the target image.
In some embodiments, the target object recognition model includes: a feature extraction network and a detection head network. The feature extraction network is used for extracting feature information of the target image, and the detection head network is used for performing operations such as pooling and regression on the feature information of the target image, so as to determine whether a target object exists in the target image, and if the target object exists in the target image, determine location information of a target area where the target object exists (i.e. execute the following step S1022).
For example, the target object recognition model may be as shown in fig. 5, in which a region proposal network (RPN) generates multiple object candidate boxes based on the feature information of the target image, and region-of-interest (ROI) pooling pools each candidate box to determine whether a target object is present in the target image and, if so, the target area in which it is located.
In some embodiments, the step S1021 may be implemented as: inputting the target image into the target object recognition model, and detecting whether the target object exists in the target image.
In other embodiments, the step S1021 may be implemented as: inputting the processed target image (i.e., the target image subjected to ISP operations) into the target object recognition model, and detecting whether a target object exists in the target image.
S1022, when the target object exists in the target image, the target object recognition model outputs the position information of the target area where the target object is located.
In some embodiments, the target region is the region where the minimum bounding rectangle of the target object is located; alternatively, the target region is the region enclosed by the contour line of the target object's edge. It can be appreciated that the white balance correction method provided in the embodiments of the present application determines the white balance parameters of the target image from white or gray medical auxiliary tools (such as gauze, bandages, scalpels, etc.) commonly used during surgery. Because most cavity tissues in the human body are red, if the target region covers too large a range (i.e., includes not only the target object but also other areas beyond it), the white balance parameters cannot be accurately determined from the target region. Therefore, the embodiments of the present application take the region of the minimum bounding rectangle of the target object, or the region enclosed by the contour line of the target object's edge, as the target region.
The position information of the target area where the target object is located may be, for example, the coordinates of the minimum bounding rectangle of the target object. For example, as shown in fig. 6, if the target object is gauze, the target image is input into the gauze recognition model and two pieces of gauze are detected in the target image, where the position information of the target area of the first piece of gauze is {(x1, y1), (x2, y2)} and the position information of the target area of the second piece of gauze is {(x3, y3), (x4, y4)}.
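As a hypothetical sketch of how the detector's output might be consumed downstream, the snippet below crops the target region out of an image given the minimum-bounding-rectangle coordinates in the {(x1, y1), (x2, y2)} format above; the image layout (row-major lists of (r, g, b) tuples) and the function name are illustrative assumptions, not part of the patent.

```python
# Hypothetical sketch: extracting the pixels of a detected target region
# from its minimum bounding rectangle {(x1, y1), (x2, y2)}.

def crop_region(image, top_left, bottom_right):
    """Return the pixels inside the minimum bounding rectangle (inclusive)."""
    (x1, y1), (x2, y2) = top_left, bottom_right
    return [row[x1:x2 + 1] for row in image[y1:y2 + 1]]

# A tiny 4x4 stand-in image; each pixel is an (r, g, b) tuple.
image = [[(x * 10, y * 10, 128) for x in range(4)] for y in range(4)]
region = crop_region(image, (1, 1), (2, 2))   # a 2x2 target region
```

The cropped `region` would then be the only area fed into the white balance parameter calculation of step S103.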
In some embodiments, in the case where no target object is present in the target image, the imaging system host returns to step S101 to reselect an image from the image sequence captured by the endoscope in the body as the target image.
S103, determining a target white balance parameter according to pixel information of a target area where the target object is located.
In some embodiments, in the case that the target area is the region where the minimum bounding rectangle of the target object is located, the pixel information of the target area is the pixel information within that minimum bounding rectangle; in the case that the target area is the region enclosed by the contour line of the target object's edge, the pixel information of the target area is the pixel information within that contour line.
In some embodiments, the pixel information of the target region includes: the gain of the color channels of the pixel points within the target area. The above step S103 may then be implemented as: determining the target white balance parameter according to the gain of the color channels of the pixel points in the target area.
The color channels of a pixel point include: the R channel, the G channel, and the B channel.
In some embodiments, the gain of the color channels of the pixel points in the target area may be determined as follows:
Step a1, determining the saturation of the target pixel point according to the color component values of the color channels of the target pixel point.
The color component values of the color channels of a pixel point include: the color component values of the R channel, the G channel, and the B channel.
In some embodiments, the target pixel is any one of the pixels in the target region of the target image.
In other embodiments, the target image is segmented into a plurality of image blocks and each image block is regarded as one pixel point, so that the target pixel point is any image block in the target area of the target image. The values of the R-channel, G-channel, and B-channel color components of such a target pixel point are the averages of the R-channel, G-channel, and B-channel color components, respectively, of all the pixel points in the corresponding image block.
As illustrated in fig. 7, the target image is divided into 15×13 image blocks according to its resolution. Assuming that the target area contains 9 image blocks and each image block is regarded as one pixel point, the target area contains 9 target pixel points, where the values of the R-channel, G-channel, and B-channel color components of target pixel point 1 are the averages of the R-channel, G-channel, and B-channel color components of all the pixel points in image block 1.
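The block-as-pixel scheme above can be sketched as follows; the block size, image layout, and function name are illustrative assumptions:

```python
# Sketch of treating an image block as one "pixel": the block's R/G/B
# values are the per-channel averages over all pixels it contains.

def block_average(image, bx, by, bw, bh):
    """Average R, G, B over the block whose top-left corner is (bx, by)."""
    pixels = [image[y][x] for y in range(by, by + bh)
                          for x in range(bx, bx + bw)]
    n = len(pixels)
    return tuple(sum(p[c] for p in pixels) / n for c in range(3))

image = [[(100, 50, 25)] * 4 for _ in range(4)]   # uniform test image
image[0][0] = (200, 50, 25)                        # one outlier pixel
r, g, b = block_average(image, 0, 0, 2, 2)         # a 2x2 block
# Averaging damps the single-pixel outlier: r becomes 125 instead of 200.
```

This illustrates the point made below: averaging over a block reduces the influence of any single abnormal pixel on the white balance calculation.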
It can be understood that if the target pixel point is any single pixel point in the target area of the target image, the number of target pixel points is larger, so the white balance parameter determined from them may better reflect the real situation. If the target pixel point is an image block in the target area of the target image, the values of its three-channel color components are the averages of the three-channel color components of all the pixel points in that block, which reduces the influence of any single abnormal pixel point on the calculation of the white balance parameter and improves its accuracy.
In some embodiments, the saturation of the target pixel point is determined from a minimum of the values of the three-channel color components of the target pixel point and an average of the values of the three-channel color components of the target pixel point.
Illustratively, the saturation of the target pixel point may satisfy the following formula (1):

saturation(i, j) = MIN(r(i, j), g(i, j), b(i, j)) / [(r(i, j) + g(i, j) + b(i, j)) / 3]  formula (1)

wherein (i, j) represents the pixel coordinates of the target pixel point in the target image, i ∈ [1, row], j ∈ [1, col], row is the height of the target image, col is the width of the target image, saturation(i, j) represents the saturation of the target pixel point, MIN is the minimum function, r(i, j) represents the R-channel color component value of the target pixel point, g(i, j) represents the G-channel color component value of the target pixel point, and b(i, j) represents the B-channel color component value of the target pixel point.
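Assuming formula (1) takes the form suggested by the description, the minimum of the three channel values divided by their average, a minimal sketch is the following; the exact form of the patent's formula is an inference from the surrounding text:

```python
# Assumed form of formula (1): min channel value over the channel average.
# This measure approaches 1 for gray/white pixels and 0 for saturated ones.

def saturation(r, g, b):
    avg = (r + g + b) / 3
    return min(r, g, b) / avg if avg else 0.0

s_gray = saturation(128, 128, 128)   # pure gray: close to 1
s_red = saturation(200, 10, 10)      # strongly colored: close to 0
```

Under this reading, the measure is large precisely for the gray or white pixels that should dominate the weight values in step a2.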
Step a2, determining the weight value of the target pixel point according to the saturation of the target pixel point.
The weight value of the target pixel point is used for reflecting the probability that the saturation of the target pixel point occurs in the target area.
In some embodiments, the weight value of the target pixel point is determined from the product of the saturation of the target pixel point and the initial weight value of the target pixel point. The initial weight value of the target pixel point is determined by the color temperature of the target pixel point.
Illustratively, the weight value of the target pixel point satisfies the following formula (2):
weight(i, j) = saturation(i, j) × p(i, j)  formula (2)
Wherein weight(i, j) represents the weight value of the target pixel point, and p(i, j) represents the initial weight value of the target pixel point.
Step a3, determining the gain of the color channels of the pixel points in the target area according to the color component values of the color channels of the target pixel point and the weight value of the target pixel point.
In some embodiments, the gain of the R channel of the pixel points in the target region is determined from the weight value of the target pixel point (the target pixel point being any pixel point in the target region), the value of the R-channel color component of the target pixel point, and the value of the G-channel color component of the target pixel point. Illustratively, the gain of the R channel of the pixel points in the target area satisfies the following formula (3):

gain_r = Σ(i, j)[weight(i, j) × g(i, j)] / Σ(i, j)[weight(i, j) × r(i, j)]  formula (3)

where gain_r represents the gain of the R channel of the pixel points in the target area, and the summations run over all target pixel points (i, j) in the target area.
In some embodiments, the gain of the B channel of the pixel points in the target region is determined from the weight value of the target pixel point, the value of the B-channel color component of the target pixel point, and the value of the G-channel color component of the target pixel point. Illustratively, the gain of the B channel of the pixel points in the target area may satisfy the following formula (4):

gain_b = Σ(i, j)[weight(i, j) × g(i, j)] / Σ(i, j)[weight(i, j) × b(i, j)]  formula (4)

where gain_b represents the gain of the B channel of the pixel points in the target area.
In some embodiments, the gain of the color channel of the pixel point in the target area further comprises: gain of G channel of pixel point in target area. As one possible implementation, the gain (gain_g) of the G channel of the pixel point in the target area is constant 1, i.e., gain_g=1.
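A minimal sketch of how the channel gains might be computed from per-pixel weights, assuming a weighted gray-world form (weighted G sums divided by weighted R or B sums, with the G gain held at 1); the function and variable names are illustrative, not the patent's:

```python
# Assumed weighted-gray-world gains: weighted G sum over weighted R or B sum.

def channel_gains(pixels, weights):
    """pixels: list of (r, g, b); weights: matching list of weight values."""
    wr = sum(w * r for (r, g, b), w in zip(pixels, weights))
    wg = sum(w * g for (r, g, b), w in zip(pixels, weights))
    wb = sum(w * b for (r, g, b), w in zip(pixels, weights))
    gain_r = wg / wr          # assumed form of formula (3)
    gain_b = wg / wb          # assumed form of formula (4)
    gain_g = 1.0              # G channel left unchanged
    return gain_r, gain_g, gain_b

# A reddish-tinted "gray" patch: R is inflated, B is depressed.
pixels = [(160, 128, 96)] * 4
weights = [1.0, 1.0, 1.0, 1.0]
gain_r, gain_g, gain_b = channel_gains(pixels, weights)
# gain_r pulls R down toward G; gain_b lifts B up toward G.
```

With gains of this form, multiplying a gray object's R and B components by gain_r and gain_b brings all three channels to the same level, which is exactly the correction applied in step S104.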
In some embodiments, the target white balance parameters include: a first gain and a second gain. Wherein the first gain is used to correct the color component of the R channel of the image; the second gain is used to correct the color component of the B-channel of the image.
As a possible implementation manner, determining the target white balance parameter according to the gain of the color channels of the pixel points in the target area may be implemented as: taking the gain of the R channel of the pixel points in the target area as the first gain, i.e., the first gain satisfies formula (3); and taking the gain of the B channel of the pixel points in the target area as the second gain, i.e., the second gain satisfies formula (4).
In other embodiments, the target white balance parameters may further include: a third gain; the third gain is used to correct the color component of the G channel of the image. Alternatively, the third gain may be determined by the gain of the G channel of the pixel point in the target area, i.e., the third gain satisfies gain_g=1.
It can be understood that the method for calculating the target white balance parameter provided by the embodiments of the present application adopts saturation processing and weighted-value estimation, assigning a probability weight value to each target pixel point in the target area: a target pixel point with a large weight value is likely to be a gray or white pixel point, whereas a target pixel point with a small weight value is unlikely to be one. The target white balance parameter is then determined from the weight values of the target pixel points and the values of their three-channel color components. Performing white balance correction with the target white balance parameter provided by the embodiments of the present application can therefore improve the accuracy of the white balance correction.
S104, performing white balance correction on images in the image sequence shot by the endoscope according to the target white balance parameters.
In some embodiments, performing white balance correction on the images in the image sequence captured by the endoscope according to the target white balance parameter includes: performing white balance correction on any one or more images in the image sequence captured by the endoscope according to the target white balance parameter; alternatively, performing white balance correction, according to the target white balance parameter, on the other images in the image sequence that follow the target image in time order.
It will be appreciated that the imaging environments of the images in the image sequence are the same (e.g., the images in the image sequence are all images captured by the endoscope in vivo). Therefore, the target white balance parameter determined according to the three-channel gain of the target image can be used for white balance correction of images in the image sequence captured by the endoscope in vivo.
In some embodiments, the step S104 may be implemented as: multiplying the color component value of the R channel of each pixel point in the image to be corrected by the first gain and the color component value of the B channel by the second gain, thereby performing white balance correction, while keeping the color component value of the G channel of each pixel point unchanged. The image to be corrected is any image in the image sequence captured by the endoscope.
Illustratively, assume that the image sequence captured by the endoscope in the body includes a first image, a second image, a third image, and a fourth image, where the target image is the first image and the second, third, and fourth images are images to be corrected. Based on the above steps S101-S103, the first gain and the second gain are determined as the target white balance parameters according to the pixel information of the target area of the first image; then the color component value of the R channel of each pixel point in the second, third, and fourth images is multiplied by the first gain and the color component value of the B channel by the second gain, thereby performing white balance correction, while the color component value of the G channel remains unchanged.
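Step S104 can be sketched as follows, assuming 8-bit RGB pixels stored as (r, g, b) tuples; the gain values are made up for illustration:

```python
# Sketch of step S104: multiply each pixel's R component by the first gain
# and its B component by the second gain, leaving G unchanged; clamp to 255.

def apply_white_balance(image, gain_r, gain_b):
    return [[(min(255, round(r * gain_r)), g, min(255, round(b * gain_b)))
             for (r, g, b) in row] for row in image]

# Two reddish-tinted pixels corrected with illustrative gains.
image = [[(160, 128, 96), (80, 64, 48)]]
corrected = apply_white_balance(image, 0.8, 128 / 96)
# Each corrected pixel ends up neutral: R, G, and B are equal.
```

The same pair of gains would be reused for every image to be corrected in the sequence, since the shooting environment of the sequence is the same.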
In some embodiments, identifying the target object in the target image in the step S102 includes: an undyed object is identified from the target image as a target object.
Wherein, the undyed object is: undyed gauze, bandages, cotton balls, surgical knives, forceps, scissors or needles.
As a possible implementation manner, whether an object is dyed may be determined as follows: a white balance parameter is determined from the pixel information of the pixel points in the region where the object is located, in the manner of step S103 described above, and whether the object is dyed is then judged from that white balance parameter. The white balance parameter determined from the pixel information of the pixel points in the region where the object is located includes: the gain for adjusting the color component values of the R channel and the gain for adjusting the color component values of the B channel.
For example, whether an object is dyed may be determined according to the gain for adjusting the color component values of the R channel. For example, in the case where this gain is greater than a preset threshold, it is determined that the object is not dyed.
The preset threshold may be determined from the R-channel gain of the endoscope obtained by white balance correction in an in-vitro environment. Illustratively, when the endoscope is aimed at a white object in the external environment, the user adjusts the three-channel gains of the image sensor or of the displayed image until the displayed image appears white; the three-channel gains at this point are the white balance parameters. Assuming that the R-channel gain determined by this in-vitro white balance correction is r1, the preset threshold is r1.
It will be appreciated that, since in an in-vivo environment an object is most likely to be stained red by blood, the embodiments of the present application use the gain for adjusting the color component values of the R channel to determine whether the object is dyed. This gain is determined by the R-channel gain of the pixel points in the region where the object is located: if the object is not dyed, i.e., the object is white or gray, the R-channel color component values of those pixel points are smaller and the R-channel gain is larger; if the object is dyed, i.e., stained red, the R-channel color component values are larger and the R-channel gain is smaller.
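The staining check can be sketched as a simple threshold comparison; the threshold value below is a made-up stand-in for the in-vitro R-channel gain r1, not a value from the patent:

```python
# Hedged sketch of the staining check: an object counts as undyed when its
# R-channel gain exceeds a preset threshold (the R gain from an in-vitro
# white balance calibration). The threshold value here is hypothetical.

R1_THRESHOLD = 0.9   # hypothetical in-vitro R-channel gain

def is_undyed(gain_r, threshold=R1_THRESHOLD):
    # A red-stained object has a large R component, hence a small R gain;
    # an undyed white/gray object has a larger R gain.
    return gain_r > threshold

assume_undyed = is_undyed(1.05)   # e.g. white gauze under neutral light
assume_dyed = is_undyed(0.55)     # e.g. gauze stained red by blood
```

Only an object passing this check would then be used as the target object, and only its gains as the target white balance parameters.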
In some embodiments, in the case where the object is dyed, an undyed object is identified again from the image sequence captured by the endoscope as the target object. It can be understood that, with the method provided by the embodiments of the present application, whether an object is dyed can be determined according to its white balance parameter, and thereby whether it is used as the target object and whether its white balance parameter is used as the target white balance parameter; this eliminates the interference of color on the target object and improves the accuracy of the white balance correction.
Based on the technical solution provided by the embodiments of the present application, at least the following beneficial effects can be produced. A target image is acquired (an image in the image sequence captured by the endoscope), and a target object is identified in it (the target object is white or gray in color, for example a medical auxiliary tool such as gauze, a bandage, or a scalpel); the target white balance parameter is then determined according to the pixel information of the target object in the target image, and white balance correction is performed on the images in the image sequence captured by the endoscope according to the target white balance parameter. It can be understood that, compared with the prior-art method in which the user determines the white balance parameters by performing white balance correction in the external environment, in the embodiments of the present application the endoscope identifies a white or gray target object (such as gauze, a bandage, or a scalpel) in an image captured in the body, determines the white balance parameter according to the pixel information of the target object, and performs white balance correction. On the one hand, the whole correction process is carried out automatically without human intervention, reducing the user's workload; on the other hand, it avoids the problem of inaccurate white balance correction caused by the change of environment from outside the body to inside it (for example, the color cast inside body cavities means that white balance parameters determined by correction in the external environment are not applicable in the body), effectively improving the color accuracy of images captured by the endoscope in the body.
In some embodiments, as shown in fig. 8, the method for training the target object recognition model may be specifically implemented as the following steps:
s201, acquiring a plurality of sample images.
The plurality of sample images are images containing gauze captured by the endoscope in different scenes. Exemplarily, the plurality of sample images include, but are not limited to: images containing gauze captured by an endoscope during abdominal surgery, images containing gauze captured by an endoscope during ear-nose-throat surgery, and the like. The plurality of sample images may also include images of dyed gauze captured by an endoscope in the body, and images of undyed gauze captured by an endoscope in the body.
S202, calibrating each sample image in the plurality of sample images to obtain a plurality of calibrated sample images.
In some embodiments, calibrating each sample image includes: calibrating the position of the target object in each sample image. Optionally, this is done by calibrating the minimum bounding rectangle of the area where the target object is located in each sample image.
Illustratively, taking the target object as gauze, calibrating the gauze position in each sample image includes: calibrating the minimum bounding rectangle of the area where the gauze is located in each sample image. For example, a schematic diagram of a sample image with calibrated gauze positions may be shown in fig. 9.
S203, dividing the calibrated sample images into a training set and a testing set.
The training set is used for training the target object recognition model to be trained. The test set is used for testing the trained target object recognition model.
S204, training the target object recognition model to be trained by adopting the training set to obtain a target object recognition model after training.
In some embodiments, as shown in fig. 10, the step S204 may be implemented as the following steps:
Sc1, acquiring the target object recognition model to be trained and the training set.
Sc2, inputting the sample images of the training set into the target object recognition model to be trained to obtain the prediction result of the region where the target object is located in each sample image of the training set.
Sc3, determining the loss value of the training set according to the prediction result of the region where the target object is located in the sample images of the training set and the real result of the region where the target object is located in those sample images.
In some embodiments, determining the loss value of the training set according to the prediction result and the real result may be implemented as: substituting the prediction result and the real result into a loss function to calculate the loss value of the training set.
The loss function generally includes a positioning loss and a classification loss: the positioning loss is used for object positioning, while the classification loss is used for object classification.
Sc4, judging whether the target object recognition model has converged according to the loss value of the training set.
Sc5, if not, updating the weight parameters of the target object recognition model according to the loss value of the training set, and returning to step Sc2, i.e., inputting the sample images of the training set into the target object recognition model to be trained to obtain the prediction result of the region where the target object is located.
Sc6, if so, determining the current target object recognition model to be the trained target object recognition model.
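The loop of steps Sc1-Sc6 can be sketched generically as follows; the one-parameter model, squared-error loss, and gradient update are stand-ins for the patent's detection network and its positioning/classification losses:

```python
# Generic sketch of the Sc1-Sc6 loop: iterate until the training-set loss
# signals convergence, updating the model weights each round.

def train(model_weight, samples, lr=0.1, tol=1e-3, max_epochs=1000):
    for epoch in range(max_epochs):
        # Sc2/Sc3: predict and compute a squared-error loss vs. the truth.
        loss = sum((model_weight * x - y) ** 2 for x, y in samples) / len(samples)
        if loss < tol:                 # Sc4/Sc6: converged, training done
            return model_weight, loss
        # Sc5: update the weight from the loss gradient and try again.
        grad = sum(2 * (model_weight * x - y) * x for x, y in samples) / len(samples)
        model_weight -= lr * grad
    return model_weight, loss

# Toy data with ground truth y = 2x; the loop converges toward weight 2.
weight, final_loss = train(0.0, [(1.0, 2.0), (2.0, 4.0)])
```

A real implementation would replace the scalar weight with the network's parameters and the squared error with the combined positioning and classification losses, but the control flow is the same.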
S205, testing the trained target object recognition model by adopting a test set, and determining whether the trained target object recognition model is successfully trained.
In some embodiments, as shown in fig. 11, the step S205 may be implemented as the following steps:
Sd1, acquiring the test set and the target object recognition model trained with the training set.
Sd2, inputting the sample images of the test set into the target object recognition model trained with the training set to obtain the prediction result of the region where the target object is located in each sample image of the test set.
Sd3, determining the loss value of the test set according to the prediction result of the region where the target object is located in the sample images of the test set and the real result of the region where the target object is located in those sample images.
Sd4, judging whether the model has converged according to the loss value of the test set and a preset loss value threshold.
When the loss value of the test set is smaller than the preset loss value threshold, the model is judged to have converged.
The preset loss value threshold may be determined by means of experimental testing, simulation, expert experience, and the like.
Sd5, if not, determining that the target object recognition model trained with the training set has not been trained successfully and needs to be retrained with the training set.
Sd6, if so, determining that the target object recognition model trained with the training set has been trained successfully.
As shown in fig. 12, the embodiment of the present application provides a white balance correction apparatus for performing the white balance correction method shown in fig. 2. The white balance correction device 300 includes: an acquisition module 301, an identification module 302, a determination module 303 and a correction module 304.
An acquisition module 301, configured to acquire a target image; the target image is an image in a sequence of images taken by the endoscope.
An identifying module 302, configured to identify a target object in the target image; the target object is a medical aid and the target object itself is white or gray in color.
The determining module 303 is configured to determine a target white balance parameter according to pixel information of a target area where the target object is located.
The correction module 304 is configured to perform white balance correction on images in the image sequence captured by the endoscope according to the target white balance parameter.
In one possible implementation manner, the pixel information includes a gain of a color channel of the pixel, and the determining module 303 is specifically configured to determine the target white balance parameter according to the gain of the color channel of the pixel in the target area.
In another possible implementation, the pixel points in the target area include: a target pixel point; the target pixel point is any pixel point in the target area; the determining module 303 is further configured to determine the saturation of the target pixel according to the color component value of the color channel of the target pixel; determining a weight value of the target pixel point according to the saturation of the target pixel point; the weight value is used for reflecting the probability that the saturation of the target pixel point occurs in the target area; and determining the gain of the color channel of the pixel point in the target area according to the color component value of the color channel of the target pixel point and the weight value of the target pixel point.
In another possible implementation manner, the color channel of the pixel includes: an R channel and a B channel; the target white balance parameters include: a first gain and a second gain; the first gain is used for correcting the color component of the R channel of the image; the second gain is used for correcting the color component of the B channel of the image; the determining module 303 is specifically configured to take, as a first gain, a gain of an R channel of a pixel point in the target area; the gain of the B channel of the pixel point in the target area is taken as a second gain.
In another possible implementation manner, the correction module 304 is specifically configured to multiply the color component value of the R channel of each pixel point in the image to be corrected by the first gain and the color component value of the B channel by the second gain, thereby performing white balance correction on the image to be corrected; the image to be corrected is any image in the image sequence captured by the endoscope.
In another possible implementation manner, the identifying module 302 is specifically configured to identify an undyed object from the target image as the target object.
In another possible implementation manner, the identifying module 302 is specifically configured to perform an ISP operation on the target image to obtain a processed target image; and identifying the target object from the processed target image.
In the case where the functions of the integrated modules are implemented in the form of hardware, the embodiments of the present application provide another possible schematic structural diagram of the white balance correction device involved in the above embodiments. As shown in fig. 13, the white balance correction apparatus 400 includes: a processor 402, a communication interface 403, and a bus 404. Optionally, the white balance correction apparatus 400 may further include a memory 401.
The processor 402 may be a central processing unit, a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit, a field-programmable gate array or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof, and may implement or execute the various exemplary logic blocks, modules, and circuits described in connection with this disclosure. The processor 402 may also be a combination that implements computing functionality, for example, a combination of one or more microprocessors, or a combination of a DSP and a microprocessor.
A communication interface 403 for connecting with other devices via a communication network. The communication network may be an ethernet, a radio access network, a wireless local area network (wireless local area networks, WLAN), etc.
The memory 401 may be, but is not limited to, a read-only memory (ROM) or other type of static storage device that can store static information and instructions, a random access memory (random access memory, RAM) or other type of dynamic storage device that can store information and instructions, or an electrically erasable programmable read-only memory (electrically erasable programmable read-only memory, EEPROM), magnetic disk storage or other magnetic storage device, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer.
As a possible implementation, the memory 401 may exist separately from the processor 402 and be connected to the processor 402 via the bus 404 for storing instructions or program code. The white balance correction method provided in the embodiments of the present application can be implemented when the processor 402 calls and executes the instructions or program code stored in the memory 401.
In another possible implementation, the memory 401 may also be integrated with the processor 402.
The bus 404 may be an extended industry standard architecture (EISA) bus, or the like. The bus 404 may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one thick line is shown in fig. 13, but this does not mean that there is only one bus or only one type of bus.
From the foregoing description of the embodiments, it will be apparent to those skilled in the art that, for convenience and brevity of description, only the above division into functional modules is illustrated as an example; in practical applications, the above functions may be allocated to different functional modules as needed, that is, the internal structure of the white balance correction device may be divided into different functional modules to perform all or part of the functions described above.
Embodiments of the present application also provide a computer-readable storage medium. All or part of the flow in the above method embodiments may be completed by computer instructions instructing related hardware; the program may be stored in the computer-readable storage medium and, when executed, may include the flow of the above method embodiments. The computer-readable storage medium may be the memory in any of the foregoing embodiments. It may also be an external storage device of the white balance correction apparatus, for example, a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, or a flash card equipped on the white balance correction apparatus. Further, the computer-readable storage medium may include both an internal storage unit and an external storage device of the white balance correction apparatus. The computer-readable storage medium is used for storing the computer program and other programs and data required by the white balance correction apparatus, and may also be used to temporarily store data that has been output or is to be output.
Embodiments of the present application also provide a computer program product comprising a computer program which, when run on a computer, causes the computer to perform any one of the white balance correction methods provided in the above embodiments.
Although the present application has been described herein in connection with various embodiments, other variations to the disclosed embodiments can be understood and effected by those skilled in the art in practicing the claimed application, from a study of the figures, the disclosure, and the appended claims. In the claims, the word "comprising" does not exclude other elements or steps, and the indefinite article "a" or "an" does not exclude a plurality. A single processor or other unit may fulfill the functions of several items recited in the claims. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage.
Although the present application has been described in connection with specific features and embodiments thereof, it will be apparent that various modifications and combinations can be made without departing from the spirit and scope of the application. Accordingly, the specification and drawings are merely exemplary illustrations of the present application as defined in the appended claims and are considered to cover any and all modifications, variations, combinations, or equivalents that fall within the scope of the present application. It will be apparent to those skilled in the art that various modifications and variations can be made in the present application without departing from the spirit or scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims and the equivalents thereof, the present application is intended to cover such modifications and variations.
The foregoing is merely a specific embodiment of the present application, but the protection scope of the present application is not limited thereto, and any changes or substitutions within the technical scope of the present disclosure should be covered in the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A white balance correction method, characterized by being applied to an endoscope; the method comprises the following steps:
acquiring a target image; the target image is an image in an image sequence shot by the endoscope;
identifying a target object in the target image; the target object is a medical auxiliary tool, and the color of the target object is white or gray;
determining a target white balance parameter according to pixel information of a target area where the target object is located;
and carrying out white balance correction on images in the image sequence shot by the endoscope according to the target white balance parameter.
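The overall flow of claim 1 can be illustrated with the following sketch: estimate R and B gains from a white or gray reference region (the target area where the medical auxiliary tool is located), then correct the image. Using the G channel as the neutral reference is an assumption for illustration only; the claim itself does not fix how the gains are derived from the pixel information.

```python
import numpy as np

def white_balance_from_target_region(image, mask):
    """Illustrative sketch of claim 1, not the patented algorithm.

    image: H x W x 3 uint8 RGB image; mask: H x W boolean mask of the
    white/gray target region. Gains drive the R and B means of the
    region toward its G mean (a gray-world-style assumption)."""
    region = image[mask].astype(np.float64)   # N x 3 pixels of the target area
    mean_r, mean_g, mean_b = region.mean(axis=0)
    gain_r = mean_g / mean_r                  # first gain (R channel)
    gain_b = mean_g / mean_b                  # second gain (B channel)
    corrected = image.astype(np.float64).copy()
    corrected[..., 0] *= gain_r
    corrected[..., 2] *= gain_b
    return np.clip(corrected, 0, 255).astype(np.uint8), (gain_r, gain_b)
```

Applied to every image in the sequence, the same pair of gains neutralizes the color cast measured on the reference object.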
2. The method according to claim 1, wherein the pixel information includes a gain of a color channel of a pixel point, and the determining a target white balance parameter according to the pixel information of the target area where the target object is located includes:
and determining the target white balance parameter according to the gain of the color channel of the pixel point in the target area.
3. The method of claim 2, wherein the pixel points in the target area comprise: a target pixel point; the target pixel point is any pixel point in the target area; the method further comprises the steps of:
determining the saturation of the target pixel point according to the color component value of the color channel of the target pixel point;
determining a weight value of the target pixel point according to the saturation of the target pixel point; the weight value is used for reflecting the probability that the saturation of the target pixel point occurs in the target area;
and determining the gain of the color channel of the pixel point in the target area according to the color component value of the color channel of the target pixel point and the weight value of the target pixel point.
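The saturation-based weighting of claim 3 can be sketched as follows: each pixel's weight is taken to be the empirical frequency of its saturation within the target area (one plausible reading of "the probability that the saturation of the target pixel point occurs in the target area"), and the channel gains are weighted means. The HSV-style saturation formula and the histogram bin count are illustrative assumptions, not details from the claim.

```python
import numpy as np

def saturation_weighted_gains(region, bins=32):
    """Illustrative sketch of claim 3, not the exact patented formula.

    region: N x 3 array of RGB pixels from the target area. Returns
    (gain_r, gain_b), each a saturation-frequency-weighted mean."""
    region = region.astype(np.float64)
    mx = region.max(axis=1)
    mn = region.min(axis=1)
    sat = np.where(mx > 0, (mx - mn) / mx, 0.0)   # HSV-style saturation per pixel
    hist, edges = np.histogram(sat, bins=bins, range=(0.0, 1.0))
    idx = np.clip(np.digitize(sat, edges) - 1, 0, bins - 1)
    w = hist[idx] / hist.sum()                    # weight = how common that saturation is
    r, g, b = region[:, 0], region[:, 1], region[:, 2]
    gain_r = np.sum(w * g) / np.sum(w * r)        # weighted means drive R and B toward G
    gain_b = np.sum(w * g) / np.sum(w * b)
    return gain_r, gain_b
```

Weighting by saturation frequency down-weights outlier pixels (specular highlights, stained edges) whose saturation is atypical of the white or gray reference object.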
4. The method of claim 2, wherein the color channel of the pixel comprises: an R channel and a B channel; the target white balance parameters include: a first gain and a second gain; the first gain is used for correcting color components of an R channel of the image; the second gain is used for correcting the color component of the B channel of the image;
wherein determining the target white balance parameter according to the gain of the color channel of the pixel point in the target area includes:
taking the gain of the R channel of the pixel point in the target area as the first gain;
and taking the gain of the B channel of the pixel point in the target area as the second gain.
5. The method of claim 4, wherein performing white balance correction on images in the sequence of images captured by the endoscope according to the target white balance parameter comprises:
multiplying the color component value of the R channel of each pixel point in the image to be corrected by the first gain, and the color component value of the B channel by the second gain, so as to perform white balance correction on the image to be corrected; the image to be corrected is any one image in the image sequence shot by the endoscope.
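The correction step of claim 5 is a per-pixel multiplication; a minimal sketch follows (clipping to the 8-bit range is an added assumption the claim does not state):

```python
import numpy as np

def apply_white_balance(image, gain_r, gain_b):
    """Per claim 5: multiply every pixel's R component by the first gain
    and its B component by the second gain; G is left unchanged.
    Clipping to [0, 255] is an illustrative assumption."""
    out = image.astype(np.float64)
    out[..., 0] *= gain_r
    out[..., 2] *= gain_b
    return np.clip(out, 0, 255).astype(np.uint8)
```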
6. The method of claim 1, wherein the identifying the target object in the target image comprises:
and identifying an undyed object from the target image as the target object.
7. The method of claim 1, wherein the identifying the target object in the target image comprises:
performing an image signal processing (ISP) operation on the target image to obtain a processed target image;
and identifying a target object from the processed target image.
8. A white balance correction device, characterized by being applied to an endoscope, comprising:
the acquisition module is used for acquiring a target image; the target image is an image in an image sequence shot by the endoscope;
the identification module is used for identifying a target object in the target image; the target object is a medical auxiliary tool, and the color of the target object is white or gray;
the determining module is used for determining a target white balance parameter according to the pixel information of the target area where the target object is located;
and the correction module is used for carrying out white balance correction on images in the image sequence shot by the endoscope according to the target white balance parameter.
9. A white balance correction device, comprising:
one or more processors;
one or more memories;
wherein the one or more memories are configured to store computer program code comprising computer instructions that, when executed by the one or more processors, perform the white balance correction method of any of claims 1 to 7.
10. A computer-readable storage medium storing computer-executable instructions that, when executed on a computer, cause the computer to perform the white balance correction method of any one of claims 1 to 7.
CN202211040871.7A 2022-08-29 2022-08-29 White balance correction method, device and storage medium Pending CN117689591A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211040871.7A CN117689591A (en) 2022-08-29 2022-08-29 White balance correction method, device and storage medium


Publications (1)

Publication Number Publication Date
CN117689591A true CN117689591A (en) 2024-03-12

Family

ID=90135811

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211040871.7A Pending CN117689591A (en) 2022-08-29 2022-08-29 White balance correction method, device and storage medium

Country Status (1)

Country Link
CN (1) CN117689591A (en)

Similar Documents

Publication Publication Date Title
US11727560B2 (en) Wound imaging and analysis
CN110505459B (en) Image color correction method, device and storage medium suitable for endoscope
US9916666B2 (en) Image processing apparatus for identifying whether or not microstructure in set examination region is abnormal, image processing method, and computer-readable recording device
JP2004326805A (en) Method of detecting and correcting red-eye in digital image
CN103327883A (en) Medical image processing device and medical image processing method
WO2020198315A1 (en) Near-infrared fluorescence imaging for blood flow and perfusion visualization and related systems and computer program products
WO2011121900A1 (en) Image processing apparatus, image reading apparatus, image processing method and image processing program
CN108601509B (en) Image processing apparatus, image processing method, and program-recorded medium
CN110740676A (en) Endoscope system and method for operating endoscope system
WO2016006429A1 (en) Image processing device and method, program, and endoscope system
CN117314872A (en) Intelligent segmentation method and device for retina image
CN117575924A (en) Visible light and near infrared fluorescence image fusion method of unified model
CN117689591A (en) White balance correction method, device and storage medium
CN110910409A (en) Gray scale image processing method and device and computer readable storage medium
JP2021058361A (en) Biological information acquisition device and program
US20220222840A1 (en) Control device, image processing method, and storage medium
CN113038868A (en) Medical image processing system
US20210153721A1 (en) Endoscopic image processing apparatus, endoscopic image processing method, and recording medium recording program
US20230255443A1 (en) Apparatuses, systems, and methods for discounting an object while managing auto-exposure of image frames depicting the object
US20210012886A1 (en) Image processing apparatus, image processing method, and storage medium
US10307209B1 (en) Boundary localization of an internal organ of a subject for providing assistance during surgery
US10055858B2 (en) Colour contrast enhancement of images by non-linear colour mapping
Ferreira et al. Approach for the wound area measurement with mobile devices
WO2017117710A1 (en) Imaging system and method for endoscopy
US20230180999A1 (en) Learning apparatus, learning method, program, trained model, and endoscope system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination