CN114504293B - Endoscope exposure control method and endoscope - Google Patents


Info

Publication number: CN114504293B (application CN202210418273.2A, filed by Ankon Technologies Co Ltd)
Authority: CN (China)
Legal status: Active (granted)
Inventor: 杨戴天杙
Assignee (original and current): Ankon Technologies Co Ltd
Other versions: CN114504293A (application publication, Chinese)
Related application: PCT/CN2023/089554 (published as WO2023202673A1)
Prior art keywords: value, brightness, calibration, exposure, endoscope

Classifications

    • A61B1/00 — Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; illuminating arrangements therefor
        • A61B1/00163 — Optical arrangements
        • A61B1/04 — Combined with photographic or television appliances
            • A61B1/045 — Control thereof
            • A61B1/05 — Characterised by the image sensor, e.g. camera, being in the distal end portion
        • A61B1/06 — With illuminating arrangements
            • A61B1/0661 — Endoscope light sources
                • A61B1/0684 — Endoscope light sources using light emitting diodes [LED]

Abstract

The invention discloses an endoscope exposure control method and an endoscope. The method comprises: driving n light sources to illuminate with n preset exposure values respectively (n ≥ 2), acquiring a detection image, and dividing the detection image into m areas (m ≥ 2); analyzing the position state of the endoscope, obtaining the preset degree of illumination radiation of each light source on each area in the current position state, and correspondingly generating m calibration brightness equations, each containing calibration exposure values to be determined; approximating the calibration brightness equations to a preset target brightness value, and updating and storing the calibration exposure values; and driving the n light sources to illuminate correspondingly according to the calibration exposure values and acquiring a calibration image. By using the endoscope position information to convert the brightness values into equations whose variables are the light-source exposure values, the method realizes anisotropic exposure and avoids "negative-positive image" artifacts and unclear shots.

Description

Endoscope exposure control method and endoscope
Technical Field
The invention relates to the field of image processing, in particular to an endoscope exposure control method and an endoscope.
Background
In the general technical field, an endoscope is typically used to examine a patient's digestive system, and in order to capture clear pictures inside it for lesion analysis, the prior art usually arranges an imaging device and a matching lighting device at the end of the endoscope. After being placed in the patient's body, the endoscope moves along with the digestive system and is squeezed by the inner wall of the digestive tract, so the imaging and lighting devices inevitably abut against the inner wall. This causes a "negative-positive image" phenomenon of local underexposure and local overexposure in the captured picture and makes the pathological condition shown in the picture unclear. Meanwhile, because the light sources in the lighting device are divergent, a single light source affects the brightness data of the entire image. Avoiding uneven exposure, poor overall image resolution, and unclear lesion shots, while accounting for the divergence characteristic of the light sources, has therefore become an urgent technical problem.
Disclosure of Invention
The invention aims to provide an endoscope exposure control method, to solve the prior-art technical problems that the picture captured by an endoscope is unevenly exposed and the detected pathological condition is unclear.
An object of the present invention is to provide an endoscope.
In order to achieve one of the above objects, an embodiment of the present invention provides an endoscope exposure control method comprising: driving n light sources to illuminate with n preset exposure values respectively (n ≥ 2), acquiring a detection image, and dividing the detection image into m areas (m ≥ 2); analyzing the position state of the endoscope, obtaining the preset degree of illumination radiation of each light source on each area in the current position state, and correspondingly generating m calibration brightness equations, wherein each calibration brightness equation contains calibration exposure values to be determined; approximating the calibration brightness equations to a preset target brightness value, and updating and storing the calibration exposure values; and driving the n light sources to illuminate correspondingly according to the calibration exposure values and acquiring a calibration image.
As a further improvement of an embodiment of the present invention, the method specifically comprises: calculating m detection brightness values corresponding to the areas, and analyzing detection distance values of the endoscope from the detection brightness values, wherein a detection distance value is the distance between the endoscope and the actual position characterized by the detection image; obtaining the preset $n \times m$ brightness contribution values according to the detection distance values, and correspondingly generating the m calibration brightness equations, wherein a brightness contribution value characterizes the degree of illumination radiation of each light source on each area.
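A minimal sketch of computing the m detection brightness values from a captured frame (a `numpy` grayscale array and nine equal regions are assumptions made for brevity; a later embodiment enlarges the central region):

```python
import numpy as np

def region_brightness(frame, rows=3, cols=3):
    """Mean brightness of each region of a grayscale detection image,
    scanned row by row (m = rows * cols regions)."""
    h, w = frame.shape
    return np.array([
        frame[r * h // rows:(r + 1) * h // rows,
              c * w // cols:(c + 1) * w // cols].mean()
        for r in range(rows)
        for c in range(cols)
    ])

# A frame that is dark on the left and bright on the right: the left-column
# regions report lower detection brightness than the right-column ones.
frame = np.tile(np.linspace(0.0, 240.0, 90), (90, 1))
lumas = region_brightness(frame)
```

These m values are what the position-state analysis then converts into detection distance values.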
As a further improvement of an embodiment of the present invention, the preset exposure value is configured to vary linearly and satisfies $E_j = k_j \cdot e$, where $E_j$ is the preset exposure value of the j-th light source, $k_j$ is the linear correlation coefficient of the j-th light source, and $e$ is the exposure precision value. The method further comprises: driving one of the n light sources to illuminate with the exposure precision value as its preset exposure value while keeping the other (n−1) light sources off; acquiring L area calibration images at L preset distance values (L ≥ 2); dividing each of the L area calibration images into m areas and analyzing the brightness value of each area to obtain $L \times m$ brightness contribution values, which form one brightness contribution data group; iteratively driving each of the other (n−1) light sources to illuminate independently with the exposure precision value as its preset exposure value, obtaining n brightness contribution data groups in total; and forming a brightness contribution data table from the n brightness contribution data groups. The method then specifically comprises: analyzing the detection brightness values against the brightness contribution data table to obtain the m detection distance values.
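The table-building loop above can be sketched as follows. The `measure` callback and the inverse-square toy model stand in for the real single-source capture step and are assumptions, not part of the patent:

```python
import numpy as np

def build_contribution_table(measure, n_sources, distances, m_regions):
    """Brightness contribution data table: for each of the n light sources,
    driven alone at the exposure precision value, record the brightness it
    contributes to each of the m regions at each calibration distance.
    Result shape: (n_sources, len(distances), m_regions).

    `measure(j, d)` stands in for the capture step: it must return the m
    region brightness values observed with only source j lit at distance d.
    """
    table = np.empty((n_sources, len(distances), m_regions))
    for j in range(n_sources):            # iterate sources, others kept off
        for k, d in enumerate(distances):
            table[j, k] = measure(j, d)
    return table

# Toy stand-in: inverse-square falloff with a source/region coupling gain.
gains = np.abs(np.sin(np.arange(4)[:, None] + np.arange(9)[None, :])) + 0.1
toy_measure = lambda j, d: gains[j] / d**2

table = build_contribution_table(toy_measure, 4, [10.0, 20.0, 40.0], 9)
```

With L = 3 distances, each of the n = 4 slices of `table` is one brightness contribution data group of L × m values.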
As a further improvement of an embodiment of the present invention, the method further comprises: fitting each of the m regions according to the brightness contribution data table to obtain m brightness prediction functions, each brightness prediction function characterizing the relationship between the detection distance and the brightness value of its region and satisfying:

$f_i(d_i) = \sum_{j=1}^{n} E_j \cdot c_{i,j}(d_i), \quad 1 \le i \le m,\ 1 \le j \le n$

where $d_i$ is the distance between the endoscope and the actual position characterized by the i-th region, $f_i$ is the brightness prediction function corresponding to the i-th region, $E_j$ is the exposure value of the j-th light source in the detection state, and $c_{i,j}(d_i)$ is the brightness contribution value of the j-th light source to the i-th area at distance $d_i$. The method then specifically comprises: analyzing the detection brightness values with the inverse functions of the brightness prediction functions to obtain the m detection distance values.
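Since brightness falls monotonically as distance grows, the inverse of the brightness prediction function can be approximated by interpolating over the sampled calibration distances. A minimal sketch (the inverse-square numbers are illustrative assumptions):

```python
import numpy as np

def detection_distance(luma_i, exposures, distances, contrib_i):
    """Invert the brightness prediction function
        f_i(d) = sum_j E_j * c_ij(d)
    for one region: given the measured region brightness, recover d.

    contrib_i : (L, n) contribution values c_ij sampled at the L calibration
                distances (assumed to decrease monotonically with distance).
    """
    predicted = contrib_i @ exposures        # f_i at each sampled distance
    # np.interp needs ascending x; brightness falls as distance grows,
    # so interpolate on the reversed samples.
    return np.interp(luma_i, predicted[::-1], np.asarray(distances)[::-1])

distances = np.array([5.0, 10.0, 20.0, 40.0])
contrib_i = np.outer(1.0 / distances**2, np.array([1.0, 0.5, 0.25, 0.25]))
exposures = np.array([8.0, 8.0, 8.0, 8.0])
# Feeding back the brightness predicted at d = 10 should recover d = 10.
d_hat = detection_distance((contrib_i @ exposures)[1],
                           exposures, distances, contrib_i)
```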
As a further improvement of an embodiment of the present invention, the method specifically comprises: searching the brightness contribution data table for the $n \times m$ brightness contribution values stored at the distance value closest to each detection distance value, and correspondingly generating the m calibration brightness equations, wherein the calibration brightness equation is:

$Y_i = \sum_{j=1}^{n} E'_j \cdot c_{i,j}$

and $E'_j$ is the calibration exposure value to be determined.
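The m calibration brightness equations above amount to one linear system C·E′ = Y. A sketch of assembling the coefficient matrix by nearest-distance lookup in the brightness contribution data table (the array shapes are assumptions):

```python
import numpy as np

def calibration_system(table, distances, detected_distances):
    """Assemble the m calibration brightness equations
        Y_i = sum_j c_ij * E'_j
    as a single (m, n) coefficient matrix C.

    table              : (n, L, m) brightness contribution data table
    distances          : the L calibration distance values
    detected_distances : the m detection distance values, one per region
    """
    n, L, m = table.shape
    C = np.empty((m, n))
    for i, d in enumerate(detected_distances):
        # nearest stored calibration distance for this region
        k = int(np.argmin(np.abs(np.asarray(distances) - d)))
        C[i] = table[:, k, i]   # contribution of every source to region i
    return C

table = np.arange(4 * 3 * 9, dtype=float).reshape(4, 3, 9)
C = calibration_system(table, [10.0, 20.0, 40.0], [12.0] * 9)
```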
As a further improvement of an embodiment of the present invention, the calibration brightness equation is:

$Y_i = \sum_{j=1}^{n} k'_j \, e \cdot c_{i,j}$

where $k'_j$ is the calibration correlation coefficient, and the calibration exposure value to be determined and the exposure precision value satisfy $E'_j = k'_j \cdot e$. The method specifically comprises: approximating the calibration brightness equations to the preset target brightness value with an optimization method, updating and storing the calibration correlation coefficients; and calculating the calibration exposure values from the calibration correlation coefficients and the exposure precision value.
As a further development of an embodiment of the invention, the optimization method performs calibration according to the formula

$\min_{k'_1, \ldots, k'_n} \sum_{i=1}^{m} \Big( \sum_{j=1}^{n} k'_j \, e \cdot c_{i,j} - Y_0 \Big)^2$

where $Y_0$ is the target brightness value.
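Read as a least-squares objective, the approximation above can be solved directly; a sketch using `numpy.linalg.lstsq` (the text only names an "optimization method", so least squares is an assumed choice):

```python
import numpy as np

def solve_calibration_exposures(C, target_luma, precision):
    """Minimise sum_i (sum_j k'_j * e * c_ij - Y0)^2 over the calibration
    correlation coefficients k'_j, then return E'_j = k'_j * e.

    C : (m, n) contribution values c_ij at the current position state.
    """
    Y = np.full(C.shape[0], float(target_luma))
    k, *_ = np.linalg.lstsq(C * precision, Y, rcond=None)
    return k * precision          # calibration exposure values E'_j

# Two sources, two regions: each region is lit mainly by one source, so
# each source gets the exposure that drives its region to the target.
C = np.array([[2.0, 0.5],
              [0.5, 2.0]])
E = solve_calibration_exposures(C, target_luma=128.0, precision=1.0)
```

Because the exposure solved for each light source differs, the result is the anisotropic exposure the patent describes.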
As a further improvement of an embodiment of the present invention, the preset exposure value is configured to vary linearly and satisfies $E_j = k_j \cdot e$, where $E_j$ is the preset exposure value of the j-th light source, $k_j$ is the linear correlation coefficient of the j-th light source, and $e$ is the exposure precision value. The method further comprises: driving the n light sources to illuminate with the exposure precision value as their preset exposure values; acquiring L pixel calibration images at L preset distance values (L ≥ 2); randomly selecting at least one calibration pixel in each pixel calibration image; and analyzing the brightness value of each calibration pixel and fitting the relationship between that brightness value and the distance between the endoscope and the actual position characterized by the calibration pixel, to form a brightness prediction relational expression. The method specifically comprises: acquiring a reference image; driving the n light sources to illuminate with the n preset exposure values, acquiring the detection image corresponding to the reference image, and dividing the detection image into m areas; acquiring, in the reference image, a reference distance value between the endoscope and the actual position characterized by at least one reference pixel among the calibration pixels, and acquiring the standard brightness value of the standard pixel corresponding to the reference pixel in the detection image; calculating a brightness prediction value from the reference distance value and the brightness prediction relational expression; calibrating the standard brightness value against the brightness prediction value to obtain an output correction factor; and randomly selecting and analyzing the brightness value of at least one pixel in each region, calibrating it with the output correction factor, to obtain the m detection brightness values.
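The output-correction step can be sketched as follows. The multiplicative form of the factor and the toy fitted relation are assumptions, since the claim does not fix the exact formula:

```python
import numpy as np

def output_correction_factor(reference_distance, standard_luma, predict):
    """Output correction factor from one reference pixel: the ratio between
    the brightness the fitted relation predicts at the reference distance
    and the brightness actually measured (a multiplicative factor is
    assumed here)."""
    return predict(reference_distance) / standard_luma

def corrected_region_lumas(raw_lumas, factor):
    """Calibrate the sampled per-region brightness values with the factor."""
    return np.asarray(raw_lumas) * factor

# Toy fitted brightness prediction relation (assumption): luma ~ 5000 / d^2.
predict = lambda d: 5000.0 / d**2
factor = output_correction_factor(10.0, standard_luma=40.0, predict=predict)
lumas = corrected_region_lumas([40.0, 80.0, 120.0], factor)
```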
As a further improvement of an embodiment of the present invention, the number m of the regions is configured to be 9, and the area of the region located in the center of the detection image is larger than the areas of other regions in the detection image; the number n of the light sources is configured to be 4; the preset exposure value and the calibration exposure value are configured as a product of an exposure time and an exposure gain of the endoscope; the method further comprises the following steps: and adjusting at least one of the exposure time and the exposure gain, driving the n light sources to respectively illuminate correspondingly according to the calibration exposure value, and acquiring a calibration image.
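A sketch of the m = 9 division with an enlarged central region; the 50 % centre proportion is an assumption (the claim only requires the central region to be larger than the others):

```python
def nine_region_bounds(h, w, center_frac=0.5):
    """Split an h x w frame into 9 regions whose central region is larger
    than the others: the middle band takes `center_frac` of each dimension.
    Returns (y0, y1, x0, x1) bounds, scanned row by row."""
    edge_h = int(h * (1 - center_frac) / 2)
    edge_w = int(w * (1 - center_frac) / 2)
    ys = [0, edge_h, h - edge_h, h]
    xs = [0, edge_w, w - edge_w, w]
    return [(ys[r], ys[r + 1], xs[c], xs[c + 1])
            for r in range(3) for c in range(3)]

bounds = nine_region_bounds(100, 100)
areas = [(y1 - y0) * (x1 - x0) for y0, y1, x0, x1 in bounds]
```

Index 4 of the scan order is the central region, which by construction has the largest area.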
In order to achieve one of the above objects, an aspect of the present invention provides an endoscope that carries out exposure control by the endoscope exposure control method according to any one of the above aspects.
Compared with the prior art, the endoscope exposure control method provided by the invention divides the detection image acquired in a single shooting step into a plurality of areas, generates calibration brightness equations from the endoscope's position state and the degrees of illumination radiation of the plurality of light sources on those areas, and approximates the equations to the target brightness value to obtain the calibration exposure values to be applied to the endoscope. By converting the brightness values in the detection picture into equations whose variables are light-source exposure values, exposure values meeting the target brightness value can be calculated. Since the exposure value calculated for each light source differs, an anisotropic exposure control method is realized which, by accounting for the different illumination radiation intensities of different light sources on different areas, prevents the "negative-positive image" phenomenon, unclear lesion shots, and similar problems.
Drawings
Fig. 1 is a schematic configuration diagram of an endoscope in an embodiment of the present invention.
FIG. 2 is a schematic diagram showing the structure of an endoscope in one embodiment of the present invention.
Fig. 3 is a schematic diagram of trigger level waveforms of some components in the endoscope according to an embodiment of the present invention.
Fig. 4 is a schematic step diagram of an endoscope exposure control method according to an embodiment of the present invention.
Fig. 5 is a schematic configuration diagram of a detection image generated when the endoscope exposure control method is executed in one embodiment of the present invention.
Fig. 6 is a schematic step diagram of a first example of an endoscope exposure control method according to an embodiment of the present invention.
Fig. 7 is a schematic view of a part of the steps of an endoscope exposure control method according to another embodiment of the present invention.
Fig. 8 is a schematic configuration diagram of an endoscope when executing the endoscope exposure control method according to another embodiment of the present invention.
Fig. 9 is a schematic view of a part of the steps of a first example of an endoscope exposure control method according to another embodiment of the present invention.
Fig. 10 is another partial step diagram of the second example of the endoscope exposure control method according to another embodiment of the present invention.
Fig. 11 is a schematic view of a part of the steps of a third example of the endoscope exposure control method according to another embodiment of the present invention.
Fig. 12 is a partial step diagram of a specific instance of the third example of the endoscope exposure control method according to another embodiment of the present invention.
Fig. 13 is a schematic view of a part of the steps of an endoscope exposure control method according to still another embodiment of the present invention.
Fig. 14 is a schematic view of another part of the steps of the exposure control method of the endoscope in the further embodiment of the present invention.
Fig. 15 is a schematic step diagram of an endoscope exposure control method according to still another embodiment of the present invention.
Detailed Description
The present invention will be described in detail below with reference to specific embodiments shown in the drawings. These embodiments are not intended to limit the present invention, and structural, methodological, or functional changes made by those skilled in the art according to these embodiments are included in the scope of the present invention.
It is to be noted that the terms "comprises," "comprising," and any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Furthermore, the terms "first," "second," "third," "fourth," "fifth," "sixth," "seventh," "eighth," "ninth," and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
Apparatuses for detecting a condition of a lesion in a human body and outputting detection data such as image data are generally configured as endoscopes, and particularly, as capsule endoscope devices having a special shape and easily swallowed by a patient. The digestive system in the human body shields the external environment light, and a light source for providing illumination is required to be arranged in the endoscope to acquire a clearly visible detection image, wherein the light source is preferably 4 to 6 LED lamps to provide sufficient illumination for an image sensor. However, if the plurality of light sources are arranged to have the same light intensity, a phenomenon such as "a negative-positive image" may occur, and particularly, when the endoscope moves in a narrow intestinal tract or approaches any inner wall of the digestive system, a phenomenon of uneven exposure of the detection image may become more significant. Therefore, in order to solve the above problem, it is necessary to configure different light sources to be driven at different degrees to form an anisotropic exposure effect and to perform dynamic adaptive adjustment according to the current detected image or state situation of the endoscope.
An endoscope 10 according to an embodiment of the present invention performs exposure control by an endoscope exposure control method, and includes a housing cover 11, a housing body 12, an imaging portion 111, and at least two light sources 112, as shown in fig. 1. The housing cover 11 is preferably made of a light-transmitting material so that the imaging portion 111 can fully transmit and receive detection signals to form a detection image, and so that light emitted from the light sources 112 can pass through it and provide illumination for those detection signals; specifically, the housing cover 11 may be made of a smooth transparent material. The housing cover 11 is disposed on at least one end of the housing body 12; assembled together they form at least one cavity, and the imaging portion 111 and the light sources 112 are disposed on the side of the cavity close to the housing cover 11. Preferably, a housing cover 11 configured as a hemispherical shell is provided at each end of the housing body 12, and the housing body 12 is configured as a cylinder, so that the endoscope has a capsule-shaped appearance that is easy to swallow. The detection signals and the light are preferably directed at least substantially parallel to each other and emitted outwards away from the housing body 12, so that each light beam has a definite correspondence to a portion of the detection image.
A main control part 13 may be further disposed in the cavity, and the main control part 13 may be respectively connected to the image pickup part 111 and the light source 112 and configured to perform the endoscope exposure control method, so that when the endoscope approaches the inner wall 100 of the digestive tract, exposure degrees of different light sources 112 are adaptively adjusted. For example, the exposure of the light sources 112 can be decreased on the side near the inner wall 100 of the alimentary tract and/or the exposure of the light sources 112 can be increased on the side away from the inner wall 100 of the alimentary tract.
As shown in fig. 1 and 2, the endoscope may further include a main control module 131 and other components in addition to the imaging portion 111 and the light source 112. The components of the main control module 131 used for executing the endoscope exposure control method may be regarded as belonging to the main control portion 13, or may be provided independently of it. In such an embodiment, the main control portion 13 may implement functions other than exposure control, such as image data analysis and quantization parameter calculation.
The main control module 131 includes an image triggering module 132 and an illumination triggering module 133; when the endoscope exposure control method is executed, the main control module 131 can selectively trigger the imaging portion 111 and the light sources 112 through the image triggering module 132 and the illumination triggering module 133, respectively. Preferably, as shown in fig. 1, fig. 2 and fig. 3, in one embodiment the light source 112 specifically includes a first light source 1121, a second light source 1122, a third light source 1123, and a fourth light source 1124; the triggering duration of the imaging portion 111 may be configured to be longer than that of any one of the light sources 112, and its triggering start time may be configured to precede that of any one of the light sources 112, so that a more complete detection image is acquired in most cases.
In addition, the exposure degree of a light source 112 can be set by adjusting parameters such as exposure time and exposure gain. In this embodiment, the exposure adjustment is preferably achieved by lengthening or shortening the exposure time: for example, the second and fourth light sources 1122 and 1124 near the inner wall 100 of the digestive tract may be configured with a shorter exposure time, and the first and third light sources 1121 and 1123 far from the inner wall 100 with a longer exposure time. In some embodiments, the illumination intensity of a light source 112 can additionally be influenced through the exposure gain parameter value, specifically by adjusting the current through the light source 112, to achieve the same exposure-adjustment effect.
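The trade-off between exposure time and exposure gain can be sketched as follows; the time-first heuristic and the parameter names are assumptions, not prescribed by this embodiment:

```python
def split_exposure(exposure_value, max_time, min_gain=1.0):
    """Split a calibration exposure value into exposure time and exposure
    gain (their product), preferring a longer time and raising gain only
    once the time ceiling is hit -- a common heuristic assumed here."""
    time = min(exposure_value / min_gain, max_time)
    gain = exposure_value / time
    return time, gain

t1, g1 = split_exposure(8.0, max_time=10.0)   # fits within the time budget
t2, g2 = split_exposure(25.0, max_time=10.0)  # needs extra gain
```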
Continuing with FIG. 2, the endoscope may further include an image processing module 134, a data processing module 135, a storage module 136, and a communication module 137. The image processing module 134 is configured to receive a detection image and perform area division on the detection image; the data processing module 135 is configured to call the illumination radiation degree to generate a calibration brightness equation correspondingly, approximate the calibration brightness equation to a preset target brightness value, and update and store the calibration exposure value; the storage module 136 is configured to store a preset illumination radiation degree for the data processing module 135 to call, and cache intermediate data in the calculation process; the communication module 137 is used for sending data generated in the exposure control process and related detection images and calibration images to an external device for monitoring and receiving instructions.
The endoscope in one embodiment may further comprise a position state analysis module 138 for analyzing the position state of the endoscope in the human body, assisting the data processing module in retrieving the data related to the illumination radiation degree, and assisting the generation of the calibration brightness equations. The position state analysis module 138 may be an independently configured distance sensor or similar device that simply sends one or more sets of distance detection signals to determine which light source 112 is closer to the inner wall 100 of the digestive tract; alternatively, it may be connected to the image processing module 134 so as to obtain the current endoscope position state information directly by analyzing the detection image. Compared with the former arrangement, the latter has a simpler structure, and its position state analysis is not affected by foreign matter, mucus, sensor shielding, or other interference inside the digestive system. Meanwhile, since the detection image must in any case be received for exposure-adjustment monitoring, the position state analysis and the exposure adjustment can be carried out sequentially or simultaneously within one process, reducing the amount of data to acquire without affecting the output data volume or accuracy, and thus increasing the processing speed.
An embodiment of the present invention provides an endoscope exposure control method, which may be mounted on the endoscope provided in any of the above-described embodiments, or may be mounted on another apparatus for controlling exposure of the endoscope. As shown in fig. 4, the endoscopic exposure control method may include the following steps.
And step 31, driving n light sources to respectively illuminate with n preset exposure values, acquiring a detection image, and dividing the detection image into m areas.
And step 32, analyzing the position state of the endoscope, acquiring the illumination radiation degree of each area of each light source preset in the current position state, and correspondingly generating m calibration brightness equations.
And step 33, approximating the calibration brightness equation to a preset target brightness value, and updating and storing the calibration exposure value.
And step 34, driving the n light sources to respectively illuminate correspondingly according to the calibration exposure values, and acquiring calibration images.
Wherein n is more than or equal to 2, m is more than or equal to 2, and the calibration brightness equation comprises a calibration exposure value to be determined.
After the light sources are driven to illuminate, the imaging part of the endoscope obtains a detection image under the current illumination condition. Since the light beams of the light sources are emitted outwards along the signal transmission direction of the imaging device, the exposure degree of each light source affects the brightness of different areas in the detection image and thus its imaging quality. In a default state, the n light sources may illuminate with n identical preset exposure values or with n at least partially different preset exposure values; after the detection image is divided into m regions, the brightness of each region is affected by the illumination radiation degree of each single light source and, correspondingly, by the combined illumination radiation of all the light sources.
For example, as shown in fig. 5, a detection image 1110 is generated at a certain time during the execution of the endoscopic exposure control method, and in one embodiment, the number m of the regions may be configured to be 9, so as to generate a first region Q1, a second region Q2, a third region Q3, a fourth region Q4, a fifth region Q5, a sixth region Q6, a seventh region Q7, an eighth region Q8, and a ninth region Q9. At this time, if the number n of light sources is arranged to be even and is symmetrically disposed about the symmetry axis extending longitudinally on the detection image 1110, each light source affects the brightness of all the regions, and the region on the left side, for example, the fourth region Q4, is more greatly affected by the light source also on the left side, and the region on the right side, for example, the sixth region Q6, is more greatly affected by the light source on the right side. If the number n of light sources is configured to be 4 or more and has a more specific arrangement, for example, at the boundary (intersection) of the first region Q1 and the centrally located fifth region Q5, the boundary of the third region Q3 and the fifth region Q5, the boundary of the seventh region Q7 and the fifth region Q5, and the boundary of the ninth region Q9 and the fifth region Q5, respectively, there will be a more complicated case of the influence of the illumination radiation.
On the one hand, in order to achieve better imaging effect, reduce sensitivity to brightness adjustment, and prevent unnecessary repetitive adjustment, the area of the fifth region Q5 located at the center may be configured to be larger than the areas of the other regions in the detection image 1110. Of course, in the embodiment where the brightness adjustment effect is mainly considered, or the acceptable error range allows the above problem to exist, the equal division may be performed for the above regions as well.
On the other hand, a complex distribution of illumination radiation degrades the imaging quality of the final detection image, and this is mainly attributable to the motion state of the endoscope: for a single light source, its distance to an obstacle, such as the inner wall of the digestive system, produces different illumination radiation degrees in different areas, and the illumination radiation degree in turn affects the brightness and exposure degree of the corresponding area of the detection image. Therefore, the state of the endoscope needs to be analyzed in advance, the distances between the endoscope and the actual positions corresponding to the different areas in the current state need to be obtained, and the light sources need to be driven and controlled to adjust to appropriate illumination radiation degrees.
Seen from the light source side, the illumination radiation degree represents the influence of illumination radiation on each area under a given exposure value setting, and the illumination radiation of all the light sources on a single area presents as one brightness; therefore, a calibration brightness equation corresponding to each area can be generated from the illumination radiation degrees of the different light sources. There are m calibration brightness equations, one per area. In each equation, the dependent variable is the to-be-adjusted detected brightness of a single area, the known parameters are the parameters or relational expressions characterizing the illumination radiation degree, and the independent variables are the calibration exposure values that can be supplied to the different light sources. By adjusting the independent variables, the dependent variable can be brought infinitely close to a target brightness value, so that the brightness of the corresponding area is kept within a proper range; one or more calibration exposure values are obtained by solving, and by controlling the light sources to illuminate with the calibration exposure values in place of the preset exposure values, the brightness of each area in the calibration image meets expectations.
The preset target brightness value may be a determined value, or a target brightness range generated by floating a preset allowable error value above and below that determined value; accordingly, the brightness of each region in the final calibration image may also be configured to be uniform. The position state of the endoscope may be obtained as a detection distance value by analyzing the current exposure condition and the brightness condition of the detection image, or as a detection distance value obtained by a distance sensor through detection-signal transmission, echo-signal reception, and echo-signal analysis. In the latter embodiment, multiple sets of distance sensors may be provided at the different light sources, so that the endoscope can dynamically adjust the exposure values of the light sources following its position in the digestive system.
An embodiment of the present invention provides a first example of an endoscope exposure control method, which may specifically include the following steps, as shown in fig. 6.
And step 31, driving n light sources to respectively illuminate with n preset exposure values, acquiring a detection image, and dividing the detection image into m areas.
Step 321, calculating m detection brightness values corresponding to each region, and analyzing the detection distance value of the endoscope according to the detection brightness values.
Step 322, obtaining m×n brightness contribution values according to the detected distance values, and generating m calibration brightness equations correspondingly.
And step 33, approximating the calibration brightness equation to a preset target brightness value, and updating and storing the calibration exposure value.
And step 34, driving the n light sources to respectively illuminate correspondingly according to the calibration exposure values, and acquiring calibration images.
The detection distance value is the distance between the actual position represented by the detection image and the endoscope, and the brightness contribution value represents the illumination radiation degree of each light source to each area.
In order to avoid problems such as inaccurate measurement of the detected distance value due to process errors of a distance sensor, and increased computational load due to input from multiple distance sensors, it is preferable to realize the position state analysis of the endoscope by analyzing the detection image.
Each area in the detection image correspondingly has a detection brightness value, and under the condition that the current exposure condition of the endoscope light source is known, the detection brightness value of the corresponding area is only influenced by the distance between the actual position represented by the detection brightness value and the endoscope, namely, only by different detection distance values corresponding to each area. At this time, since the detection brightness value is known and the current exposure condition is known (which may be the preset exposure value), the detection distance values of the endoscope may be obtained by solving, and the number of the detection distance values is preferably equal to the number of the regions.
And because the detection distance value determines the illumination radiation degree of the different light sources to the different areas, the illumination radiation degree can be quantized according to the detection distance value by table look-up or by inverse transformation of a fitting function, thereby obtaining the brightness contribution value of each single light source to each single area. Since the number of regions is m and the number of light sources is n, the number of brightness contribution values is m×n. Thus, in the m calibration brightness equations corresponding to the m regions, the dependent variable is the to-be-adjusted detected brightness of a single region, the known parameters are the n brightness contribution values corresponding to that region, and the independent variables are the n calibration exposure values to be determined for the n light sources.
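To make the structure of these equations concrete, here is a minimal sketch (function and variable names are illustrative, not from the patent): each region's brightness is modeled as the sum, over the n light sources, of that source's brightness contribution to the region multiplied by its exposure value.

```python
def region_brightness(contrib_row, exposures):
    """Evaluate one calibration brightness equation.

    contrib_row: the n brightness contribution values of one region, i.e.
                 the per-unit-exposure contribution of each light source
                 at the region's detected distance.
    exposures:   the n calibration exposure values (the independent
                 variables to be solved for).
    """
    return sum(c * e for c, e in zip(contrib_row, exposures))

# two light sources contributing 2.0 and 0.5 per unit exposure
b = region_brightness([2.0, 0.5], [30.0, 40.0])  # 2*30 + 0.5*40 = 80
```

Solving the m such equations for the n exposures that drive every region toward the target brightness is then an ordinary system-solving problem.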
Another embodiment of the present invention provides an endoscope exposure control method, which is shown in fig. 6 and 7 and specifically includes the following steps.
And step 21, driving one of the n light sources to illuminate by taking the exposure precision value as a preset exposure value, and keeping the other (n-1) light sources in an off state.
And step 22, acquiring L area calibration images under the preset L distance values.
Step 23, dividing the L area calibration images into m areas, and analyzing the brightness values of the areas to obtain m×L brightness contribution values, which form a brightness contribution data set.
And 24, iteratively driving other (n-1) light sources to independently illuminate by taking the exposure precision value as a preset exposure value, and acquiring n groups of brightness contribution data sets.
And 25, forming a brightness contribution data table according to the n groups of brightness contribution data groups.
And step 31, driving n light sources to respectively illuminate with n preset exposure values, acquiring a detection image, and dividing the detection image into m areas.
Step 321, calculating m detection brightness values corresponding to each region, and analyzing the detection distance value of the endoscope according to the detection brightness values.
Step 322, obtaining m×n brightness contribution values according to the detected distance values, and generating m calibration brightness equations correspondingly.
And step 33, approximating the calibration brightness equation to a preset target brightness value, and updating and storing the calibration exposure value.
And step 34, driving the n light sources to respectively illuminate correspondingly according to the calibration exposure values, and acquiring calibration images.
Wherein the preset exposure value is configured to have a linear variation and satisfies: E_j = k_j · ΔE; where E_j is the preset exposure value of the jth light source, k_j is the line correlation coefficient of the jth light source, and ΔE is the exposure precision value. Thus, adjusting the line correlation coefficient k_j forms multiple gears of the light source, and adjusting the magnitude of the exposure precision value ΔE controls the global adjustment range and the adjustment precision. Meanwhile, the exposure precision value is used as the initial exposure value of the calibration and detection steps, so that the exposure degree increases step-wise during adjustment. Wherein L ≥ 2. Step 321 is specifically configured to: analyzing m detection distance values according to the detection brightness values and the brightness contribution data table.
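As a small illustration of the linear relation just described (the function and variable names are ours, not the patent's): each light source's preset exposure value is its line correlation coefficient times the shared exposure precision value, so exposure moves in uniform steps.

```python
def preset_exposure(k_j, delta_e):
    """Preset exposure value of the jth light source: line correlation
    coefficient k_j times the exposure precision value delta_e."""
    return k_j * delta_e

# with an exposure precision value of 4, coefficients 1..5 give the
# step-like "gears" 4, 8, 12, 16, 20
gears = [preset_exposure(k, 4) for k in range(1, 6)]
```

Increasing delta_e widens the global range at the cost of coarser steps; increasing the range of k_j adds gears without changing step size.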
Steps 21 to 25 disclose a calibration process of the endoscope before use, which can be performed in a state that the selected detection image in the human body is uniformly exposed, or can be performed after external pre-calibration. Of course, in some embodiments, the former may be defined as being included in the course of the endoscope being used, but the implementation of the present invention is not affected by any of the above-described definitions.
For embodiments of in vitro calibration, as shown in fig. 8, it is preferred to use a diffuse scattering plate 14 as the reference; the diffuse scattering plate 14 is further preferably made of pink latex or another analogous material with a reflectance approximating that of the digestive tract mucosa surface. During the test, the endoscope 10 is preferably set so that its longitudinal extension direction is kept perpendicular to the diffuse scattering plate 14 during calibration, or so that the average detection signal of the imaging section 111 and the average light flux of the light source 112 are kept perpendicular to the diffuse scattering plate 14. Considering that the light sources are driven one by one and illuminate with the exposure precision value ΔE, each affecting the brightness of the respective areas, for the jth light source at a certain detection distance value Z_l, the average brightness of a region a_i can be defined as ΔB_ij(Z_l); this average brightness ΔB_ij(Z_l) can be used to characterize the brightness contribution value of the jth light source to the region a_i at the detection distance value Z_l.
It should be noted that the detection distance value Z according to the present invention is preferably defined as the distance between the casing of the endoscope 10 and the diffuse scattering plate 14 or other tissues in the digestive system; naturally, since the detection image is formed on the imaging unit 111 side, the detection distance value Z generally differs from the actual imaging distance value Z' by the distance between the casing provided outside the imaging unit 111 and the imaging unit 111 itself. Therefore, in an embodiment that obtains the detection distance value Z by analyzing the detection image, the method may further include: analyzing the detection brightness values to obtain an imaging distance value Z', and calculating the detection distance value Z from the imaging distance value Z' and the distance difference between the imaging part and the endoscope casing.
The dividing method of the area calibration images in step 23 is preferably consistent with that in step 31, so that the brightness value of a given area can be adjusted optimally. The brightness contribution data set may be pre-stored in the form of a table or a data matrix, or in the form of a brightness variation curve generated by fitting for a single light source. In the former embodiment, each brightness contribution data set corresponds to a single light source, i.e. j is constant in the average brightness expression ΔB_ij(Z_l), and the detection distance value Z is varied according to a preset step length for a total of L values, each pointing to one area in an area calibration image. The brightness contribution data set can therefore be a table or data matrix with m rows and L columns, whose vertical header is the designated area a_i (i = 1 … m) and whose horizontal header is the varying distance Z_l (l = 1 … L). For the first area a_1 at the first distance Z_1, the brightness contribution formed by illumination of the first light source alone can be expressed as ΔB_11(Z_1); by extension, for any ith area a_i at the lth distance Z_l, the brightness contribution value formed by illumination of the jth light source alone can be expressed as ΔB_ij(Z_l). In a preferred embodiment, the step length of the varying detection distance value Z can be selected from 2 to 10 mm, and the overall range can be controlled within 0 to 30 mm.
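The m-row, L-column layout described above can be sketched as follows (a hypothetical helper, assuming the region splitting has already reduced each area calibration image to its m region mean brightnesses):

```python
def build_contribution_table(region_means_per_distance, m):
    """Build one light source's brightness contribution data set.

    region_means_per_distance: list of L entries, one per calibration
        distance Z_1..Z_L; each entry holds the m region mean-brightness
        values of the area calibration image taken at that distance.
    Returns an m x L matrix: row i = region a_i, column l = distance Z_l.
    """
    L = len(region_means_per_distance)
    table = [[0.0] * L for _ in range(m)]
    for l, regions in enumerate(region_means_per_distance):
        for i, brightness in enumerate(regions):
            table[i][l] = float(brightness)
    return table

# toy run: m = 3 regions, L = 2 calibration distances
table = build_contribution_table([[9.0, 6.0, 3.0], [4.0, 3.0, 2.0]], m=3)
# table[i][l] is region i's contribution at calibration distance l
```

One such table is built per light source, and together the n tables form the brightness contribution data table.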
After further iterating and performing the operations of steps 21 to 23 on the other (n-1) light sources, n groups of brightness contribution data sets having the above form are obtained in total, together forming the brightness contribution data table. According to step 321, the detected brightness value of each region is known, while the detection distance corresponding to each region is unknown but is uniformly fixed for all light sources. For example, the ith region a_i has a known detected brightness value B_i at an unknown distance Z; the detected brightness value B_i equals the cumulative sum of the brightness contribution values ΔB_ij(Z) corresponding to all n light sources, and these brightness contribution values are recorded in the brightness contribution data sets of the n light sources, specifically at the known ith row of the vertical header and the unknown but uniform column of the horizontal header. Further, there are n such brightness contribution values, i.e. the detected brightness value B_i satisfies:

B_i = ΔB_i1(Z) + ΔB_i2(Z) + … + ΔB_in(Z)

In this formula, the detected brightness value B_i is known, and the ith region has a series of brightness contribution data varying with the detection distance value; naturally, the detection distance value can be obtained by solving, either by table look-up and accumulation or by fitting an inverse function. Once the detection distance values corresponding to the different regions are determined, the brightness contribution value ΔB_ij of each light source is also determined, so that the brightness contribution values can be used to fit the calibration brightness equation.
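The table look-up accumulation just described can be sketched like this (hypothetical names; the idea is simply to pick the calibration distance whose accumulated contribution best matches the detected brightness):

```python
def lookup_distance_index(detected_brightness, tables, i):
    """Table look-up solving of region i's detection distance.

    tables: one m x L brightness contribution data set per light source
            (n sets), all sharing the same L calibration distances.
    Returns the column index l for which the accumulated contribution
    sum over light sources of tables[j][i][l] is closest to the
    detected brightness value of region i.
    """
    L = len(tables[0][i])
    sums = [sum(t[i][l] for t in tables) for l in range(L)]
    return min(range(L), key=lambda l: abs(sums[l] - detected_brightness))

# n = 2 light sources, m = 2 regions, L = 3 calibration distances
t1 = [[8.0, 4.0, 2.0], [6.0, 3.0, 1.5]]
t2 = [[6.0, 3.0, 1.5], [8.0, 4.0, 2.0]]
best_l = lookup_distance_index(7.2, [t1, t2], i=0)  # sums are 14, 7, 3.5
```

Because the same (unknown) distance column applies to all light sources of a region, one search per region suffices.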
It should be noted that, although the preset exposure value in step 31 is assumed as the exposure precision value in the above-mentioned technical solution, it can be understood that, in the embodiment having a linear relationship between the preset exposure value and the exposure precision value, even if the preset exposure value does not adopt the exposure precision value in the detection process, the brightness contribution data corresponding to the actual preset exposure value in the detection process can be found and calculated based on the same linear relationship.
Of course, for the calculation of the detection distance value, in addition to the above technical scheme, the distance value may be obtained simply by using a VCSEL (Vertical-Cavity Surface-Emitting Laser) or ToF (Time of Flight) ranging chip. It is also possible to: arrange a calibration point on the shell cover 11; fit a functional relation between the angle of the calibration point relative to the optical axis of the imaging part and the distance, acquired during calibration, between the pixel corresponding to the calibration point and the image center; fit a functional relation between the pixel brightness and the depth distance; and then, during measurement, calculate the angle through the angle-distance function and the depth through the brightness-depth formula to obtain the detection distance value. Of course, the calculation of the detection distance value can also be realized by using only the brightness-depth formula combined with the two-dimensional coordinate relationship of the pixels.
Another embodiment of the present invention provides a first example of an endoscope exposure control method, which may specifically include the following steps, as shown in fig. 6 and 9.
And step 21, driving one of the n light sources to illuminate by taking the exposure precision value as a preset exposure value, and keeping the other (n-1) light sources in an off state.
And step 22, acquiring L area calibration images under the preset L distance values.
Step 23, dividing the L area calibration images into m areas, and analyzing the brightness values of the areas to obtain m×L brightness contribution values, which form a brightness contribution data set.
And 24, iteratively driving other (n-1) light sources to independently illuminate by taking the exposure precision value as a preset exposure value, and acquiring n groups of brightness contribution data sets.
And 25, forming a brightness contribution data table according to the n groups of brightness contribution data groups.
And step 26, respectively fitting the m regions according to the brightness contribution data table to obtain m brightness prediction functions.
And 31, driving n light sources to respectively illuminate with n preset exposure values, acquiring a detection image, and dividing the detection image into m areas.
Step 321, calculating m detection brightness values corresponding to each region, and analyzing the detection distance value of the endoscope according to the detection brightness values.
Step 322, obtaining m×n brightness contribution values according to the detected distance values, and generating m calibration brightness equations correspondingly.
And step 33, approximating the calibration brightness equation to a preset target brightness value, and updating and storing the calibration exposure value.
And step 34, driving the n light sources to respectively illuminate according to the calibration exposure values, and acquiring calibration images.
The brightness prediction function is used for representing the relation between the detection distance and the brightness value of each region, and satisfies:

B_i = f_i(Z_i) = Σ_j (E'_j / ΔE) · ΔB_ij(Z_i), j = 1 … n

where i = 1 … m; Z_i is the distance between the actual position characterized by the ith region and the endoscope; f_i is the brightness prediction function corresponding to the ith region; E'_j is the exposure value of the jth light source in the detection state; and ΔB_ij(Z_i) is the brightness contribution value of the jth light source to the ith region at the distance Z_i. Step 321 specifically includes: analyzing to obtain m detection distance values according to the detection brightness values and the inverse function of the brightness prediction function.
The above process actually reconstructs the quantitative relationship between the detection distance and the region brightness value during detection, using the exposure value in the detection state and the data generated in the calibration process. The preset exposure value is configured to have a linear variation and satisfies E_j = k_j · ΔE; in an embodiment of the present invention, the brightness prediction function may therefore be further expressed as:

f_i(Z_i) = Σ_j k'_j · ΔB_ij(Z_i), with k'_j = E'_j / ΔE

where E'_j is the actual exposure value of the jth light source during detection, k'_j is the actual line correlation coefficient of the jth light source during detection, and ΔE is the exposure precision value. The actual exposure value and the actual line correlation coefficient may come to differ from the preset exposure value and its line correlation coefficient through continuous adjustment during detection, and are therefore denoted by different symbols. In this way, a correspondence can be established with the line correlation coefficient as the medium, achieving brightness prediction and reverse adjustment of the light source gear.
Compared with the table look-up method provided above, fitting the brightness prediction function allows the detection distance value to be calculated quickly without traversing the data in each brightness contribution data set. Notably, the distance Z_l and the distance Z_i referred to here both represent the detection distance of the ith region, but the former specifically denotes the horizontal header of the brightness contribution data set and is limited to the discrete values l = 1 … L, whereas the latter, being obtained by function fitting, can take any value in the continuous range and is therefore not numbered against the L calibration distances.
Based on the contribution of the jth light source to the ith region, i.e. the relation ΔB_ij(Z) between the average brightness produced when the ith region is illuminated only by the jth light source and the detection distance value Z, the inverse function of the brightness prediction function of the ith region may be further expressed as:

Z_i = f_i⁻¹(B_i)

from which the detection distance value Z_i corresponding to the ith region can be solved. Of course, the above definition blurs the ith region into having a single detected brightness value. On the one hand, the detected brightness value may be obtained by collecting the brightness values of a plurality of points in the ith region and averaging them, or by collecting the brightness value of the pixel located at the center of the ith region and taking it as the overall detected brightness value of the region. On the other hand, the foregoing technical solution may also be implemented to calculate the detection distance value of a single pixel: for example, a plane coordinate system is established with one vertex of the ith region or of the detection image as the origin and the two edges adjacent to that vertex as coordinate axes, so that each internal pixel is given a coordinate (x, y); then, by the formula:

Z(x, y) = f⁻¹(B(x, y))

the detection distance value Z(x, y) of the pixel at coordinate (x, y) is obtained by calculation.
Another embodiment of the present invention provides a second example of an endoscope exposure control method, which may specifically include the following steps, as shown in fig. 7 and 10.
And step 21, driving one of the n light sources to illuminate by taking the exposure precision value as a preset exposure value, and keeping the other (n-1) light sources in an off state.
And step 22, acquiring L area calibration images under the preset L distance values.
Step 23, dividing the L area calibration images into m areas, and analyzing the brightness values of the areas to obtain m×L brightness contribution values, which form a brightness contribution data set.
And 24, iteratively driving other (n-1) light sources to independently illuminate by taking the exposure precision value as a preset exposure value, and acquiring n groups of brightness contribution data sets.
And 25, forming a brightness contribution data table according to the n groups of brightness contribution data groups.
And step 31, driving n light sources to respectively illuminate with n preset exposure values, acquiring a detection image, and dividing the detection image into m areas.
In step 321', m detection brightness values corresponding to the respective regions are obtained through calculation, and m detection distance values are obtained through analysis according to the detection brightness values and the brightness contribution data table. Step 321' may further include:

Step 3211', fitting the m regions respectively according to the brightness contribution data table to obtain m brightness prediction functions;

Step 3212', analyzing to obtain m detection distance values according to the detection brightness values and the inverse functions of the brightness prediction functions.
Step 322, obtaining m×n brightness contribution values according to the detected distance values, and generating m calibration brightness equations correspondingly.
And step 33, approximating the calibration brightness equation to a preset target brightness value, and updating and storing the calibration exposure value.
And step 34, driving the n light sources to respectively illuminate correspondingly according to the calibration exposure values, and acquiring calibration images.
In the first example of the present invention, the fitting of the m brightness prediction functions is implemented by adding steps in the calibration stage; the second example shows that the fitting can instead be placed in the detection stage, which reduces the computational load of the calibration process. In particular, in embodiments configured to obtain the detection distance value by comprehensive means, the endoscope can be set to preferentially use a distance sensor or other device to detect distance in a first state; once it is determined that the distance sensor cannot obtain a normal detection distance value, the brightness prediction function is fitted individually for all or part of the regions and used to calculate the detection distance value, thereby forming a redundant configuration of the distance detection algorithm.
Another embodiment of the present invention provides a third example of an endoscope exposure control method, which may specifically include the following steps, as shown in fig. 7 and 11.
And step 21, driving one of the n light sources to illuminate by taking the exposure precision value as a preset exposure value, and keeping the other (n-1) light sources in an off state.
And step 22, acquiring L area calibration images under the preset L distance values.
Step 23, dividing the L area calibration images into m areas, and analyzing the brightness values of the areas to obtain m×L brightness contribution values, which form a brightness contribution data set.
And 24, iteratively driving other (n-1) light sources to independently illuminate by taking the exposure precision value as a preset exposure value, and acquiring n groups of brightness contribution data sets.
And 25, forming a brightness contribution data table according to the n groups of brightness contribution data groups.
And step 31, driving n light sources to respectively illuminate with n preset exposure values, acquiring a detection image, and dividing the detection image into m areas.
In step 321', m detection brightness values corresponding to the respective regions are obtained through calculation, and m detection distance values are obtained through analysis according to the detection brightness values and the brightness contribution data table.
Step 322', finding in the brightness contribution data table the m×n brightness contribution values corresponding to the calibrated distance value closest to the detected distance value, and generating m calibration brightness equations correspondingly.
And step 33, approximating the calibration brightness equation to a preset target brightness value, and updating and storing the calibration exposure value.
And step 34, driving the n light sources to respectively illuminate correspondingly according to the calibration exposure values, and acquiring calibration images.
Wherein the calibration brightness equation is:

B_i = Σ_j (Ê_j / ΔE) · ΔB_ij(Z_i), j = 1 … n

where Ê_j is the calibration exposure value to be determined. It can be seen that, once the detection distance value Z_i of the ith region is determined, the to-be-adjusted detected brightness of the ith region characterized by the calibration brightness equation is the dependent variable; the brightness contribution values, abstracted into a brightness contribution data set (table) or fitted into a brightness prediction function, construct the operational relationship of the equation; and the calibration exposure values Ê_j to be determined are the only independent variables. Since the target brightness value is preset, the calibration exposure values that bring the to-be-adjusted detected brightness infinitely close to the target brightness value can be calculated using the calibration brightness equation.
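A minimal numerical sketch of solving the m calibration brightness equations for the n calibration exposure values (illustrative only: the patent does not prescribe a particular solver, and an ordinary least-squares solve via NumPy is our assumption):

```python
import numpy as np

def solve_calibration_exposures(contrib, target):
    """Solve the m calibration brightness equations for n exposure values.

    contrib: (m, n) array; contrib[i, j] is the brightness contribution of
             light source j to region i at the region's detected distance,
             per unit of exposure.
    target:  the preset target brightness value, shared by all regions.
    Returns n non-negative calibration exposure values bringing each
    region's predicted brightness as close as possible to the target.
    """
    b = np.full(contrib.shape[0], float(target))
    x, *_ = np.linalg.lstsq(contrib, b, rcond=None)
    return np.clip(x, 0.0, None)  # exposure values cannot be negative

# 3 regions, 2 light sources
contrib = np.array([[2.0, 0.5],
                    [1.0, 1.0],
                    [0.5, 2.0]])
exposures = solve_calibration_exposures(contrib, target=100.0)
```

With more regions than light sources (m > n) the system is over-determined, so "infinitely close" is realized here in the least-squares sense.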
Step 321 and its derived steps provide two embodiments: the brightness contribution data table and the fitted brightness prediction function. Because the brightness prediction function is generated from the existing brightness contribution data, it has a certain predictive capability and can cover detection distance values beyond the calibration range. For the table look-up method, however, when a detection distance value falls outside the calibration range, an approximate estimate can still be made under working conditions with low precision requirements by treating the out-of-range portion as equivalent to the nearest detection distance value within the calibration range, so that the subsequent generation of the calibration brightness equation is unaffected.
Specifically, a distance value d in the calibration phase always satisfies d_1 ≤ d ≤ d_L, where d_1 and d_L are the minimum and maximum calibrated detection distance values. In the detection phase, when the detection distance value d_i of the i-th region is greater than the maximum calibrated detection distance value d_L, the query and calculation of the brightness contribution value can be completed by substituting d_L for the actually measured d_i. Similarly, when d_i is less than the minimum calibrated detection distance value d_1, d_1 can be substituted for the actually measured d_i. In addition, when d_i falls between two adjacent calibrated distance values, the brightness contribution value can be obtained by weighting according to the offset, by simply averaging the brightness contribution values corresponding to the two adjacent calibrated distance values and taking that average as the brightness contribution value for d_i, or, of course, by directly selecting one of the two adjacent brightness contribution values.
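The clamping and neighbour-averaging rules described above can be sketched as a lookup helper. The table layout `table[j][l][i]` (light source j, calibrated distance index l, region i) and all names are illustrative assumptions, not structures from the patent.

```python
def contribution(table, d_cal, j, i, d):
    """Brightness contribution of light source j to region i at distance d.

    d_cal is the sorted list of calibrated distance values d_1..d_L;
    table[j][l][i] is the value measured at d_cal[l].
    """
    if d <= d_cal[0]:                 # below calibration range: clamp to minimum
        return table[j][0][i]
    if d >= d_cal[-1]:                # above calibration range: clamp to maximum
        return table[j][-1][i]
    for l, dc in enumerate(d_cal):    # exact calibrated distance: use the entry
        if d == dc:
            return table[j][l][i]
    # d falls between two adjacent calibrated distances: average the neighbours
    for l in range(len(d_cal) - 1):
        if d_cal[l] < d < d_cal[l + 1]:
            return 0.5 * (table[j][l][i] + table[j][l + 1][i])
```

A weighted (offset-proportional) interpolation could replace the plain average in the last branch, as the text also allows.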
Another embodiment of the present invention provides a specific example of the third embodiment of the endoscope exposure control method; as shown in fig. 7 and 12, the method may include the following steps.
Step 21: drive one of the n light sources to illuminate with the exposure precision value as its preset exposure value, keeping the other (n-1) light sources in an off state.
Step 22: acquire L area calibration images at the L preset distance values.
Step 23: divide each of the L area calibration images into m regions and analyze the brightness value of each region, obtaining L×m brightness contribution values that form a brightness contribution data group.
Step 24: iteratively drive the other (n-1) light sources to illuminate independently with the exposure precision value as the preset exposure value, obtaining n groups of brightness contribution data groups in total.
Step 25: form the brightness contribution data table from the n groups of brightness contribution data groups.
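Steps 21 to 25 can be sketched as the following loop. The `capture` callable stands in for the image-acquisition and per-region brightness analysis of steps 22 and 23; every name here is an assumption for illustration.

```python
def build_contribution_table(n, distances, e0, capture):
    """Build table[j][l][i]: contribution of source j to region i at distances[l].

    capture(exposures, d) returns the m per-region brightness values of one
    area calibration image taken at distance d with the given drive exposures.
    """
    table = []
    for j in range(n):                       # step 24: iterate over light sources
        exposures = [0.0] * n
        exposures[j] = e0                    # step 21: only source j on, at E_0
        data_group = []
        for d in distances:                  # step 22: L calibration distances
            regions = capture(exposures, d)  # step 23: m region brightness values
            data_group.append(list(regions))
        table.append(data_group)             # one group of L x m values
    return table                             # step 25: n groups -> data table
```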
Step 31: drive the n light sources to illuminate with their n preset exposure values respectively, acquire a detection image, and divide the detection image into m regions.
Step 321': calculate the m detection brightness values corresponding to the respective regions, and analyze them against the brightness contribution data table to obtain m detection distance values.
Step 322': look up, in the brightness contribution data table, the brightness contribution values corresponding to the calibrated distance value closest to each detection distance value, and generate the m calibration brightness equations accordingly.
Step 331': approximate each calibration brightness equation to the preset target brightness value using an optimization method, then update and store the calibration correlation coefficients.
Step 332': calculate the calibration exposure values from the calibration correlation coefficients and the exposure precision value.
Step 34: drive the n light sources to illuminate according to their respective calibration exposure values, and acquire a calibration image.
Wherein the calibration brightness equation is:

Y_i = Σ_{j=1}^{n} k_{i,j}(d_i) · a'_j · E_0

where a'_j is the calibration correlation coefficient; the calibration correlation coefficient a'_j, the calibration exposure value to be determined E'_j and the exposure precision value E_0 satisfy:

E'_j = a'_j · E_0
The third embodiment shown in fig. 7 and 11 provides a calibration brightness equation whose calibration exposure value E'_j is calculated as a whole. In embodiments where each light source is configured to take the exposure precision value E_0 as its basic exposure value and realise exposure settings as multiples of it in gear steps, however, the technical scheme provided above can directly calculate the calibration correlation coefficient a'_j corresponding to the calibration exposure value E'_j, and the gear of the corresponding light source can then be adjusted according to a'_j, improving adjustment efficiency and simplifying the algorithm steps.
Steps 331' to 332' provide a specific embodiment of step 33: the process of approximating the calibration brightness equation to the preset target brightness value is posed as an optimization problem with the target brightness value as the optimum, and the calibration exposure value E'_j or the calibration correlation coefficient a'_j is adjusted so that the detection brightness to be adjusted, represented by the calibration brightness equation, approaches the target brightness value arbitrarily closely. The optimization method may be a gradient descent method (batch or stochastic), Newton's method, a quasi-Newton method, the conjugate gradient method, a heuristic optimization method, the Lagrange multiplier method, and the like; the heuristic optimization method may specifically be simulated annealing, a genetic algorithm, an ant colony algorithm or a particle swarm algorithm, or a multi-objective optimization algorithm such as NSGA-II (Non-dominated Sorting Genetic Algorithm II with elitist strategy), MOEA/D (Multi-Objective Evolutionary Algorithm based on Decomposition) or an artificial immune algorithm. The approximation may be carried out by computing the difference between the brightness value to be adjusted, as represented by the calibration brightness equation, and the target brightness value, and minimizing that difference; other approximation calculation methods may of course be used.
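As one concrete instance of the optimization methods listed above, a plain gradient-descent sketch for the table-lookup form of the calibration brightness equation, Y_i = Σ_j k[i][j]·E[j]. The step size and iteration count are illustrative assumptions, not values from the patent.

```python
def calibrate_exposures(k, y_target, steps=2000, lr=0.01):
    """Gradient descent on sum_i (Y_i - Y_t)^2 over the exposure values E'_j.

    k is the m x n matrix of brightness contribution values for the current
    detection distances; returns the calibration exposure values.
    """
    m, n = len(k), len(k[0])
    e = [0.0] * n                                  # calibration exposure values E'_j
    for _ in range(steps):
        y = [sum(k[i][j] * e[j] for j in range(n)) for i in range(m)]
        for j in range(n):
            # d/dE'_j of sum_i (Y_i - Y_t)^2 = sum_i 2 (Y_i - Y_t) k[i][j]
            g = sum(2.0 * (y[i] - y_target) * k[i][j] for i in range(m))
            e[j] -= lr * g
    return e
```

Any of the other listed solvers (Newton, conjugate gradient, simulated annealing, ...) could replace this loop; the objective is the same.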
It will be appreciated that steps 331' and 332' are detailed implementations of step 33 and its derivations, whose purpose is to provide a specific approach to determining the calibration exposure value E'_j. They have no necessary connection with the process of generating the calibration brightness equation in step 322': step 322' can be combined with any of the variants of step 33 described above, and steps 331' to 332' can likewise be combined with any of the variants of step 32 described above. In summary, step 322' and steps 331' to 332' can be split into two separate embodiments, combined with steps of other embodiments to generate derived embodiments, or arranged within the same embodiment as described above so that the embodiment carries both specific technical effects at once.
Further, in a refined embodiment, the optimization method may be configured to calibrate according to the formula

min (Y_i − Y_t)²

where Y_i is the brightness of the i-th region represented by the calibration brightness equation and Y_t is the target brightness value. The target brightness value Y_t may specifically be set within the range of 90–120 cd/m² to give the region a good visual effect. It should be understood that the target brightness value Y_t provided in this embodiment is not limited to a fixed value: it may also be a brightness interval with a good visual effect, as above, implemented by setting an allowable error range in the solution of the optimization problem, so that the adjustment process need not converge on one exact target brightness value Y_t and still tends, in general, to yield a good display effect.
In the optimized calibration formula above, squaring serves to eliminate the negative values that the difference operation may produce and to amplify the difference; this part may be replaced by operations such as taking the absolute value. Computing the difference between the calibration brightness equation Y_i and the target brightness value Y_t and then squaring integrates the illumination influence of the plurality of light sources and yields the optimized state of the i-th region's brightness relative to the target brightness value, with the resulting calibration exposure value E'_j or calibration correlation coefficient a'_j taken as the final output; that is, the optimization method adjusted according to the formula

min (Y_i − Y_t)²

optimizes the brightness of an individual region. Of course, considering that the plurality of regions must simultaneously approach the target brightness value or lie within the target brightness range, all regions can be accumulated and considered together to optimize them jointly, i.e., as mentioned above, the calculation of the calibration exposure values E'_j or calibration correlation coefficients a'_j is completed according to the formula

min Σ_{i=1}^{m} (Y_i − Y_t)²
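Because each calibration brightness equation is linear in the calibration exposure values, the accumulated objective min Σ_i (Y_i − Y_t)² is an ordinary least-squares problem. A sketch under that assumption, using NumPy; `k` and the names are illustrative:

```python
import numpy as np

def solve_exposures(k, y_target):
    """Least-squares solve of min sum_i (sum_j k[i][j] E'_j - Y_t)^2."""
    k = np.asarray(k, dtype=float)          # m x n brightness contribution matrix
    b = np.full(k.shape[0], y_target)       # every region should reach Y_t
    e, *_ = np.linalg.lstsq(k, b, rcond=None)
    return e                                # calibration exposure values E'_j
```

For m > n (more regions than light sources) this gives exactly the accumulated-squared-error optimum that the iterative methods above approach.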
Still another embodiment of the present invention provides an endoscope exposure control method which, as shown in fig. 13 and 14, may include the following steps.
Step 21': drive the n light sources to illuminate with the exposure precision value as their preset exposure values.
Step 22': acquire L pixel calibration images at the L preset distance values.
Step 23': randomly select at least one calibration pixel in each pixel calibration image.
Step 24': analyze the brightness values of the calibration pixels and fit the relationship between those brightness values and the distance from the actual position represented by each calibration pixel to the endoscope, forming a brightness prediction relation.
Step 31': acquire a reference image, drive the n light sources to illuminate with their n preset exposure values, acquire a detection image corresponding to the reference image, and divide the detection image into m regions.
Step 3201: obtain, from the reference image, the reference distance value between the endoscope and the actual position represented by at least one reference pixel corresponding to a calibration pixel, and obtain, from the detection image, the standard brightness value of the standard pixel corresponding to that reference pixel.
Step 3202: calculate a brightness predicted value from the reference distance value and the brightness prediction relation.
Step 3203: compare the standard brightness value with the brightness predicted value to obtain an output correction factor.
Step 321'': randomly select at least one pixel in each region and analyze its brightness value, calibrate the brightness values with the output correction factor to obtain the m detection brightness values, and analyze the detection distance values of the endoscope from the detection brightness values.
Step 322: obtain the preset brightness contribution values according to the detection distance values, and generate the m calibration brightness equations accordingly.
Step 33: approximate each calibration brightness equation to the preset target brightness value, then update and store the calibration exposure values.
Step 34: drive the n light sources to illuminate according to their respective calibration exposure values, and acquire a calibration image.
Wherein the preset exposure value is configured to vary linearly and satisfies:

E_j = a_j · E_0

where E_j is the preset exposure value of the j-th light source, a_j is the line correlation coefficient of the j-th light source, and E_0 is the exposure precision value; L ≥ 2.
Steps 21' to 24' and 31' to 3203 disclose a means of calibration and detection. Compared with the solutions above, this technical scheme drives all light sources to illuminate simultaneously, takes only the L distance values as independent variables, and performs a single 'brightness-distance' fit from the pixel brightness of at least one point. A reference image and a detection image are acquired by continuous shooting; a reference pixel corresponding to the calibration pixel is selected from the reference image, its reference distance value is extracted and substituted into the brightness-distance relation to back-estimate the brightness predicted value in the ideal state; and from this ideal brightness value and the standard brightness value actually measured at the reference pixel position in the detection image, under the light-source driving environment, a correction factor is generated with which the brightness of pixels at other positions in the detection image is corrected.
This overcomes the prediction error that arises when data are substituted directly into the brightness prediction relation characterizing the brightness-distance relationship, caused by the diffuse-reflection flat plate 14 used in the calibration state differing from the actual detection environment. It can be understood that the correction factor is a dynamic value generated in real time as detection proceeds, and therefore has a better calibration effect. Of course, in some embodiments with lower accuracy requirements or more uniform detection environments, the correction factor can be locked after a single continuous shot, so that the subsequent detection process need not repeatedly generate a reference image before each detection image; subsequent detection images can be calibrated directly with the locked correction factor.
It should be noted that the pixel calibration image and the reference image are not limited to a single calibration pixel or reference pixel; there may be several, and preferably one calibration pixel or reference pixel in each of the m regions. A calibration pixel or reference pixel may be defined as a pixel in the pixel calibration image or reference image whose actual distance is known; that distance may be measured by a laser ranging sensor such as a VCSEL (Vertical-Cavity Surface-Emitting Laser) or by a ToF (Time of Flight) sensor. Taking the former as an example, when the pixel calibration image and the reference image are collected, the laser ranging sensor is triggered and forms a light spot at the measured position, so the positions corresponding to the light spot in the pixel calibration image and the reference image can be defined as the calibration pixel and the reference pixel respectively. When the detection image is collected, the laser ranging sensor is not triggered, keeping the detection image clean; this makes it convenient to obtain the actual brightness (i.e. the standard brightness value) corresponding to the reference pixel in each shot and to generate the correction factor by the steps above, thereby calibrating the other pixels in the detection image.
Specifically, define the calibration pixel as having calibration coordinates (x₀, y₀). At the L distance values d₁, d₂, …, d_L, the calibration pixel has L corresponding brightness values Y₁, Y₂, …, Y_L, so the relationship between the calibration pixel's brightness value and the distance value can be solved by curve fitting or the like. That is, the brightness prediction relation may be defined as:

Y = f(d)

Assume the reference pixel comprises a first reference pixel with first reference coordinates (x₁, y₁). When the endoscope is in a given position state, the distance between the actual position it represents and the endoscope has a first actual distance value d_{r1}. Substituting d_{r1} directly into the brightness prediction relation (or substituting after solving its inverse function d = f⁻¹(Y)) gives the corresponding first brightness predicted value Y_{p1}. Define the actual brightness of the first reference pixel as the first standard brightness value Y_{s1}; the output correction factor of the first reference pixel can then be solved. Define this output correction factor as the first correction factor c₁; specifically, it may satisfy:

c₁ = Y_{s1} / Y_{p1}
Of course, in one embodiment a single reference image may include two reference pixels; correspondingly, at the hardware level, two laser ranging sensors may be provided in the endoscope. Omitting the operation by which the other laser sensor generates the other calibration pixel, define the second reference pixel as having second reference coordinates (x₂, y₂). When the endoscope is in the same position state, the distance between the actual position it represents and the endoscope has a second actual distance value d_{r2}; substituting into the brightness prediction relation or its inverse function likewise gives the corresponding second brightness predicted value Y_{p2}. Define the actual brightness value of the second reference pixel as the second standard brightness value Y_{s2}; the second correction factor c₂ can then be solved, specifically satisfying:

c₂ = Y_{s2} / Y_{p2}

Define the brightness value of the at least one pixel selected in step 321'' as Y; the output correction factor c and the detection distance value d_i can then be configured to satisfy at least:

c = (c₁ + c₂) / 2

d_i = f⁻¹(Y / c)
Of course, the foregoing embodiments only provide a technical scheme in which the final output correction factor is obtained by averaging over a plurality of calibration pixels, reference pixels and laser ranging sensors. Those skilled in the art will understand that a weighted average may instead be adopted, giving calibration pixels and reference pixels at different positions different weights to improve the accuracy of the correction factor; this is not described again here.
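The correction-factor scheme of steps 3201 to 321'' can be sketched as follows. The ratio form c = Y_s / Y_p and the plain average are assumptions consistent with the multiplicative correction and averaging described above; `f`, `f_inv` and all names are illustrative.

```python
def output_correction_factor(f, ref_distances, standard_brightness):
    """Average the per-reference-pixel factors c_k = Y_sk / f(d_rk)."""
    factors = [ys / f(d) for d, ys in zip(ref_distances, standard_brightness)]
    return sum(factors) / len(factors)       # c = (c1 + c2 + ...) / count

def detection_distance(f_inv, pixel_brightness, c):
    """Correct a sampled pixel brightness, then invert the fitted relation."""
    return f_inv(pixel_brightness / c)       # d_i = f^-1(Y / c)
```

A weighted average (as the text suggests for positions of differing reliability) would replace the plain mean in `output_correction_factor`.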
Still another embodiment of the present invention provides an endoscope exposure control method which, as shown in fig. 15, may include the following steps.
Step 31: drive the n light sources to illuminate with their n preset exposure values respectively, acquire a detection image, and divide the detection image into m regions.
Step 32: analyze the position state of the endoscope, obtain the preset illumination radiance of each light source onto each region in the current position state, and generate the m calibration brightness equations accordingly.
Step 33: approximate each calibration brightness equation to the preset target brightness value, then update and store the calibration exposure values.
Step 34': adjust at least one of the exposure time and the exposure gain, drive the n light sources to illuminate according to their respective calibration exposure values, and acquire a calibration image.
Wherein the preset exposure value and the calibration exposure value are configured as the product of an exposure time and an exposure gain of the endoscope.
This embodiment specifically defines a means of adjusting the exposure value. Since the exposure degree of a light source can be adjusted through parameters such as exposure time and exposure gain, the preset exposure value and the calibration exposure value can be adjusted via the exposure precision value and the calibration correlation coefficient as described above, and the exposure precision value can at least be expressed as the product of exposure gain and exposure time (or at least be configured to be influenced by both, in which case, at the actual light-source control level, the exposure precision value can be expressed as

E₀ = gain × time

), with one or both of the two being adjusted.
Preferably, where the exposure time can be configured sufficiently long, the exposure gain can be configured small to reduce noise in the detection image and the calibration image. For example, with the exposure gain fixed at 1 and the exposure time corresponding to the exposure precision value configured as 1 ms, the exposure precision value may at least be expressed as:

E₀ = 1 × 1 ms, i.e. E₀ = 1 ms
After steps 31 to 33 and their derivations, the calibration exposure values corresponding to the different light sources can be calculated. Suppose the light sources comprise a first light source P₁, a second light source P₂, a third light source P₃ and a fourth light source P₄; calculating for each of the four light sources yields a first calibration exposure value E'₁, a second calibration exposure value E'₂, a third calibration exposure value E'₃ and a fourth calibration exposure value E'₄. The four calibration exposure values can then be expressed in turn as

E'₁ = a'₁ × 1 ms, E'₂ = a'₂ × 1 ms, E'₃ = a'₃ × 1 ms and E'₄ = a'₄ × 1 ms
Of course, the exposure adjustment means is not limited to the above technical scheme: the exposure degree can also be adjusted by changing the magnitude of the current flowing into the light source and/or the voltage applied across it, and high-precision adjustment can be achieved by combining the above modes; the invention is not limited in this respect. Meanwhile, this embodiment provides only one specific implementation of step 34; step 34 as provided here may naturally be applied to any of the above embodiments so that they obtain the technical effects of this embodiment, and likewise the specific schemes of the other embodiments can be combined and substituted.
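One way to realise step 34', splitting a calibration exposure value into an exposure time and an exposure gain while preferring a long time and unit gain to reduce noise, can be sketched as follows. `max_time_ms` and the millisecond unit are illustrative assumptions, not values fixed by the patent.

```python
def exposure_settings(calibration_exposure, max_time_ms=10.0):
    """Return (gain, time_ms) with gain * time_ms == calibration_exposure."""
    if calibration_exposure <= max_time_ms:
        return 1.0, calibration_exposure           # gain fixed at 1, time = E'
    # exposure time capped: make up the remainder with analog gain
    return calibration_exposure / max_time_ms, max_time_ms
```

The same split could instead be realised through drive current or voltage, as the text notes, or through a combination of all three.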
In summary, in the endoscope exposure control method provided by the invention, the detection image acquired in a single shot is divided into a plurality of regions; a calibration brightness equation is generated from the endoscope's position state and the illumination radiance of the plurality of light sources onto those regions; and the equation is then approximated to the target brightness value to obtain the calibration exposure values to be applied to the endoscope. The brightness values in the detection image are thus converted into equations with the light-source exposure values as variables, from which the exposure values meeting the target brightness value are calculated.
It should be understood that although the present description refers to embodiments, not every embodiment contains only a single technical solution, and such description is for clarity only, and those skilled in the art should make the description as a whole, and the technical solutions in the embodiments can also be combined appropriately to form other embodiments understood by those skilled in the art.
The above-listed detailed description is only a specific description of a possible embodiment of the present invention, and they are not intended to limit the scope of the present invention, and equivalent embodiments or modifications made without departing from the technical spirit of the present invention should be included in the scope of the present invention.

Claims (9)

1. An endoscope exposure control method characterized by comprising:
driving n light sources to illuminate with n preset exposure values respectively, acquiring a detection image, and dividing the detection image into m areas; n is more than or equal to 2, and m is more than or equal to 2;
analyzing the position state of the endoscope, acquiring the illumination radiation degree of each light source preset in the current position state to each area, and correspondingly generating m calibration brightness equations; wherein the calibration intensity equation comprises a calibration exposure value to be determined;
approximating the calibration brightness equation to a preset target brightness value, and updating and storing the calibration exposure value;
driving the n light sources to respectively and correspondingly illuminate according to the calibration exposure values, and acquiring calibration images;
the method specifically comprising:
calculating m detection brightness values corresponding to the respective regions, and analyzing the detection distance values of the endoscope according to the detection brightness values; wherein the detection distance value is the distance between the actual position represented by the detection image and the endoscope;
obtaining the preset brightness contribution values according to the detection distance values, and generating the m calibration brightness equations correspondingly; wherein the brightness contribution value characterizes the degree of illumination radiation of each of the light sources onto each of the regions.
2. The endoscope exposure control method according to claim 1, wherein the preset exposure value is configured to vary linearly and satisfies:

E_j = a_j · E_0

wherein E_j is the preset exposure value of the j-th light source, a_j is the line correlation coefficient of the j-th light source, and E_0 is the exposure precision value;
the method further comprising:
driving one of the n light sources to illuminate with the exposure precision value as the preset exposure value, keeping the other (n-1) light sources in an off state;
acquiring L area calibration images at L preset distance values; L ≥ 2;
dividing each of the L area calibration images into m regions and analyzing the brightness value of each region to obtain L×m brightness contribution values forming a brightness contribution data group;
iteratively driving the other (n-1) light sources to illuminate independently with the exposure precision value as the preset exposure value to obtain n groups of brightness contribution data groups;
forming a brightness contribution data table from the n groups of brightness contribution data;
the method specifically comprising:
analyzing the detection brightness values against the brightness contribution data table to obtain the m detection distance values.
3. The endoscope exposure control method according to claim 2, further comprising:
fitting the m regions respectively according to the brightness contribution data table to obtain m brightness prediction functions, each brightness prediction function characterizing the relationship between detection distance and brightness value for its region and satisfying:

f_i(d_i) = Σ_{j=1}^{n} k_{i,j}(d_i) · E_j

wherein 1 ≤ i ≤ m, 1 ≤ j ≤ n, d_i is the distance between the actual position characterized by the i-th region and the endoscope, f_i is the brightness prediction function corresponding to the i-th region, E_j is the exposure value of the j-th light source in the detection state, and k_{i,j}(d_i) is the brightness contribution value of the j-th light source to the i-th region at the distance d_i;
the method specifically comprising:
analyzing the detection brightness values against the inverse functions of the brightness prediction functions to obtain the m detection distance values.
4. The endoscope exposure control method according to claim 2, specifically comprising:
looking up, in the brightness contribution data table, the brightness contribution values corresponding to the calibrated distance value closest to each detection distance value, and generating the m calibration brightness equations correspondingly; wherein the calibration brightness equation is:

Y_i = Σ_{j=1}^{n} k_{i,j}(d_i) · E'_j

wherein E'_j is the calibration exposure value to be determined.
5. The endoscope exposure control method according to claim 4, wherein the calibration brightness equation is:

$$Y_i = \sum_{j=1}^{n} k_j \, B_{ij}(d)$$

wherein $k_j$ is the calibration correlation coefficient, and the calibration exposure value to be determined and the exposure precision value satisfy:

$$E'_j = k_j \, E_0$$

the method specifically comprises:
approximating the calibration brightness equation to a preset target brightness value by an optimization method, and updating and storing the calibration correlation coefficients;
calculating the calibration exposure values from the calibration correlation coefficients and the exposure precision value.
6. The endoscope exposure control method according to claim 5, wherein the optimization method is configured to perform the calibration according to the formula

$$\min_{k_1,\dots,k_n} \sum_{i=1}^{m} \left( Y_i - Y_T \right)^2$$

wherein $Y_T$ is the target brightness value.
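The optimization of claims 5 and 6 can be sketched as a least-squares fit of the calibration correlation coefficients. The gradient-descent solver, the step size, and the example matrix `B` are illustrative assumptions consistent with "approximating the calibration brightness equation to a preset target brightness value"; any least-squares method would do.

```python
# Sketch: solve for calibration correlation coefficients k_j so that every
# region's predicted brightness Y_i = sum_j k_j * B[i][j] approaches the
# target Y_T, minimising sum_i (Y_i - Y_T)**2 by plain gradient descent.

def calibrate(B, target, steps=500, lr=0.005):
    m, n = len(B), len(B[0])
    k = [1.0] * n  # initial guess for the calibration correlation coefficients
    for _ in range(steps):
        grad = [0.0] * n
        for i in range(m):
            residual = sum(k[j] * B[i][j] for j in range(n)) - target
            for j in range(n):
                grad[j] += 2.0 * residual * B[i][j]
        k = [k[j] - lr * grad[j] for j in range(n)]
    return k

# Two regions lit by two light sources; both regions should reach Y_T = 100.
B = [[8.0, 2.0], [2.0, 8.0]]
k = calibrate(B, 100.0)
E0 = 1.0                                       # exposure precision value
calibration_exposures = [kj * E0 for kj in k]  # E'_j = k_j * E0
```

The final line realises the claim's relation between the calibration exposure value and the exposure precision value.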
7. The endoscope exposure control method according to claim 1, wherein the preset exposure value is configured to vary linearly and satisfies:

$$E_j = a_j \, E_0$$

wherein $E_j$ is the preset exposure value of the jth light source, $a_j$ is the linear correlation coefficient of the jth light source, and $E_0$ is the exposure precision value; the method further comprises:
driving the n light sources to illuminate respectively with the exposure precision value as the preset exposure value;
acquiring L pixel calibration images at L preset distance values, where L ≥ 2;
randomly selecting at least one calibration pixel in each pixel calibration image;
analyzing the brightness values of the calibration pixels, and fitting the relation between the brightness value and the distance between the actual position characterized by the calibration pixel and the endoscope to form a brightness prediction relational expression;
the method specifically comprises:
acquiring a reference image, driving the n light sources to illuminate with the n preset exposure values respectively, acquiring the detection image corresponding to the reference image, and dividing the detection image into m regions;
acquiring a reference distance value between the endoscope and the actual position characterized by at least one reference pixel in the reference image corresponding to the calibration pixel, and acquiring a standard brightness value of a standard pixel in the detection image corresponding to the reference pixel;
calculating a brightness predicted value according to the reference distance value and the brightness prediction relational expression;
calibrating the standard brightness value against the brightness predicted value to obtain an output correction factor;
randomly selecting and analyzing the brightness value of at least one pixel in each region, and calibrating the brightness value with the output correction factor to obtain m detection brightness values.
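The output-correction step above can be sketched as a simple ratio calibration. Treating the factor as predicted brightness divided by measured brightness is an assumption; the claim does not fix the functional form, and all names here are illustrative.

```python
# Sketch: derive an output correction factor from one reference pixel and
# apply it to calibrate per-region brightness readings. The factor is
# assumed to be the ratio of predicted to measured brightness.

def output_correction_factor(predicted_brightness, standard_brightness):
    return predicted_brightness / standard_brightness

def calibrate_regions(raw_brightness_values, factor):
    return [b * factor for b in raw_brightness_values]

# The brightness prediction relational expression says the reference pixel
# should read 120; the sensor actually reads 100, so every region's
# reading is scaled by the same 1.2 factor.
factor = output_correction_factor(120.0, 100.0)
detection_values = calibrate_regions([50.0, 80.0, 110.0], factor)
```

The scaled values are the m detection brightness values that feed the distance analysis of claim 2.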
8. The endoscope exposure control method according to claim 1, wherein the number m of regions is configured to be 9, and the area of the region located at the center of the detection image is larger than the areas of the other regions in the detection image; the number n of light sources is configured to be 4;
the preset exposure value and the calibration exposure value are configured as a product of an exposure time and an exposure gain of the endoscope; the method further comprises the following steps:
and adjusting at least one of the exposure time and the exposure gain, driving the n light sources to respectively illuminate correspondingly according to the calibration exposure value, and acquiring a calibration image.
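The decomposition of an exposure value into exposure time and exposure gain in claim 8 can be sketched as follows. The caps, units, and the time-before-gain policy (longer exposure adds less noise than higher gain) are illustrative assumptions, not stated in the claim.

```python
# Sketch: decompose a target exposure value E = time * gain into the two
# hardware controls, extending exposure time first and raising gain only
# once the time cap is hit. Limits and names are illustrative assumptions.

def split_exposure(value, max_time=33.0, min_gain=1.0, max_gain=16.0):
    time = min(value / min_gain, max_time)          # fill time first
    gain = min(max(value / time, min_gain), max_gain)  # gain covers the rest
    return time, gain

t, g = split_exposure(66.0)    # time saturates at the cap, gain makes up E
t2, g2 = split_exposure(10.0)  # fits within the time cap at unity gain
```

Driving each light source with its calibration exposure value then amounts to programming the resulting (time, gain) pair before capturing the calibration image.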
9. An endoscope configured to perform exposure control according to the endoscope exposure control method of any one of claims 1 to 8.
CN202210418273.2A 2022-04-21 2022-04-21 Endoscope exposure control method and endoscope Active CN114504293B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202210418273.2A CN114504293B (en) 2022-04-21 2022-04-21 Endoscope exposure control method and endoscope
PCT/CN2023/089554 WO2023202673A1 (en) 2022-04-21 2023-04-20 Endoscope exposure control method and endoscope

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210418273.2A CN114504293B (en) 2022-04-21 2022-04-21 Endoscope exposure control method and endoscope

Publications (2)

Publication Number Publication Date
CN114504293A CN114504293A (en) 2022-05-17
CN114504293B true CN114504293B (en) 2022-07-29

Family

ID=81555280

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210418273.2A Active CN114504293B (en) 2022-04-21 2022-04-21 Endoscope exposure control method and endoscope

Country Status (2)

Country Link
CN (1) CN114504293B (en)
WO (1) WO2023202673A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114504293B (en) * 2022-04-21 2022-07-29 安翰科技(武汉)股份有限公司 Endoscope exposure control method and endoscope
CN117129481B (en) * 2023-10-27 2023-12-29 南京华视智能科技股份有限公司 Method for improving light source in detection system of lithium battery industry

Family Cites Families (10)

Publication number Priority date Publication date Assignee Title
JP2605425B2 (en) * 1989-10-19 1997-04-30 富士写真光機株式会社 Electronic endoscope imaging device
JP3096326B2 (en) * 1991-09-12 2000-10-10 オリンパス光学工業株式会社 Endoscope device
JP4218723B2 (en) * 2006-10-19 2009-02-04 ソニー株式会社 Image processing apparatus, imaging apparatus, image processing method, and program
CN101262567B (en) * 2008-04-07 2010-12-08 北京中星微电子有限公司 Automatic exposure method and device
JP5385469B2 (en) * 2011-01-20 2014-01-08 オリンパスメディカルシステムズ株式会社 Capsule endoscope
JP2014176449A (en) * 2013-03-14 2014-09-25 Panasonic Corp Endoscope
JP2018157918A (en) * 2017-03-22 2018-10-11 ソニー株式会社 Control device for surgery, control method, surgical system, and program
JP7398939B2 (en) * 2019-12-03 2023-12-15 キヤノン株式会社 Image processing device and its control method, imaging device, program, and storage medium
CN114504293B (en) * 2022-04-21 2022-07-29 安翰科技(武汉)股份有限公司 Endoscope exposure control method and endoscope
CN114795080B (en) * 2022-04-21 2024-04-09 安翰科技(武汉)股份有限公司 Endoscope exposure control method and endoscope

Also Published As

Publication number Publication date
WO2023202673A1 (en) 2023-10-26
CN114504293A (en) 2022-05-17

Similar Documents

Publication Publication Date Title
CN114504293B (en) Endoscope exposure control method and endoscope
US9588046B2 (en) Fluorescence observation apparatus
JP7076368B2 (en) Range gate type depth camera parts
CN114795080B (en) Endoscope exposure control method and endoscope
CN101902961B (en) Device, system and method for estimating the size of an object in a body lumen
JP7068487B2 (en) Electronic endoscopy system
JP7318130B2 (en) Self-adaptive adjustment method and adjustment system for brightness of projection device
WO2008008231A2 (en) Systems and methods for generating fluorescent light images
US20200400795A1 (en) Noise aware edge enhancement in a pulsed laser mapping imaging system
JP6907398B2 (en) Endoscope system
US10939856B2 (en) Processor device, endoscope system, and image processing method
US20200404130A1 (en) Laser scanning and tool tracking imaging in a light deficient environment
CN105890746B (en) Light distribution characteristic measurement device and light distribution characteristic measurement method
US20200397298A1 (en) Noise aware edge enhancement in a pulsed hyperspectral, fluorescence, and laser mapping imaging system
US8415641B2 (en) Fluorescence observation device
JPWO2020188825A1 (en) Endoscope system
WO2020064737A1 (en) A handheld imaging element with a movement sensor
JP2000139914A (en) Ultrasonograph
US10130312B2 (en) Medical imaging apparatus and method of correcting medical image data based on output display characteristics in order to minimize discrepancies between the image data and the image displayed
US11490784B2 (en) Endoscope apparatus
US8830310B2 (en) Capsule endoscope
JP7455716B2 (en) Endoscope processor and endoscope system
CN114052791A (en) Ultrasonic imaging method and system
WO2019142689A1 (en) Processor for electronic endoscope and electronic endoscope system
CN112823509A (en) Method and system for estimating exposure time of multispectral light source

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant