CN110786000B - Exposure adjusting method and device - Google Patents

Exposure adjusting method and device

Info

Publication number
CN110786000B
CN110786000B (application CN201880039804.9A)
Authority
CN
China
Prior art keywords
target
exposure
exposure time
value
real
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201880039804.9A
Other languages
Chinese (zh)
Other versions
CN110786000A (en)
Inventor
赵超 (Zhao Chao)
任伟 (Ren Wei)
常坚 (Chang Jian)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
SZ DJI Technology Co Ltd
Original Assignee
SZ DJI Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by SZ DJI Technology Co Ltd filed Critical SZ DJI Technology Co Ltd
Publication of CN110786000A publication Critical patent/CN110786000A/en
Application granted granted Critical
Publication of CN110786000B publication Critical patent/CN110786000B/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70Circuitry for compensating brightness variation in the scene
    • H04N23/73Circuitry for compensating brightness variation in the scene by influencing the exposure time

Abstract

An exposure adjustment method and apparatus. When the photographing device captures the current frame image, it does not use a fixed exposure time ratio; instead, the exposure time of the photographing device is adjusted according to a target exposure gain, a target inflection point value and a target pixel feature value. The exposure time can thus match the current application scene, so that a high-quality image is finally captured.

Description

Exposure adjusting method and device
Technical Field
The present application relates to the field of image processing, and in particular, to an exposure adjustment method and apparatus.
Background
In practice, if a photographing device does not have a High Dynamic Range (HDR) imaging function, then in a scene with a large brightness dynamic range it can capture only part of the scene, such as the high-brightness or the low-brightness portion. If the photographing device does have the HDR function, an image meeting the requirements can be captured by employing an exposure adjustment strategy.
In practice, the exposure adjustment strategy employed by a photographing device with the HDR function divides the overall exposure time into two segments: a first exposure time and a second exposure time. The ratio of the second exposure time to the overall exposure time is called the exposure time ratio, and it characterizes the brightness dynamic range of the photographing device.
In the current exposure adjustment strategy, the exposure time ratio of the photographing device is fixed. A fixed exposure time ratio means a fixed brightness dynamic range, so the device can capture a high-quality image only in the particular scene corresponding to that fixed brightness dynamic range, and not in other scenes. For example, when the brightness dynamic range of the scene is lower than the fixed range, the captured image loses contour detail; when it is higher than the fixed range, the captured image contains a noticeably overexposed area.
Disclosure of Invention
The application discloses an exposure adjusting method and device, which aim to improve image shooting quality by dynamically adjusting the exposure time of a shooting device.
In one example, the application discloses an exposure adjustment method applied to a shooting device, comprising the following steps:
acquiring a target exposure gain;
acquiring a target inflection value by using the target exposure gain;
determining a target pixel characteristic value according to the pixel characteristic information of at least one frame of shot image;
and adjusting the exposure time of the shooting device according to the target exposure gain, the target inflection point value and the target pixel characteristic value.
As one embodiment, the acquiring a target inflection point value using the target exposure gain includes:
searching an inflection point value corresponding to the target exposure gain in the established mapping relation between the exposure gain and the inflection point value;
and determining the found inflection point value as the target inflection point value.
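The lookup described above can be sketched as a small table search. The gain values, knee values, and the use of linear interpolation between table entries are illustrative assumptions, not taken from the patent.

```python
# Hypothetical gain -> knee-point lookup table, sorted by gain.
# Table values and interpolation choice are assumptions for illustration.
KNEE_BY_GAIN = [
    (1.0, 200),
    (2.0, 192),
    (4.0, 176),
    (8.0, 160),
]

def target_knee(gain: float) -> float:
    """Look up (and linearly interpolate) the knee value for a gain."""
    if gain <= KNEE_BY_GAIN[0][0]:
        return KNEE_BY_GAIN[0][1]
    for (g0, k0), (g1, k1) in zip(KNEE_BY_GAIN, KNEE_BY_GAIN[1:]):
        if gain <= g1:
            t = (gain - g0) / (g1 - g0)
            return k0 + t * (k1 - k0)  # interpolate between table entries
    return KNEE_BY_GAIN[-1][1]         # clamp beyond the last entry
```

A real implementation might equally use a dense per-gain table with no interpolation; the mapping itself is what the embodiment requires.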
As an embodiment, the determining a target pixel feature value according to pixel feature information of at least one captured frame of image includes:
acquiring pixel characteristic information of at least one shot frame of image;
determining a state code grade for representing a target pixel characteristic value according to the pixel characteristic information;
and determining the characteristic value of the target pixel according to the state code grade.
As one embodiment, the determining the target pixel characteristic value according to the status code level includes:
searching a pixel characteristic value corresponding to the state code grade in the established mapping relation between the state code grade and the pixel characteristic value;
and determining the searched pixel characteristic value as the target pixel characteristic value.
As an embodiment, the target pixel characteristic value is a maximum gray value.
As one embodiment, the pixel characteristic information includes at least: a pixel histogram of gray levels;
the determining the state code level for characterizing the target pixel feature value according to the pixel feature information comprises:
determining a pixel gray maximum value in a pixel histogram of the gray;
determining, among the divided gray segments, the target gray segment corresponding to the maximum pixel gray value;
counting the number N of pixels in the captured at least one frame of image whose gray value equals the maximum pixel gray value;
and determining the state code grade according to the target gray scale segment and the N.
As an embodiment, the adjusting the exposure time of the camera according to the target exposure gain, the target inflection point value, and the target pixel feature value includes:
calculating the exposure time ratio to be exposed according to the target exposure gain, the target inflection point value and the target pixel characteristic value;
and adjusting the exposure time according to the exposure time ratio.
As an example, the exposure time includes a first exposure time and a second exposure time;
the exposure time ratio refers to the ratio of the second exposure time to the total exposure time.
As an embodiment, the adjusting the exposure time according to the exposure time ratio includes:
and adjusting the second period of exposure time in the exposure time according to the exposure time ratio.
As an embodiment, the calculating the exposure time ratio to be exposed according to the target exposure gain, the target inflection point value, and the target pixel feature value includes:
determining a second exposure time in the exposure time according to the target exposure gain, the target pixel characteristic value and the target inflection point value;
and calculating the exposure time ratio according to the total exposure time and the second exposure time.
As an embodiment, the adjusting the exposure time according to the target exposure gain, the target inflection point value and the target pixel feature value includes:
calculating the second exposure time according to the target exposure gain, the target inflection point value and the target pixel feature value;
and determining the calculated value as the second exposure time within the exposure time.
As an embodiment, the determining the second exposure time of the exposure times according to the target exposure gain, the target inflection point value and the target pixel feature value comprises:
calculating the actual brightness REAL_DST corresponding to the exposure according to the target exposure gain;
and calculating the second exposure time by using the REAL_DST, the target pixel feature value and the target inflection point value.
As an embodiment, the calculating the actual brightness REAL_DST corresponding to the exposure according to the target exposure gain includes:
determining the post-gain real brightness REAL_SRC0 according to the exposure information corresponding to the captured at least one frame of image;
and calculating the actual brightness REAL_DST corresponding to the exposure by using the REAL_SRC0 and the target exposure gain.
As an embodiment, the exposure information includes at least: the pixel feature maximum value MAX_SRC, the inflection point value Knee_SRC, the exposure time T0_SRC and the second exposure time T1_SRC of the previous frame image;
the determining the post-gain real brightness REAL_SRC0 according to the exposure information corresponding to the captured at least one frame of image includes:
calculating a difference D1 between the MAX_SRC and the Knee_SRC;
and calculating the REAL_SRC0 according to the D1, the T1_SRC and the T0_SRC.
As an embodiment, the calculating the actual brightness REAL_DST corresponding to the exposure by using the REAL_SRC0 and the target exposure gain includes:
calculating the gain-adjusted real brightness REAL_SRC1 corresponding to the exposure information by using the REAL_SRC0 and the exposure gain Gain_SRC corresponding to the captured at least one frame of image;
and calculating the REAL_DST according to the REAL_SRC1 and the target exposure gain.
As one embodiment, the calculating the REAL_DST according to the REAL_SRC1 and the target exposure gain includes:
calculating a time-associated target real brightness REAL_DST0 according to the REAL_SRC1, the T0_SRC and the exposure time T0_DST to be used for the exposure;
and calculating the REAL_DST according to the REAL_DST0 and the target exposure gain.
As one embodiment, the calculating the second exposure time by using the REAL_DST, the target pixel feature value and the target inflection point value includes:
calculating a difference D2 between the target pixel feature value and the target inflection point value;
and calculating the second exposure time according to the D2, the exposure time and the REAL_DST.
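Chaining the steps above (REAL_SRC0 → REAL_SRC1 → REAL_DST0 → REAL_DST → T1_DST) gives the sketch below. The patent's exact formulas are reproduced only as images, so every arithmetic step here (the scalings by time and by gain) is a plausible reading of the description, not the patent's actual formula.

```python
# Hypothetical end-to-end chain for the second exposure time T1_DST.
# All formulas are assumptions consistent with the described steps.
def second_exposure_time(max_src, knee_src, t0_src, t1_src, gain_src,
                         max_dst, knee_dst, t0_dst, gain_dst):
    d1 = max_src - knee_src                  # D1: previous-frame excess over knee
    real_src0 = d1 * t0_src / t1_src         # post-gain real brightness
    real_src1 = real_src0 / gain_src         # remove the previous frame's gain
    real_dst0 = real_src1 * t0_dst / t0_src  # rescale to the new total time
    real_dst = real_dst0 * gain_dst          # apply the target exposure gain
    d2 = max_dst - knee_dst                  # D2: target excess over target knee
    return d2 * t0_dst / real_dst            # T1_DST
```

With identical source and target parameters the chain reproduces the previous frame's second exposure time, which is the self-consistency one would expect of any such formula.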
In one example, the present application discloses a photographing apparatus comprising:
a computer-readable storage medium having stored thereon a computer program;
a processor for reading the computer program and adjusting the exposure time of the photographing apparatus by implementing the method as described above by executing the computer program.
In one example, the present application discloses a photographing apparatus comprising:
a computer-readable storage medium having stored thereon a computer program;
a processor, configured to read the computer program and, by executing it, obtain adjustment parameters related to the exposure time T0_DST to be used, and adjust the second exposure time T1_DST within the exposure time T0_DST according to the obtained adjustment parameters and a specified formula;
wherein the adjustment parameters include at least: the target exposure gain Gain_DST, the target inflection point value Knee_DST, the target pixel feature value MAX_DST, and the pixel feature maximum value MAX_SRC, inflection point value Knee_SRC, exposure time T0_SRC and ratio f_SRC of the second exposure time T1_SRC to the exposure time T0_SRC of the previous frame image;
the specified formula is:
[Formula image BDA0002317682200000041 in the original publication]
or:
[Formula image BDA0002317682200000042 in the original publication]
where f_DST is the ratio of T1_DST to the exposure time T0_DST.
According to the technical solution above, when the photographing device captures the current frame image, it does not use a fixed exposure time ratio; instead, the exposure time of the photographing device is adjusted according to the target exposure gain, the target inflection point value and the target pixel feature value. This ensures that the exposure time matches the current application scene, so that a high-quality image is finally captured.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. It is apparent that the drawings described below show only some embodiments of the present application, and that those skilled in the art can obtain other drawings from them without inventive effort.
Fig. 1 is a flowchart illustrating an example of an exposure adjustment method provided in embodiment 1 of the present application;
fig. 2 is a flowchart illustrating an example of an exposure adjustment method according to embodiment 2 of the present application;
FIG. 3 is a flowchart of step 204 provided in embodiment 2 of the present application;
fig. 4 is a flowchart illustrating an example of an exposure adjustment method provided in embodiment 3 of the present application;
Fig. 5 is a flowchart of an embodiment of determining the second exposure time provided herein;
FIG. 6 is a schematic illustration of an exposure profile provided herein;
fig. 7 is a flowchart illustrating an example of an exposure adjustment method according to embodiment 4 of the present application;
fig. 8 is a diagram illustrating the structure of the apparatus according to the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Example 1:
referring to fig. 1, fig. 1 is a flowchart illustrating an exposure adjustment method provided in embodiment 1 of the present application. The flow is applied to a shooting device. The shooting device can be applied to shooting devices in different fields such as unmanned aerial vehicles, robots, unmanned driving and security monitoring. Use to be applied to the unmanned aerial vehicle field of taking photo by plane as an example, unmanned aerial vehicle system of taking photo by plane includes aircraft side and remote controller side, and the shooting device of aircraft side can be applied to in the flow shown in figure 1, also can be applied to the shooting device of remote controller side, and perhaps, some steps are applied to the shooting device of aircraft side in the flow shown in figure 1, and the shooting device etc. of remote controller side are applied to in another part step, and this application does not do specifically and restricts.
Specifically, in one example, the shooting device herein may be a video camera, a still camera, or the like having a camera.
As shown in fig. 1, the process may include the following steps:
step 101, obtaining a target exposure gain.
In this application, when the image capturing device is about to capture an image, the exposure gain required for capturing the image is obtained first, where the exposure gain is the target exposure gain in step 101. For convenience of description, the present application refers to an image to be photographed as a current frame image.
As an embodiment, in step 101, acquiring the target exposure gain may include: receiving an externally input target exposure gain; alternatively, the target exposure gain is read from a specified storage medium. The present application does not specify how the target exposure gain is to be obtained.
And 102, acquiring a target inflection value by using the target exposure gain.
Here, the target inflection point value is used to suppress highlight portions of the image and reduce the processing load on the back end. In a specific implementation, the photographing device goes through two exposure segments when capturing the current frame image: a first exposure time and a second exposure time. Within the first exposure time, the photographing device forcibly sets the brightness of any pixel whose brightness exceeds the target inflection point value (Knee Point) to that inflection point value. Because such pixels are clamped to the inflection point value during the first exposure time, the highlight portions of the finally captured current frame image are suppressed, avoiding extra processing burden at the back end.
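The clamping behavior described above amounts to a per-pixel minimum against the knee value. The sketch below is illustrative only, with made-up pixel and knee values.

```python
# Illustrative: during the first exposure segment, any pixel whose
# brightness exceeds the knee point is forced down to the knee point.
def clamp_to_knee(pixels, knee):
    """Clamp every pixel brightness to at most the knee-point value."""
    return [min(p, knee) for p in pixels]
```

For a frame stored as a NumPy array, the same operation would be `np.minimum(frame, knee)`.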
In a specific implementation, there are many implementations of obtaining the target inflection value by using the target exposure gain in step 102, and this application describes, by way of example, one of the implementations:
in one implementation, the mapping relationship between the exposure gain and the inflection point value needs to be established in advance, where the mapping relationship between the exposure gain and the inflection point value can be established in accordance with the purpose of optimizing the image quality. The image quality is optimal, the corner value under each exposure gain can reach the optimal value, and flicker and overexposure caused by corner adjustment are avoided.
In one implementation, based on the mapping relationship between the exposure gain and the inflection point, the obtaining the target inflection point using the target exposure gain in step 102 may include:
searching an inflection point value corresponding to the target exposure gain in the established mapping relation between the exposure gain and the inflection point value;
and determining the found inflection point value as the target inflection point value.
Step 103, determining a target pixel characteristic value according to the pixel characteristic information of the at least one frame of photographed image.
As an example, the pixel feature information here may be feature information related to gray scale. Accordingly, the target pixel characteristic value here may be the maximum gray value.
And 104, adjusting the exposure time of the shooting device according to the target exposure gain, the target inflection point value and the target pixel characteristic value.
Specifically, adjusting the exposure time of the photographing device may mean adjusting the second exposure time within the exposure time used for capturing the current frame image, which is described in detail below and not repeated here.
Thus, the flow shown in fig. 1 is completed.
As can be seen from the flow shown in fig. 1, in the present application, when the shooting device shoots the current frame image, the shooting device does not shoot the current frame image by using the fixed exposure time ratio, but adjusts the exposure time of the shooting device according to the target exposure gain, the target inflection point value, and the target pixel feature value, which can ensure that the exposure time of the shooting device matches with the current application scene, and finally shoots a high-quality image.
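The four steps of fig. 1 can be strung together as follows. The knee lookup, the histogram statistic, and the final ratio formula are hypothetical stand-ins for the embodiments detailed later, not the patent's actual computations.

```python
# End-to-end sketch of steps 101-104; every helper is an illustrative
# assumption.
def lookup_knee(gain):
    # step 102: assumed gain -> knee mapping (illustrative values)
    return 200 if gain < 4 else 176

def target_pixel_feature(histogram):
    # step 103: assumed stand-in, the brightest occupied gray level
    return max(level for level, count in enumerate(histogram) if count > 0)

def adjust_exposure(gain, histogram, t0):
    # steps 101-104: derive the new second exposure time T1 from the
    # target gain, the knee value and the pixel feature value
    knee = lookup_knee(gain)
    max_val = target_pixel_feature(histogram)
    ratio = max(0.0, min(1.0, (max_val - knee) / 255))  # exposure time ratio
    return ratio * t0
```

The point of the sketch is the data flow, gain → knee → feature value → second exposure time, which is exactly the ordering of steps 101 to 104.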
It should be noted that, as an embodiment, the brightness of the pixel point mentioned above may be represented by a gray value, a voltage value, and the like of the pixel point, and the application is not particularly limited.
Embodiment 1 is described above.
Example 2:
referring to fig. 2, fig. 2 is a flowchart illustrating an exposure adjustment method according to embodiment 2 of the present application. The flow is applied to a shooting device. The photographing device is as described in the embodiments, and is not described in detail.
As shown in fig. 2, the process may include the following steps:
step 201 and step 202 are similar to step 101 and step 102 in embodiment 1, respectively, and are not described again.
Step 203, acquiring pixel characteristic information of at least one frame of photographed image.
In an example, this embodiment 2 takes the example of acquiring the pixel feature information of a captured frame image (compared with the current frame image, referred to as the previous frame image).
And step 204, determining the level of the state code for representing the characteristic value of the target pixel according to the pixel characteristic information.
In one example, the gray value parameter of the image can represent the brightness level of the image, and based on this, the pixel feature information obtained in step 203 may be a pixel histogram of gray levels as one embodiment.
Taking the pixel histogram with the pixel characteristic information obtained in step 203 as the gray level as an example, in step 204, the determining the state code level for representing the target pixel characteristic value according to the pixel characteristic information may include the process shown in fig. 3, which is described below specifically and will not be described herein again.
Step 205, determining the characteristic value of the target pixel according to the status code level.
In a specific implementation, there are many implementation manners for determining the target pixel feature value according to the state code level in step 205, and this application includes one of the implementation manners:
in one implementation, the mapping relationship between the status code level and the pixel characteristic value needs to be established in advance. Here, the mapping relationship between the status code level and the pixel feature value is established for the purpose of suppressing and boosting the scene. Based on the mapping relationship between the status code level and the pixel feature value, the determining the target pixel feature value according to the status code level may include: and searching a pixel characteristic value corresponding to the state code grade in the established mapping relation between the state code grade and the pixel characteristic value, and determining the searched pixel characteristic value as the target pixel characteristic value.
The above steps 203 to 205 are an implementation manner of determining the target pixel feature value according to the pixel feature information of the at least one captured frame image in the above step 103.
Step 206 is similar to step 104 in embodiment 1 and will not be described again.
Thus, the flow shown in fig. 2 is completed.
Through the process shown in fig. 2, in the present application, when the shooting device shoots the current frame image, the current frame image is not shot by using the fixed exposure time ratio, but the exposure time of the shooting device is adjusted according to the target exposure gain, the target inflection point value and the target pixel characteristic value, which can ensure that the exposure time of the shooting device is matched with the current application scene, and finally a high-quality image is shot.
In this embodiment 2, one implementation manner of determining the status code level used for characterizing the target pixel feature value according to the pixel feature information in step 204 is shown in fig. 3:
referring to fig. 3, fig. 3 is a flowchart of step 204 implementation provided in embodiment 2 of the present application. The above-mentioned previous frame image is taken as an example of the process, and the pixel feature information of the previous frame image at least includes a pixel histogram of gray scale. Here, the pixel histogram of the gray level is used to represent the distribution of the gray level value of each pixel in the previous frame image.
As shown in fig. 3, the process may include the following steps:
step 301, determining a maximum value of the pixel gray scale in the pixel histogram of the gray scale, and determining a target maximum value segment of the pixel gray scale from the divided maximum value segments of the gray scale.
In one example, each divided gray maximum value segment is divided in advance according to actual requirements, such as the following: three segments of [ 0-224 ], (224-255) and [255 ].
In step 302, a target gray segment is determined among the divided gray segments.
In one example, each divided gray segment is divided in advance according to actual needs, and as one embodiment this division may differ from the gray maximum-value segments described above. For example, 8 gray segments are divided from the gray range 0 to 255, each containing 32 gray values: the first segment is [0-31], the second [32-63], the third [64-95], the fourth [96-127], and so on, with the eighth segment being [224-255]. It should be noted that in this embodiment the specific division of the gray segments may be set according to actual needs; this is merely an exemplary description.
In this step 302, as an embodiment, the target gray scale segment may include at least two gray scale segments.
Here, as an embodiment, each of the gray scale segments included in the target gray scale segment in the present step 302 may be a gray scale segment which is continuous with each other and is closest to the maximum value of the gray scale.
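With the eight 32-value segments described above, the segment containing a given gray value can be computed by integer division; a trivial illustrative helper:

```python
# Illustrative: map a gray value in 0-255 to the 1-based index of its
# 32-value gray segment (eight segments total).
def gray_segment(gray):
    return gray // 32 + 1
```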
Step 303, counting the number of pixels with gray values located in the target gray segment from the at least one captured frame of image.
Here, the at least one captured frame image may be the previous frame image described above.
Step 304, determining the status code level according to the target gray maximum value segment and the number of pixels in the target gray segment.
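The mapping of step 304 is enumerated at length in the paragraphs that follow; the sketch below compresses it into one function. Only the variant of the clipped case where the count of 255-valued pixels is below thresh is modeled, and the thresholds are illustrative placeholders, not the patent's actual values.

```python
# Hypothetical, compressed status-code mapping. n_mid / n_high are the
# pixel counts in the two target gray segments (e.g. (192-224] and
# (224-255]); positive levels mean "boost the current frame", negative
# levels mean "suppress it". Thresholds are illustrative.
def status_code(max_gray, n_mid, n_high, thresh_mid=100, thresh_high=100):
    if max_gray <= 224:                      # dim frame: boost
        level = 5
        if n_mid > thresh_mid:
            level -= 1
        if n_high > thresh_high:
            level -= 2
        return level                         # 5, 4, 3 or 2
    if max_gray < 255:                       # bright but not clipped
        return 1 if n_high <= thresh_high else 0
    level = -1                               # clipped highlights: suppress
    if n_mid > thresh_mid:
        level -= 1
    if n_high > thresh_high:
        level -= 3
    return level                             # -1, -2, -4 or -5
```

The additive decrements reproduce the level patterns described below (5/4/3/2 and -1/-2/-4/-5), but a table-driven implementation with per-case thresholds would match the embodiment more literally.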
In this embodiment, a mapping relationship among the gray scale segments, the number of pixels, and the level of the state code is established in advance. The following describes the mapping relationship by way of example:
if the maximum value of the pixel gray levels is within [ 0-224 ] and the target gray level is segmented into (160-192 ] (192-224), when the number of pixels with gray levels within (160-192) in the image of the previous frame is smaller than a first threshold value and the number of pixels with gray levels within (192-224) is smaller than a second threshold value, the state code level is determined to be 5, which indicates that the brightness of the current frame needs to be enhanced, and the enhancement amplitude is related to the state code level, similarly, when the number of pixels with gray levels within (160-192) is larger than a third threshold value and the number of pixels with gray levels within (192-224) is smaller than a fourth threshold value, the state code level is determined to be 4, which indicates that the brightness of the current frame needs to be enhanced, but the enhancement degree is smaller than the degree when the state code level is 5, and so on, when the number of pixels with gray levels within (160-192) is smaller than a fifth threshold value, And determining the state code level to be 3 when the number of the pixels with the gray values (192-224) is greater than a sixth threshold, and determining the state code level to be 2 when the number of the pixels with the gray values (160-192) is greater than a seventh threshold and the number of the pixels with the gray values (192-224) is greater than an eighth threshold, wherein the smaller the state code level is, the smaller the degree to which the brightness of the current frame needs to be enhanced.
If the maximum pixel gray value lies in (224-255), the target gray segments are (192-224] and (224-255). If the number of pixels in the previous frame image with gray values in (192-224] is smaller than a ninth threshold and the number in (224-255) is smaller than a tenth threshold, the status code level is 1, indicating that the brightness of the current frame needs only slight enhancement; the enhancement amplitude is related to the status code level. Likewise, if the number of pixels in (192-224] is greater than an eleventh threshold and the number in (224-255) is smaller than a twelfth threshold, the status code level is also 1. If the number of pixels in (192-224] is smaller than a thirteenth threshold and the number in (224-255) is greater than a fourteenth threshold, the status code level is 0, indicating that the brightness of the current frame need not be enhanced; and if the number in (192-224] is greater than a fifteenth threshold and the number in (224-255) is greater than a sixteenth threshold, the status code level is likewise 0.
If the maximum pixel gray value is 255, the target gray segments are (192-224] and (224-255), and the counted number of pixels with a gray value of 255 is smaller than a preset threshold (thresh), then:
if the number of pixels with gray values in (192-224] is smaller than a seventeenth threshold and the number in (224-255) is smaller than an eighteenth threshold, the status code level is -1, indicating that the brightness of the current frame needs to be suppressed; the degree of suppression is related to the status code level. If the number in (192-224] is greater than a nineteenth threshold and the number in (224-255) is smaller than a twentieth threshold, the status code level is -2, indicating a greater degree of suppression. By analogy, if the number in (192-224] is smaller than a twenty-first threshold and the number in (224-255) is greater than a twenty-second threshold, the status code level is -4; and if the number in (192-224] is greater than a twenty-third threshold and the number in (224-255) is greater than a twenty-fourth threshold, the status code level is -5. The larger the absolute value of the status code level, the more the brightness of the current frame is suppressed. As one embodiment, the seventeenth threshold may be equal to or different from the ninth threshold; similarly, the eighteenth threshold may be equal to or different from the tenth, the nineteenth from the eleventh, the twentieth from the twelfth, the twenty-first from the thirteenth, the twenty-second from the fourteenth, the twenty-third from the fifteenth, and the twenty-fourth from the sixteenth.
If the maximum value of the pixel gray is 255, the target gray segments are (192-224) and (224-255), and the counted number of pixels with a gray value of 255 is greater than the preset threshold, then:
If the number of pixels with gray scale values in (192-224) is less than a twenty-fifth threshold and the number of pixels with gray scale values in (224-255) is less than a twenty-sixth threshold, the state code level is determined to be -3, indicating that the brightness of the current frame needs to be suppressed, the degree of suppression being related to the state code level. If the number of pixels with gray scale values in (192-224) is greater than a twenty-seventh threshold and the number of pixels with gray scale values in (224-255) is less than a twenty-eighth threshold, the state code level is determined to be -4, indicating that the brightness of the current frame needs to be suppressed to a greater degree; and so on. If the number of pixels with gray scale values in (192-224) is less than a twenty-ninth threshold and the number of pixels with gray scale values in (224-255) is greater than a thirtieth threshold, the state code level is determined to be -5; and if the number of pixels with gray scale values in (192-224) is greater than a thirty-first threshold and the number of pixels with gray scale values in (224-255) is greater than a thirty-second threshold, the state code level is determined to be -6. The degree to which the brightness of the current frame needs to be suppressed increases as the absolute value of the state code level increases. As an embodiment, the twenty-fifth threshold may be equal to or different from the ninth threshold; similarly, the twenty-sixth threshold may be equal to or different from the tenth threshold, the twenty-seventh threshold may be equal to or different from the eleventh threshold, the twenty-eighth threshold may be equal to or different from the twelfth threshold, the twenty-ninth threshold may be equal to or different from the thirteenth threshold, the thirtieth threshold may be equal to or different from the fourteenth threshold, the thirty-first threshold may be equal to or different from the fifteenth threshold, and the thirty-second threshold may be equal to or different from the sixteenth threshold.
Thus, based on the mapping relationship, in step 304 the state code level is readily determined from the target gray maximum value segment and the number of pixels located in the target gray segment, in the manner described above, which is not repeated here.
The flow shown in fig. 3 is completed.
The determination of the state code level for characterizing the target pixel feature value according to the pixel feature information in step 204 is realized by the flowchart shown in fig. 3. It should be noted that the flow shown in fig. 3 is only one implementation manner of determining the level of the state code used for characterizing the target pixel feature value according to the pixel feature information in step 204, and is not limited thereto.
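To make the branch structure of the fig. 3 flow concrete, the sketch below maps histogram counts in the two bright gray segments to a negative state code level. The segment boundaries (192-224) and (224-255) come from the text; the threshold values and the exact level table here are hypothetical placeholders for illustration, not the patent's actual thresholds.

```python
def state_code_level(histogram, thr_mid=2000, thr_high=2000):
    """Illustrative state-code level from a 256-bin gray histogram.

    histogram: list of 256 pixel counts for an 8-bit image.
    thr_mid, thr_high: placeholder thresholds for the two bright segments.
    """
    n_mid = sum(histogram[192:224])    # pixels in gray segment (192-224)
    n_high = sum(histogram[224:256])   # pixels in gray segment (224-255)
    # The more pixels crowd the bright segments, the more the frame's
    # brightness must be suppressed (larger absolute level).
    if n_mid > thr_mid and n_high > thr_high:
        return -5
    if n_high > thr_high:
        return -4
    if n_mid > thr_mid:
        return -2
    return -1
```

In the full scheme the returned level would then index the established mapping between state code levels and pixel feature values.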
Embodiment 2 is described above.
Example 3:
referring to fig. 4, fig. 4 is a flowchart illustrating an exposure adjustment method provided in embodiment 3 of the present application. The flow is applied to a shooting device. The photographing device is as described in the embodiments, and is not described in detail.
As shown in fig. 4, the process may include the following steps:
steps 401 to 403 are similar to steps 101 to 103 in embodiment 1, respectively, and are not described again.
Step 404, calculating an exposure time ratio according to the target exposure gain, the target inflection point value and the target pixel feature value.
The exposure time ratio here refers to the ratio of the second segment of exposure time to the total exposure time of the current frame image.
Step 405, adjusting a second period of exposure time in the exposure time according to the exposure time ratio.
Once the exposure time ratio is fixed, the second segment of exposure time can be determined from the total exposure time according to the exposure time ratio.
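Step 405 in code form is a one-line sketch: once the ratio is known, the second segment is simply the ratio times the total exposure time.

```python
def second_segment_time(t0, ratio):
    """Second segment of exposure time carved out of the total exposure time t0."""
    return ratio * t0
```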
In one example, the above steps 404 to 405 are one implementation manner of adjusting the exposure time of the camera according to the target exposure gain, the target inflection point value, and the target pixel feature value in step 104.
The flow shown in fig. 4 is completed.
In this embodiment, in step 404, calculating the exposure time ratio according to the target exposure gain, the target inflection point value and the target pixel feature value may include: determining the second segment of exposure time in the exposure time according to the target exposure gain, the target pixel characteristic value and the target inflection point value; and calculating the exposure time ratio according to the exposure time and the second segment of exposure time.
In one example, the determining the second exposure time in the exposure time according to the target exposure gain, the target inflection point value, and the target pixel feature value may include the process shown in fig. 5:
step 501, calculating the actual brightness REAL _ DST corresponding to the exposure according to the target exposure gain.
In one example, calculating the REAL brightness REAL _ DST corresponding to the exposure according to the target exposure gain may include: determining the REAL brightness REAL _ SRC0 after the corresponding gain according to the exposure information corresponding to the shot at least one frame of image, and calculating the REAL brightness REAL _ DST corresponding to the exposure by using the REAL _ SRC0 and the target exposure gain.
As an embodiment, determining the corresponding post-gain REAL brightness REAL _ SRC0 according to the exposure information corresponding to the captured at least one frame of image may include: calculating a difference D1 between the MAX _ SRC and the Knee _ SRC, and calculating the REAL _ SRC0 according to the D1, the T1_ SRC, and the T0_ SRC. The following equation 1 shows the calculation equation of REAL _ SRC 0:
Figure BDA0002317682200000111
wherein, MAX _ SRC is the maximum pixel feature value of the previous frame of image, Knee _ SRC is the corner value of the previous frame of image, exposure time T0_ SRC is the exposure time of the previous frame of image, and T1_ SRC is the second segment of exposure time in exposure time T0_ SRC.
As an embodiment, the calculating of the real brightness REAL_DST corresponding to the exposure by using REAL_SRC0 and the target exposure gain may include: calculating the gain-normalized real brightness REAL_SRC1 corresponding to the exposure information by using the REAL_SRC0 and the exposure gain Gain_SRC corresponding to at least one captured frame of image, and calculating REAL_DST according to REAL_SRC1 and the target exposure gain.
In an example, the gain-normalized real brightness REAL_SRC1 corresponding to the exposure information, obtained by using REAL_SRC0 and the exposure gain Gain_SRC corresponding to at least one captured frame of image, can be calculated by equation 2:
REAL_SRC1 = REAL_SRC0 / Gain_SRC (equation 2)
wherein Gain_SRC is the exposure gain of the previous frame of image.
In one example, the calculating of REAL_DST according to REAL_SRC1 and the target exposure gain may include: calculating a time-associated target real brightness REAL_DST0 according to the REAL_SRC1, the T0_SRC and the exposure time T0_DST to be exposed; and calculating REAL_DST according to REAL_DST0 and the target exposure gain.
As an example, REAL_DST0 may be calculated by the following equation 3:
REAL_DST0 = REAL_SRC1 × T0_DST / T0_SRC (equation 3)
where T0_DST is the exposure time of the current frame image.
As an example, REAL_DST may be calculated from REAL_DST0 and the target exposure gain by the following equation 4:
REAL_DST = REAL_DST0 × Gain_DST (equation 4)
where Gain_DST is the target exposure gain described above.
Step 502, calculating the second segment of exposure time by using the REAL_DST, the target pixel characteristic value and the target inflection point value.
In one example, the step 502 of calculating the second segment of exposure time using the REAL_DST, the target pixel feature value and the target inflection point value may include:
calculating a difference value D2 between the target pixel characteristic value and the target inflection point value;
and calculating the second segment of exposure time according to the D2, the exposure time and the REAL_DST.
Here, as an example, the second segment of exposure time may be calculated by the following equation 5:
T1_DST = D2 × T0_DST / REAL_DST = (MAX_DST - Knee_DST) × T0_DST / REAL_DST (equation 5)
wherein T1_DST is the second segment of exposure time of the current frame image, MAX_DST is the target pixel characteristic value, and Knee_DST is the target inflection point value.
For ease of understanding, fig. 6 shows the above-described respective parameters relating to the second exposure time of the current frame image.
The determination of the second segment of exposure time can be realized by the flow shown in fig. 5. Based on the second segment of exposure time determined in fig. 5, the exposure time ratio in step 405 can be calculated according to the following equation 6:
f_DST = T1_DST / T0_DST (equation 6)
based on the determined exposure time ratio, in this embodiment, the second exposure time of the current frame image may be adjusted according to the determined exposure time ratio.
Embodiment 3 was described above.
Example 4:
referring to fig. 7, fig. 7 is a flowchart illustrating an exposure adjustment method according to embodiment 4 of the present application. The flow is applied to a shooting device. The photographing device is as described in the embodiments, and is not described in detail.
As shown in fig. 7, the process may include the following steps:
steps 701 to 703 are similar to steps 101 to 103 in embodiment 1, respectively, and are not described again.
Step 704, calculating a second exposure time according to the target exposure gain, the target inflection point value and the target pixel characteristic value.
Here, the implementation of calculating the second exposure time is as described in embodiment 3.
Step 705, determining the calculated second exposure time as the second exposure time in the exposure time of the current frame image.
The flow shown in fig. 7 is completed.
The second segment of the entire exposure time can be calculated for the current frame image by the flow shown in fig. 7.
Example 4 was described above.
The following describes the apparatus provided in the present application:
referring to fig. 8, fig. 8 is a diagram illustrating the structure of the apparatus according to the present invention. As shown in fig. 8, the photographing apparatus may include:
a computer-readable storage medium having stored thereon a computer program;
in one example, the processor is configured to read the computer program and adjust the exposure time of the camera by executing the computer program to implement the method according to any one of embodiments 1 to 4.
In another example, a processor for reading the computer program and obtaining adjustment parameters related to an exposure time T0_DST to be exposed by executing the computer program, and adjusting a second segment of exposure time T1_DST of the exposure time T0_DST according to the obtained adjustment parameters and a specified formula;
wherein the adjustment parameters at least comprise: a target exposure gain Gain_DST, a target inflection point value Knee_DST, a target pixel characteristic value MAX_DST, and, for the previous frame image, a pixel characteristic maximum value MAX_SRC, an inflection point value Knee_SRC, an exposure time T0_SRC, and a ratio f_SRC of the second segment of exposure time T1_SRC in the exposure time T0_SRC;
the specified formula is:
T1_DST = ((MAX_DST - Knee_DST) × f_SRC × Gain_SRC × T0_SRC) / ((MAX_SRC - Knee_SRC) × Gain_DST)
or:
f_DST = ((MAX_DST - Knee_DST) × f_SRC × Gain_SRC × T0_SRC) / ((MAX_SRC - Knee_SRC) × Gain_DST × T0_DST)
where f_DST is the ratio of T1_DST in the exposure time T0_DST.
Thus, the description of the structure of the apparatus shown in fig. 8 is completed.
In embodiments of the present application, a machine-readable storage medium may be any electronic, magnetic, optical, or other physical storage device that can contain or store information such as executable instructions, data, and the like. For example, the machine-readable storage medium may be: a RAM (Random Access Memory), a volatile memory, a non-volatile memory, a flash memory, a storage drive (e.g., a hard drive), a solid state drive, any type of storage disk (e.g., an optical disk or a DVD), a similar storage medium, or a combination thereof.
The apparatuses, modules or units illustrated in the above embodiments may be implemented by a computer chip or an entity, or implemented by a product with certain functions. A typical implementation device is a computer, which may take the form of a personal computer, laptop computer, cellular telephone, camera phone, smart phone, personal digital assistant, media player, navigation device, email messaging device, game console, tablet computer, wearable device, or a combination of any of these devices.
For convenience of description, the above devices are described as being divided into various units by function, and are described separately. Of course, the functionality of the units may be implemented in one or more software and/or hardware when implementing the present application.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Furthermore, these computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above description is only an example of the present application and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (19)

1. An exposure adjustment method applied to a photographing apparatus, comprising:
acquiring a target exposure gain;
acquiring a target inflection value by using the target exposure gain;
determining a target pixel characteristic value according to the pixel characteristic information of at least one frame of shot image;
and adjusting the exposure time of the shooting device according to the target exposure gain, the target inflection point value and the target pixel characteristic value.
2. The method of claim 1, wherein the obtaining a target knee value using a target exposure gain comprises:
searching an inflection point value corresponding to the target exposure gain in the established mapping relation between the exposure gain and the inflection point value;
and determining the found inflection point value as the target inflection point value.
3. The method according to claim 1, wherein the determining a target pixel characteristic value according to the pixel characteristic information of the at least one captured frame of image comprises:
acquiring pixel characteristic information of at least one shot frame of image;
determining a state code grade for representing a target pixel characteristic value according to the pixel characteristic information;
and determining the characteristic value of the target pixel according to the state code grade.
4. The method of claim 3, wherein determining the target pixel eigenvalue at a state code level comprises:
searching a pixel characteristic value corresponding to the state code grade in the established mapping relation between the state code grade and the pixel characteristic value;
and determining the searched pixel characteristic value as the target pixel characteristic value.
5. The method of claim 3 or 4, wherein the target pixel characteristic value is a maximum gray value.
6. The method of claim 3, wherein the pixel characteristic information comprises at least: a pixel histogram of gray levels;
the determining the state code level for characterizing the target pixel feature value according to the pixel feature information comprises:
determining a pixel gray maximum value in the pixel histogram of the gray, and determining a target gray maximum value segment where the pixel gray maximum value is located from all divided gray maximum value segments;
determining a target gray segment among the divided gray segments;
counting the number of pixels with gray values positioned in the target gray segment from at least one shot frame image;
determining the state code level according to the number of pixels located in the target gray scale segment according to the target gray scale maximum value segment.
7. The method of claim 1, wherein the adjusting the exposure time of the camera according to the target exposure gain, the target inflection point value and the target pixel feature value comprises:
calculating the exposure time ratio to be exposed according to the target exposure gain, the target inflection point value and the target pixel characteristic value;
and adjusting the exposure time according to the exposure time ratio.
8. The method of claim 7,
the exposure time comprises a first exposure time and a second exposure time;
the exposure time ratio refers to the ratio of the second period of exposure time to the exposure time.
9. The method of claim 8, wherein adjusting the exposure time according to the exposure time ratio comprises:
and adjusting the second period of exposure time in the exposure time according to the exposure time ratio.
10. The method of claim 8 or 9, wherein the calculating the exposure time ratio to be exposed according to the target exposure gain, the target inflection point value and the target pixel characteristic value comprises:
determining a second exposure time in the exposure time according to the target exposure gain, the target pixel characteristic value and the target inflection point value;
and calculating the exposure time ratio according to the exposure time and the second period of exposure time.
11. The method of claim 1, wherein adjusting the exposure time to be exposed according to the target exposure gain, the target inflection point value, and the target pixel feature value comprises:
calculating a second period of exposure time according to the target exposure gain, the target inflection point value and the target pixel feature value;
and determining the calculated second period of exposure time as the second period of exposure time in the exposure time.
12. The method of claim 11, wherein the calculating the second period of exposure time according to the target exposure gain, the target inflection point value, and the target pixel feature value comprises:
calculating the real brightness REAL_DST corresponding to the exposure according to the target exposure gain;
and calculating the second period of exposure time by using the REAL_DST, the target pixel characteristic value and the target inflection point value.
13. The method of claim 12, wherein the calculating the real brightness REAL_DST corresponding to the exposure according to the target exposure gain comprises:
determining the corresponding post-gain real brightness REAL_SRC0 according to the exposure information corresponding to the captured at least one frame of image;
and calculating the real brightness REAL_DST corresponding to the exposure by using the REAL_SRC0 and the target exposure gain.
14. The method of claim 13, wherein the exposure information at least comprises: a pixel characteristic maximum value MAX_SRC, an inflection point value Knee_SRC, an exposure time T0_SRC, and a second period of exposure time T1_SRC in the exposure time T0_SRC, of the previous frame image;
the determining the post-gain real brightness REAL_SRC0 according to the exposure information corresponding to the captured at least one frame of image comprises:
calculating a difference D1 between the MAX_SRC and the Knee_SRC;
and calculating the REAL_SRC0 according to the D1, the T1_SRC, and the T0_SRC.
15. The method of claim 13, wherein the calculating the real brightness REAL_DST corresponding to the exposure by using REAL_SRC0 and the target exposure gain comprises:
calculating the gain-normalized real brightness REAL_SRC1 corresponding to the exposure information by using the REAL_SRC0 and the exposure gain Gain_SRC corresponding to the captured at least one frame of image;
and calculating the REAL_DST according to the REAL_SRC1 and the target exposure gain.
16. The method of claim 15, wherein the calculating the REAL_DST according to the REAL_SRC1 and the target exposure gain comprises:
calculating a time-associated target real brightness REAL_DST0 according to the REAL_SRC1, the exposure time T0_SRC and the exposure time T0_DST to be exposed;
and calculating the REAL_DST according to the REAL_DST0 and the target exposure gain.
17. The method of claim 12, wherein the calculating the second period of exposure time using the REAL_DST, the target pixel feature value, and the target inflection point value comprises:
calculating a difference value D2 between the target pixel characteristic value and the target inflection point value;
and calculating the second period of exposure time according to the D2, the exposure time and the REAL_DST.
18. A photographing apparatus, characterized by comprising:
a computer-readable storage medium having stored thereon a computer program;
a processor for reading the computer program and adjusting the exposure time of the photographing apparatus by implementing the method of any one of claims 1 to 17 by executing the computer program.
19. A photographing apparatus, characterized by comprising:
a computer-readable storage medium having stored thereon a computer program;
a processor, configured to read the computer program, and obtain adjustment parameters related to an exposure time T0_DST to be exposed by executing the computer program, and adjust a second segment of exposure time T1_DST in the exposure time T0_DST according to the obtained adjustment parameters and a specified formula;
wherein the adjustment parameters at least comprise: a target exposure gain Gain_DST, a target inflection point value Knee_DST, a target pixel characteristic value MAX_DST, and, for the previous frame image, a pixel characteristic maximum value MAX_SRC, an inflection point value Knee_SRC, an exposure time T0_SRC, and a ratio f_SRC of the second segment of exposure time T1_SRC in the exposure time T0_SRC;
the specified formula is:
T1_DST = ((MAX_DST - Knee_DST) × f_SRC × Gain_SRC × T0_SRC) / ((MAX_SRC - Knee_SRC) × Gain_DST)
or:
f_DST = ((MAX_DST - Knee_DST) × f_SRC × Gain_SRC × T0_SRC) / ((MAX_SRC - Knee_SRC) × Gain_DST × T0_DST)
where f_DST is the ratio of T1_DST in the exposure time T0_DST.
CN201880039804.9A 2018-08-30 2018-08-30 Exposure adjusting method and device Expired - Fee Related CN110786000B (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/103210 WO2020042074A1 (en) 2018-08-30 2018-08-30 Exposure adjustment method and apparatus

Publications (2)

Publication Number Publication Date
CN110786000A CN110786000A (en) 2020-02-11
CN110786000B true CN110786000B (en) 2021-08-03

Family

ID=69383064

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201880039804.9A Expired - Fee Related CN110786000B (en) 2018-08-30 2018-08-30 Exposure adjusting method and device

Country Status (2)

Country Link
CN (1) CN110786000B (en)
WO (1) WO2020042074A1 (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111540042B (en) * 2020-04-28 2023-08-11 上海盛晃光学技术有限公司 Method, device and related equipment for three-dimensional reconstruction
CN113156408A (en) * 2021-03-19 2021-07-23 奥比中光科技集团股份有限公司 Contrast calibration method, device and equipment
CN113657427B (en) * 2021-06-29 2024-01-23 东风汽车集团股份有限公司 In-vehicle multi-source image fusion recognition method and device
CN113660413B (en) * 2021-07-26 2022-05-10 中国科学院西安光学精密机械研究所 Automatic exposure method for large-caliber large-view-field camera applied to aircraft
CN115514900B (en) * 2022-08-26 2023-11-07 中国科学院合肥物质科学研究院 Imaging spectrometer rapid automatic exposure imaging method and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101126661A (en) * 2007-09-20 2008-02-20 北京中星微电子有限公司 Method and apparatus for determining ambient light
CN101394482A (en) * 2008-09-09 2009-03-25 任晓慧 CMOS image sensor and automatic exposure control method
CN101835002A (en) * 2009-03-13 2010-09-15 株式会社东芝 Image signal processing apparatus and image-signal processing method
CN104954696A (en) * 2014-03-27 2015-09-30 南京理工大学 Automatic EMCCD gain adjusting method
CN108121942A (en) * 2016-11-30 2018-06-05 南昌欧菲生物识别技术有限公司 A kind of method and device of fingerprint recognition

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH03274973A (en) * 1990-03-26 1991-12-05 Sony Corp Automatic knee control circuit
CN101247479B (en) * 2008-03-26 2010-07-07 北京中星微电子有限公司 Automatic exposure method based on objective area in image
CN101262567B (en) * 2008-04-07 2010-12-08 北京中星微电子有限公司 Automatic exposure method and device
KR101492564B1 (en) * 2008-08-06 2015-03-06 삼성디스플레이 주식회사 Liquid crystal display apparatus and common voltage control method thereof
CN101873434A (en) * 2009-04-21 2010-10-27 安国国际科技股份有限公司 Image pick-up device and related method
CN104202537B (en) * 2014-09-05 2019-03-01 天彩电子(深圳)有限公司 A kind of light-dimming method of thermal camera
CN104580925A (en) * 2014-12-31 2015-04-29 安科智慧城市技术(中国)有限公司 Image brightness controlling method, device and camera

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101126661A (en) * 2007-09-20 2008-02-20 北京中星微电子有限公司 Method and apparatus for determining ambient light
CN101394482A (en) * 2008-09-09 2009-03-25 任晓慧 CMOS image sensor and automatic exposure control method
CN101835002A (en) * 2009-03-13 2010-09-15 株式会社东芝 Image signal processing apparatus and image-signal processing method
CN104954696A (en) * 2014-03-27 2015-09-30 南京理工大学 Automatic EMCCD gain adjusting method
CN108121942A (en) * 2016-11-30 2018-06-05 南昌欧菲生物识别技术有限公司 A kind of method and device of fingerprint recognition

Also Published As

Publication number Publication date
WO2020042074A1 (en) 2020-03-05
CN110786000A (en) 2020-02-11

Similar Documents

Publication Publication Date Title
CN110786000B (en) Exposure adjusting method and device
CN108335279B (en) Image fusion and HDR imaging
US8989484B2 (en) Apparatus and method for generating high dynamic range image from which ghost blur is removed using multi-exposure fusion
CN104954771B (en) Carry out the image processing equipment and image processing method of color range correction
CN108234858B (en) Image blurring processing method and device, storage medium and electronic equipment
US9449376B2 (en) Image processing apparatus and image processing method for performing tone correction of an output image
US10298853B2 (en) Image processing apparatus, method of controlling image processing apparatus, and imaging apparatus
US10419684B2 (en) Apparatus and method for adjusting camera exposure
US9307162B2 (en) Local enhancement apparatus and method to generate high dynamic range images by blending brightness-preserved and brightness-adjusted blocks
RU2496250C1 (en) Image processing apparatus and method
US20140218559A1 (en) Image pickup apparatus, image processing apparatus, control method for image pickup apparatus, and image processing method
CN107993189B (en) Image tone dynamic adjustment method and device based on local blocking
CN109413335B (en) Method and device for synthesizing HDR image by double exposure
CN110139020B (en) Image processing method and device
CN111601048B (en) Image processing method and device
CN110796041A (en) Subject recognition method and device, electronic equipment and computer-readable storage medium
EP3179716B1 (en) Image processing method, computer storage medium, device, and terminal
US10863103B2 (en) Setting apparatus, setting method, and storage medium
CN112991163B (en) Panoramic image acquisition method, device and equipment
CN115082350A (en) Stroboscopic image processing method and device, electronic device and readable storage medium
CN112653845A (en) Exposure control method, exposure control device, electronic equipment and readable storage medium
WO2020107289A1 (en) Photographing method and apparatus, and unmanned aerial vehicle
JP4879363B1 (en) Image processing system
US10235742B2 (en) Image processing apparatus, image capturing apparatus, image processing method, and non-transitory computer-readable storage medium for adjustment of intensity of edge signal
JP2013192057A (en) Imaging apparatus, control method of the same, and control program

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20210803