CN113572973A - Exposure control method, device, equipment and computer storage medium

Exposure control method, device, equipment and computer storage medium

Info

Publication number
CN113572973A
CN113572973A (application number CN202111141160.4A)
Authority
CN
China
Prior art keywords
integration time
image
depth
value
target
Prior art date
Legal status
Granted
Application number
CN202111141160.4A
Other languages
Chinese (zh)
Other versions
CN113572973B (en)
Inventor
田照银
吴昊
刘德珩
明幼林
陈海飞
Current Assignee
Wuhan Silicon Integrated Co Ltd
Original Assignee
Wuhan Silicon Integrated Co Ltd
Priority date
Filing date
Publication date
Application filed by Wuhan Silicon Integrated Co Ltd filed Critical Wuhan Silicon Integrated Co Ltd
Priority to CN202111141160.4A priority Critical patent/CN113572973B/en
Publication of CN113572973A publication Critical patent/CN113572973A/en
Application granted granted Critical
Publication of CN113572973B publication Critical patent/CN113572973B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70: Circuitry for compensating brightness variation in the scene
    • H04N23/73: Circuitry for compensating brightness variation in the scene by influencing the exposure time

Abstract

The embodiment of the application discloses an exposure control method, an exposure control device, exposure control equipment and a computer storage medium, wherein the method comprises the following steps: acquiring a phase mean image, an amplitude image and a depth image corresponding to a current image frame; determining a first target block in the phase mean image and a second target block in the amplitude image; counting the number of pixels outside the pixel threshold range in the first target block to obtain a counted pixel number; under the condition that the counted pixel number does not exceed a first threshold, acquiring a depth block corresponding to the second target block from the depth image, and calculating a depth mean value of the depth block; and determining a target integration time corresponding to the depth mean value according to the correspondence between integration time and depth, and determining the exposure parameter of the next image frame according to the target integration time. In this way, the integration time can be adjusted to the optimum, which improves the accuracy of the image and, at the same time, its imaging quality.

Description

Exposure control method, device, equipment and computer storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an exposure control method, apparatus, device, and computer storage medium.
Background
With the advancement of science and technology, Time-of-Flight (TOF) cameras have developed rapidly and, owing to their unique capabilities, are widely used in many fields. A TOF camera actively emits continuous light pulses toward a target object, receives the pulses reflected back from the target object with a sensor, and derives the distance to the target object from the round-trip flight time between emission and reception. That is, a TOF camera can acquire not only a two-dimensional grayscale image of the target object but also a depth image of it.
For a TOF camera, the accuracy of the depth image depends not only on the imaging principle but also on the exposure time. However, conventional exposure control methods may leave the target object overexposed or underexposed, so that the accuracy of the depth image cannot reach its optimum.
Disclosure of Invention
The application provides an exposure control method, an exposure control device, exposure control equipment and a computer storage medium that can adjust the integration time to the optimum, thereby improving image accuracy and, at the same time, the imaging quality of the image.
The technical scheme of the application is realized as follows:
in a first aspect, an embodiment of the present application provides an exposure control method, including:
acquiring a phase mean image, an amplitude image and a depth image corresponding to a current image frame;
determining a first target block in the phase mean image and a second target block in the amplitude image;
counting the number of pixels outside the pixel threshold range of the first target block to obtain the number of counted pixels;
under the condition that the number of the statistical pixels does not exceed a first threshold value, acquiring a depth block corresponding to the second target block from the depth image, and calculating a depth mean value of the depth block;
and determining target integration time corresponding to the depth mean value according to the corresponding relation between the integration time and the depth, and determining exposure parameters of the next image frame according to the target integration time.
In a second aspect, an embodiment of the present application provides an exposure control apparatus, which includes an acquisition unit, a determination unit, a statistics unit, and a calculation unit; the acquiring unit is configured to acquire a phase mean image, an amplitude image and a depth image corresponding to a current image frame;
the determining unit is configured to determine a first target block in the phase mean image and a second target block in the amplitude image;
the counting unit is configured to count the number of pixels outside a pixel threshold range of the first target block to obtain a counted pixel number;
the calculating unit is configured to acquire a depth block corresponding to the second target block from the depth image and calculate a depth mean value of the depth block when the number of the statistical pixels does not exceed a first threshold;
the determining unit is further configured to determine a target integration time corresponding to the depth mean value according to a corresponding relation between the integration time and the depth, and determine an exposure parameter of a next image frame according to the target integration time.
In a third aspect, an embodiment of the present application provides an electronic device, which includes a memory and a processor; wherein the memory is to store a computer program operable on the processor;
the processor, when executing the computer program, is adapted to perform the method according to the first aspect.
In a fourth aspect, embodiments of the present application provide a computer storage medium storing a computer program, which when executed by at least one processor implements the method according to the first aspect.
The exposure control method, the device, the equipment and the computer storage medium provided by the embodiment of the application acquire a phase mean image, an amplitude image and a depth image corresponding to a current image frame; determining a first target block in the phase mean image and a second target block in the amplitude image; counting the number of pixels outside the pixel threshold range of the first target block to obtain the number of counted pixels; under the condition that the number of the statistical pixels does not exceed a first threshold value, acquiring a depth block corresponding to the second target block from the depth image, and calculating a depth mean value of the depth block; and determining target integration time corresponding to the depth mean value according to the corresponding relation between the integration time and the depth, and determining exposure parameters of the next image frame according to the target integration time. Therefore, under the condition that the number of the statistical pixels of the target area does not exceed the first threshold, the target area is not overexposed, and at the moment, the integration time can be adjusted to be optimal according to the corresponding relation between the integration time and the depth, so that the image achieves the optimal precision, and meanwhile, the imaging quality of the image is improved.
Drawings
Fig. 1 is a schematic flowchart of an exposure control method according to an embodiment of the present disclosure;
fig. 2 is a schematic detailed flowchart of an exposure control method according to an embodiment of the present disclosure;
FIG. 3A is a graph illustrating a comparison of energy curves at different integration times at 200mm and 300mm according to an embodiment of the present disclosure;
FIG. 3B is a graph illustrating a comparison of energy curves at 700mm and 800mm for different integration times, according to an embodiment of the present disclosure;
FIG. 3C is a graph illustrating a comparison of energy curves at 1400mm and 1500mm for different integration times, according to an embodiment of the present disclosure;
fig. 4 is a detailed flowchart of another exposure control method according to an embodiment of the present disclosure;
FIG. 5A is a schematic diagram of a phase mean image of a phase of 200mm under 100us according to an embodiment of the present disclosure;
FIG. 5B is a schematic diagram of a phase mean image of a phase of 200mm under 500us according to an embodiment of the present disclosure;
FIG. 5C is a schematic diagram of a phase mean image of a phase of 800mm under 1000us according to an embodiment of the present disclosure;
FIG. 5D is a schematic diagram of a phase mean image under the conditions of 800mm and 2000us according to an embodiment of the present disclosure;
fig. 6 is a schematic structural diagram of an exposure control apparatus according to an embodiment of the present disclosure;
fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure;
fig. 8 is a schematic structural diagram of another electronic device according to an embodiment of the present application.
Detailed Description
So that the manner in which the features and elements of the present embodiments can be understood in detail, a more particular description of the embodiments, briefly summarized above, may be had by reference to embodiments, some of which are illustrated in the appended drawings.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing embodiments of the present application only and is not intended to be limiting of the application.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is understood that "some embodiments" may be the same subset or different subsets of all possible embodiments, and may be combined with each other without conflict.
It should also be noted that the terms "first/second/third" in the embodiments of the present application are only used for distinguishing similar objects and do not represent a specific ordering of the objects; it should be understood that "first/second/third" may be interchanged in a specific order or sequence where permissible, so that the embodiments of the present application described herein can be implemented in an order other than that shown or described herein.
In an embodiment of the present application, referring to fig. 1, a flowchart of an exposure control method provided in an embodiment of the present application is shown. As shown in fig. 1, the method may include:
s101: and acquiring a phase mean image, an amplitude image and a depth image corresponding to the current image frame.
It should be noted that the exposure control method according to the embodiment of the present application is applied to an exposure control apparatus or an electronic device incorporating the apparatus. The electronic device may be, for example, a smart phone, a tablet computer, a notebook computer, a palmtop computer, a Personal Digital Assistant (PDA), a Portable Media Player (PMP), a navigation device, a wearable device, a TOF camera, a video camera, and the like, without any limitation here.
It should be further noted that the exposure parameters in the embodiments of the present application include one or more of exposure time, exposure gain, and aperture value. The exposure time directly affects the accuracy of the depth image: an excessively large integration time at a short distance results in overexposure, while an excessively small integration time at a long distance results in underexposure, and the distance measurement is inaccurate in both cases. In the embodiments of the present application, the exposure time may also be referred to as the "integration time"; that is, the embodiments mainly address overexposure of the target region caused by a too-large integration time and underexposure of the target region caused by a too-small integration time.
It should be further noted that, taking a TOF camera as an example, the current image frame is the image captured by the TOF camera at the current time. When the depth image is determined, the corresponding amplitude image (also called the "intensity image") can be determined as well. The depth image may comprise a number of pixels, each representing a depth (or "distance"); correspondingly, the amplitude image may also comprise a number of pixels, each representing an amplitude or intensity. In addition, the phase mean image may also include a number of pixels, each obtained by averaging over four phases; here, the four phases may include: 0 degrees, 90 degrees, 180 degrees, and 270 degrees.
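As a minimal sketch of the four-phase averaging described above (the patent gives no code; the function name and 2-D array layout are assumptions):

```python
import numpy as np

def phase_mean_image(phase0, phase90, phase180, phase270):
    """Average the four raw phase frames (0, 90, 180, 270 degrees)
    pixel-wise to form the phase mean image (phaseMean)."""
    stack = np.stack([phase0, phase90, phase180, phase270]).astype(np.float64)
    return stack.mean(axis=0)
```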
S102: a first target block in the phase mean image and a second target block in the amplitude image are determined.
It should be noted that after the phase mean image and the amplitude image are obtained, a target area (i.e., a first target block) in the phase mean image and a target area (i.e., a second target block) in the amplitude image need to be further determined.
In some embodiments, the determining the first target block in the phase mean image may include:
carrying out block division on the phase mean image to obtain a plurality of phase mean blocks;
calculating the standard deviation corresponding to each of the plurality of phase mean blocks;
and determining the maximum standard deviation value from the standard deviation values, and taking the phase mean value block corresponding to the maximum standard deviation value as a first target block.
It should be noted that the phase mean image can be represented by phaseMean. The block division of the phase mean image may specifically include: dividing phaseMean into small blocks of 40 × 40 pixels, where each small block is a phase mean block; then calculating the standard deviation corresponding to each phase mean block, selecting the maximum standard deviation among them, and determining the phase mean block corresponding to the maximum standard deviation as the first target block.
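The selection step above can be sketched as follows, assuming phaseMean is a 2-D array whose dimensions are multiples of the 40-pixel block size (the function name and layout are illustrative, not from the patent):

```python
import numpy as np

def first_target_block(phase_mean, block=40):
    """Tile phaseMean into block x block patches and return the patch
    with the largest standard deviation (the first target block)."""
    h, w = phase_mean.shape
    best_std, best_patch = -1.0, None
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            patch = phase_mean[y:y + block, x:x + block]
            s = patch.std()
            if s > best_std:
                best_std, best_patch = s, patch
    return best_patch
```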
In some embodiments, the determining the second target block in the amplitude image may include:
carrying out block division on the amplitude image to obtain a plurality of amplitude blocks;
calculating the mean value corresponding to each of the plurality of amplitude blocks;
and determining the maximum mean value from the mean values, and taking the amplitude block corresponding to the maximum mean value as a second target block.
Note that the amplitude image may be represented by ampl. The block division of the amplitude image may specifically include: dividing ampl into small blocks of 40 × 40 pixels, where each small block is an amplitude block; then calculating the mean value corresponding to each amplitude block, selecting the maximum mean value among them, and determining the amplitude block corresponding to the maximum mean value as the second target block.
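A corresponding sketch for the amplitude image, again under assumed names and a 2-D array layout; the top-left coordinates are returned here because the co-located depth block is needed in a later step:

```python
import numpy as np

def second_target_block(ampl, block=40):
    """Return the largest block mean of the amplitude image and the
    top-left coordinates of that block (the second target block)."""
    h, w = ampl.shape
    best_mean, best_xy = -1.0, (0, 0)
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            m = ampl[y:y + block, x:x + block].mean()
            if m > best_mean:
                best_mean, best_xy = m, (y, x)
    return best_mean, best_xy
```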
S103: and counting the number of pixels outside the pixel threshold range of the first target block to obtain the number of counted pixels.
It should be noted that the basis of the embodiment of the present application is that the phase mean of an overexposed pixel deviates from 2048 much more than that of a normally exposed pixel. Here, the pixel threshold may be set to 340, i.e., the range may be set to 2048 ± 340; therefore, the pixel threshold range can be [2048 - 340, 2048 + 340], i.e., [1708, 2388].
In this way, the number of pixels outside the pixel threshold range is counted for the first target block, and the counted pixel number can be obtained, so that the comparison between the counted pixel number and the first threshold value can be performed subsequently.
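The counting step can be sketched as below, using the 2048 ± 340 values stated above (the function name is an assumption):

```python
import numpy as np

def count_outside_threshold(block, center=2048, margin=340):
    """Count pixels of the first target block falling outside the
    pixel threshold range [center - margin, center + margin]."""
    outside = (block < center - margin) | (block > center + margin)
    return int(np.count_nonzero(outside))
```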
S104: and under the condition that the number of the counted pixels does not exceed the first threshold, acquiring a depth block corresponding to the second target block from the depth image, and calculating a depth mean value of the depth block.
S105: and determining target integration time corresponding to the depth mean value according to the corresponding relation between the integration time and the depth, and determining the exposure parameter of the next image frame according to the target integration time.
The first threshold is a criterion for judging whether the first target block is overexposed. In the embodiment of the present application, the value of the first threshold is set according to the actual situation. Optionally, the first threshold may be set to 10, but is not limited thereto.
It should be further noted that, since the first target block is the block with the largest standard deviation in the phase mean image, if its counted pixel number does not exceed the first threshold, then the counted pixel number of every block in the phase mean image does not exceed the first threshold, indicating that no overexposed area exists in the phase mean image; conversely, if the counted pixel number of the first target block exceeds the first threshold, an overexposed region exists in the phase mean image.
It should be further noted that, since the second target block is the block with the largest mean value in the amplitude image, the depth block corresponding to it may be selected, because on the premise of no overexposure the block with the largest mean value is usually the block closest to the camera in the scene, i.e., the region of interest. That is to say, when the counted pixel number does not exceed the first threshold, no overexposed area exists in the phase mean image, so the depth block corresponding to the second target block may be obtained from the depth image and its depth mean calculated; then, according to the correspondence between integration time and depth, the target integration time corresponding to the depth mean is determined to obtain the exposure parameter of the next image frame.
In this embodiment of the present application, the depth image may be represented by dist, the current integration time corresponding to the current image frame by currInt, the maximum amplitude mean (i.e., that of the second target block) by currAmpl, and the mean of the depth block corresponding to currAmpl by currDist.
In addition, in the embodiment of the application, the integration time has a correspondence with the depth, and the depth has a correspondence with the amplitude; therefore, the integration time corresponds to both the depth and the amplitude, and the correspondence between integration time and depth can be represented by optIntegTimeAmplList. Thus, in addition to finding the corresponding target integration time (denoted by optInt), the target amplitude value (denoted by optAmpl) can be found in optIntegTimeAmplList according to currDist. After the target integration time is obtained, the exposure parameters of the next image frame can be determined.
It should be further noted that, when an overexposed region exists in the phase mean image, the preset minimum integration time may be directly used as the target integration time. Thus, in some embodiments, the method may further comprise:
and under the condition that the number of the counted pixels exceeds a first threshold value, determining preset minimum integration time as target integration time, and determining exposure parameters of the next image frame according to the target integration time.
In the embodiment of the present application, the preset minimum integration time may be 100 microseconds (us), but is not limited herein.
In the embodiment of the present application, if the statistical pixel number of the first target block exceeds the first threshold, it may indicate that an overexposed region exists in the phase mean image, at this time, the target integration time may be set to be equal to 100us, and then the exposure parameter of the next image frame may be determined according to the target integration time.
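The decision logic of steps S104 and S105 can be sketched as follows; the list-of-pairs layout of optIntegTimeAmplList and the nearest-depth lookup are assumptions made for illustration, not taken from the patent:

```python
MIN_INT_US = 100  # preset minimum integration time given in the text

def next_integration_time(counted_pixels, first_threshold, depth_mean,
                          opt_integ_time_list):
    """Fall back to the minimum integration time when the first target
    block is overexposed; otherwise look up the target integration time
    for the measured depth mean. opt_integ_time_list is assumed to be a
    list of (depth_mm, integration_time_us) pairs."""
    if counted_pixels > first_threshold:
        return MIN_INT_US
    # pick the calibrated entry whose depth is closest to the depth mean
    return min(opt_integ_time_list, key=lambda e: abs(e[0] - depth_mean))[1]
```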
In addition, for the corresponding relationship between the integration time and the depth, the embodiment of the present application may first calculate the optimal integration time at different distances, so as to construct the corresponding relationship between the integration time and the depth. Thus, in some embodiments, the method may further comprise:
obtaining an amplitude sample image and a depth sample image which correspond to a plurality of integration times under a target depth respectively;
calculating energy values corresponding to different integration times according to the amplitude sample image and the depth sample image;
performing gradient calculation on the current integration time according to the energy value to obtain a gradient value corresponding to the current integration time;
if the gradient value corresponding to the current integration time meets the preset condition, determining the optimal integration time corresponding to the target depth according to the calculation between the current integration time and a first preset value;
and after the optimal integration time corresponding to each of at least one depth is obtained, establishing the correspondence between integration time and depth.
In the embodiment of the present application, the energy value is not energy in the actual physical sense; it specifically refers to the product of the amplitude value in the amplitude sample image and the square of the depth value in the depth sample image (amplitude × depth²).
It should be further noted that the basis of the embodiment of the present application is obtained according to statistical data, and the process may be regarded as a calibration process, and can be applied to the same batch of chips only by calibrating once.
It should be further noted that, taking a certain distance (i.e., a target depth) as an example, amplitude sample images and depth sample images at several different integration times may be obtained, and the energy value of each pixel calculated, where the energy value of a pixel equals its amplitude value multiplied by the square of its depth value (amplitude × depth × depth); the energy sum of the whole image (represented by sumEnergy) can then be obtained. The energy values corresponding to the different integration times can thus be calculated and stored in sumEnergyAll, where the energy value corresponding to the first integration time is stored in the first position, represented by sumEnergyAll(1); the energy value corresponding to the ith integration time is stored in the ith position, represented by sumEnergyAll(i); and the energy value corresponding to the (i+1)th integration time is stored in the (i+1)th position, represented by sumEnergyAll(i+1), with i being an integer greater than or equal to 2.
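A minimal sketch of the per-image energy sum (sumEnergy) described above; the function name is illustrative:

```python
import numpy as np

def sum_energy(ampl, dist):
    """Per-pixel 'energy' = amplitude * depth^2, summed over the whole
    image. Not energy in the physical sense (see the text above)."""
    a = ampl.astype(np.float64)
    d = dist.astype(np.float64)
    return float(np.sum(a * d ** 2))
```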
In addition, it should be noted that, the correspondence between the integration time and the depth may be obtained in other ways besides the above way, for example, the correspondence between the integration time and the depth may be obtained through amplitude curves corresponding to different integration times at different distances, and the present disclosure is not limited to the way of the embodiment of the present disclosure.
Further, in some embodiments, after obtaining energy values corresponding to different integration times, performing gradient calculation on the current integration time according to the energy values to obtain a gradient value corresponding to the current integration time may include:
under the condition that the current integration time is the ith integration time, determining an energy value corresponding to the ith integration time and an energy value corresponding to the (i +1) th integration time;
subtracting the energy value corresponding to the ith integration time from the energy value corresponding to the (i+1)th integration time to obtain the gradient value corresponding to the ith integration time, and taking the gradient value corresponding to the ith integration time as the gradient value corresponding to the current integration time; wherein i is an integer greater than or equal to 2.
That is, taking the ith integration time as the current integration time as an example, the gradient value corresponding to the ith integration time can be represented by energygradient (i). Here, the gradient value corresponding to the i-th integration time is equal to sumEnergyAll (i +1) -sumEnergyAll (i), that is, the gradient value corresponding to the current integration time is obtained.
Further, in some embodiments, after obtaining the gradient value corresponding to the ith integration time, the method may further include:
determining an energy value corresponding to the first integration time, and performing subtraction calculation on the energy value corresponding to the ith integration time and the energy value corresponding to the first integration time to obtain a first intermediate value;
performing division calculation on the first intermediate value and (i-1) to obtain a second intermediate value;
comparing the gradient value corresponding to the ith integration time with the second intermediate value multiplied by a preset multiple;
and if the gradient value corresponding to the ith integration time is smaller than the second intermediate value multiplied by the preset multiple, determining that the gradient value corresponding to the current integration time meets the preset condition.
It should be noted that whether the gradient value corresponding to the current integration time meets the preset condition may be determined by whether the gradient value corresponding to the ith integration time is smaller than a preset gradient value. The preset gradient value may be equal to the second intermediate value multiplied by the preset multiple, the second intermediate value being equal to (sumEnergyAll(i) - sumEnergyAll(1))/(i - 1). In the embodiment of the present application, the preset multiple may be set according to actual conditions; optionally, it may be set to 0.88, but is not limited thereto.
It should be further noted that, if the gradient value corresponding to the ith integration time is smaller than the second intermediate value multiplied by the preset multiple, it may be determined that the gradient value corresponding to the current integration time satisfies the preset condition, and the optimal integration time corresponding to the target depth may then be determined from the current integration time and a first preset value.
In the embodiment of the present application, the first preset value may be set according to an actual situation, and optionally, the first preset value may be set to 100us, but is not limited in any way. Thus, for example, 100us, the optimal integration time for the target depth may be equal to (current integration time-100 us).
Further, in some embodiments, after comparing the gradient value corresponding to the ith integration time with the second intermediate value multiplied by the preset multiple, the method may further include:
if the gradient value corresponding to the ith integration time is not smaller than the second intermediate value multiplied by the preset multiple, incrementing i by 1 and returning to the step of computing the gradient value corresponding to the ith integration time from the energy values of the (i+1)th and ith integration times.
It should be noted that, if the gradient value corresponding to the ith integration time is not smaller than the second intermediate value multiplied by the preset multiple, the gradient value corresponding to the current integration time does not satisfy the preset condition; in this case, i is incremented by 1 and the gradient value is recomputed, and this repeats until the gradient value corresponding to the current integration time satisfies the preset condition and the optimal integration time corresponding to the target depth is determined.
It should be noted that the integration time is not infinitely increased, and when the integration time is too long, an overexposure phenomenon is also likely to occur. Therefore, in an embodiment of the present application, the method may further include: and setting the upper limit value of the optimal integration time as a second preset value and the lower limit value of the optimal integration time as a third preset value.
In the embodiment of the present application, the second preset value and the third preset value may be set according to actual situations, and optionally, the second preset value may be set to 2000us, and the third preset value may be set to 100us, but is not limited in any way. Thus, the optimal integration time is usually limited to 100us to 2000 us.
That is, when determining the optimal integration time at the target distance, the current integration time may gradually increase from 50us through 100us, 150us, 200us, 250us, and so on, until the gradient value corresponding to the current integration time meets the preset condition; the first preset value is then subtracted from the current integration time to obtain the optimal integration time, which is finally limited to between 100us and 2000us. In other words, if the obtained optimal integration time is less than 100us, 100us is taken as the optimal integration time corresponding to the target depth; if it is greater than 2000us, 2000us is taken as the optimal integration time corresponding to the target depth.
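The calibration search described above can be sketched as follows (0-based indexing here versus the text's 1-based sumEnergyAll; the 0.88 multiple, 100us step, and the [100us, 2000us] clamp follow the values given in the text, while the fallback when the condition is never met is an assumption):

```python
def optimal_integration_time(sum_energy_all, int_times,
                             multiple=0.88, step_us=100,
                             lo_us=100, hi_us=2000):
    """Search for the optimal integration time at one target depth.
    sum_energy_all[j] is the image energy sum at int_times[j]. Stop when
    the local gradient drops below `multiple` times the average gradient
    so far, subtract step_us, then clamp to [lo_us, hi_us]."""
    for i in range(1, len(sum_energy_all) - 1):   # the text's i >= 2
        grad = sum_energy_all[i + 1] - sum_energy_all[i]
        avg = (sum_energy_all[i] - sum_energy_all[0]) / i
        if grad < multiple * avg:
            return min(max(int_times[i] - step_us, lo_us), hi_us)
    return hi_us  # condition never met: assume the longest allowed time
```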
Thus, according to the above specific steps, after obtaining the optimal integration time corresponding to each of the at least one depth at different distances, the corresponding relationship between the integration time and the depth can be established; and then, under the condition that no overexposure area exists in the phase mean value image, according to the corresponding relation between the integration time and the depth, determining the target integration time corresponding to the depth mean value to obtain the exposure parameter of the next image frame.
The embodiment provides an exposure control method, which includes: acquiring a phase mean image, an amplitude image and a depth image corresponding to a current image frame; determining a first target block in the phase mean image and a second target block in the amplitude image; counting the number of pixels outside the pixel threshold range of the first target block to obtain the number of counted pixels; under the condition that the number of the counted pixels does not exceed a first threshold value, acquiring a depth block corresponding to the second target block from the depth image, and calculating a depth mean value of the depth block; and determining a target integration time corresponding to the depth mean value according to the corresponding relation between the integration time and the depth, and determining the exposure parameter of the next image frame according to the target integration time. Thus, when the number of counted pixels in the target area does not exceed the first threshold, the target area is not overexposed, and the integration time can be adjusted to the optimal value according to the corresponding relation between the integration time and the depth, so that the image achieves the optimal precision while the imaging quality of the image is improved.
In another embodiment of the present application, based on the same inventive concept as the foregoing embodiment, the technical solution of the embodiment of the present application mainly solves the problem of overexposure of the target region caused by an excessively long integration time, and determines the optimal integration time corresponding to the target region so as to achieve the optimal accuracy.
For determining the optimal integration time at different distances, refer to fig. 2, which shows a detailed flowchart of an exposure control method provided in an embodiment of the present application. The detailed process may include:
S201: And obtaining an amplitude sample image and a depth sample image corresponding to different integration times at the current distance.
S202: and calculating energy values corresponding to different integration times based on the amplitude sample map and the depth sample map.
S203: the initial value of i is set to 2.
S204: and performing gradient calculation on the ith integration time according to the energy value to obtain a gradient value corresponding to the ith integration time.
S205: and judging whether the gradient value corresponding to the ith integration time meets a preset condition or not.
S206: if the determination result is negative, the operation of i = i +1 is executed, and the process returns to the step S204.
S207: If the judgment result is yes, 100us is subtracted from the ith integration time to obtain the optimal integration time.
S208: After the optimal integration time is limited to the range of 100us to 2000us, the optimal integration time at the current distance is output.
It should be noted that the detailed flow of fig. 2 reflects the basis of the embodiment of the present application as derived from statistical data. This process is a calibration flow, and a single calibration can be applied to an entire batch of chips.
It should also be noted that when overexposure occurs, the energy gradient will gradually decrease, as shown in detail in the energy curve examples of fig. 3A, 3B and 3C. Here, fig. 3A shows a schematic diagram comparing energy curves at different integration times of 200mm and 300mm provided by the embodiment of the present application; wherein, the solid line with the star is the energy value corresponding to different integration time under the distance of 200mm, and the dotted line with the circle is the energy value corresponding to different integration time under the distance of 300 mm. FIG. 3B is a graph showing a comparison of energy curves at 700mm and 800mm for different integration times, provided by an embodiment of the present application; wherein, the solid line with the star is the energy value corresponding to different integration time under the distance of 700mm, and the dotted line with the circle is the energy value corresponding to different integration time under the distance of 800 mm. FIG. 3C is a graph showing a comparison of energy curves at 1400mm and 1500mm for different integration times, provided by an embodiment of the present application; wherein, the solid line with the star is the energy value corresponding to different integration time under 1400mm distance, and the dotted line with the circle is the energy value corresponding to different integration time under 1500mm distance.
It should be further noted that, in the embodiment of the present application, the inputs are the amplitude sample map and the depth sample map corresponding to different integration times at the same distance. Taking a certain distance (i.e. the current depth) as an example, the energy value of each pixel is obtained from the amplitude value and the depth value of that pixel, and the energy of the entire image is then summed (expressed by sumEnergy). In this way, the energy values corresponding to different integration times can be calculated and stored in sumEnergyAll; the energy value corresponding to the first integration time is stored in the first position of sumEnergyAll and is expressed by sumEnergyAll(1); the energy value corresponding to the ith integration time is stored in the ith position and expressed by sumEnergyAll(i); the energy value corresponding to the (i+1)th integration time is stored in the (i+1)th position and expressed by sumEnergyAll(i+1), where i is an integer greater than or equal to 2. Gradient values of energy with respect to integration time (expressed by energyGradient) are then obtained; specifically, the gradient value corresponding to the ith integration time may be expressed by energyGradient(i), which is equal to sumEnergyAll(i+1) - sumEnergyAll(i). It is then judged whether energyGradient(i) is less than 0.88 × (sumEnergyAll(i) - sumEnergyAll(1))/(i-1). If the judgment result is negative, the gradient value corresponding to the next integration time is calculated and the judgment step is executed again; if the judgment result is affirmative, the optimal integration time is the current integration time (i.e. the ith integration time) minus 100us. In the embodiment of the application, the optimal integration time is limited to 100us to 2000us, and the optimal integration time at the current distance is finally output.
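The calibration loop above can be sketched as follows. This is a non-authoritative illustration: the per-pixel energy is assumed here to be amplitude multiplied by depth (the text only says the energy value is obtained from both), and the function names are invented for the example; the 0.88 multiple, the 100us first preset value, and the 100us to 2000us limits are taken from the text.

```python
import numpy as np

def total_energy(ampl: np.ndarray, depth: np.ndarray) -> float:
    """sumEnergy: energy summed over the whole image.
    Assumes per-pixel energy = amplitude * depth (an assumption)."""
    return float(np.sum(ampl.astype(np.float64) * depth.astype(np.float64)))

def find_optimal_integration_time(int_times_us, sum_energy_all):
    """int_times_us and sum_energy_all are 0-based Python lists; the text's
    1-based index i maps to list index i - 1. i starts at 2."""
    i = 2
    while i + 1 <= len(sum_energy_all):
        # energyGradient(i) = sumEnergyAll(i+1) - sumEnergyAll(i)
        gradient = sum_energy_all[i] - sum_energy_all[i - 1]
        # preset multiple 0.88 of the second intermediate value
        threshold = 0.88 * (sum_energy_all[i - 1] - sum_energy_all[0]) / (i - 1)
        if gradient < threshold:
            opt = int_times_us[i - 1] - 100  # minus the first preset value
            return max(100, min(opt, 2000))  # limit to 100us..2000us
        i += 1
    # no knee found in the sampled range: fall back to the upper limit
    return 2000
```

With a saturating energy curve such as [10, 20, 30, 40, 45, 46] sampled at [100, 200, 300, 400, 500, 600]us, the gradient first drops below the 0.88 threshold at i = 4, so the routine returns 400 - 100 = 300us.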
For example, refer to table 1, which shows an exemplary table of correspondence between distances, integration times, and amplitudes provided by the embodiments of the present application. As shown in Table 1, the range of distance (i.e., depth) is 200-1500 (in mm) and the range of integration time is 100-2000 (in us), where the optimal integration time and corresponding amplitude information at different distances under two test conditions are shown.
TABLE 1
For the automatic exposure schemes at different distances, refer to fig. 4, which shows a detailed flow chart of another exposure control method provided by the embodiment of the present application. The detailed process may include:
S401: And acquiring the phase mean image, the amplitude image, the depth image, and the corresponding relation between the integration time and the depth.
S402: and respectively dividing the phase mean image and the amplitude image into a plurality of blocks, and respectively calculating the standard difference value of each block in the phase mean image and the mean value of each block in the amplitude image.
S403: and determining a first target block corresponding to the maximum standard deviation value in the phase mean image and a second target block corresponding to the maximum mean value in the amplitude image.
S404: and counting the number of pixels outside the pixel threshold range of the first target block to obtain the number of counted pixels.
S405: and judging whether the number of the statistical pixels exceeds a first threshold value.
S406: and if the judgment result is negative, obtaining the depth block corresponding to the second target block from the depth image, and calculating the depth mean value of the depth block.
S407: and determining the target integration time corresponding to the depth mean value according to the corresponding relation between the integration time and the depth.
S408: if the judgment result is yes, the target integration time is set to be 100 us.
S409: and outputting the exposure parameter of the next image frame according to the target integration time.
It should be noted that, based on the detailed flow of fig. 4, the embodiment of the present application relies on the fact that the phase mean of an overexposed pixel deviates further from 2048 than that of a normally exposed pixel, and the pixel threshold is accordingly set to 340. The depth block corresponding to the block with the largest amplitude mean value is selected from the depth image because, on the premise of no overexposure, the block with the largest amplitude mean value is usually located at the closest distance in the scene, i.e. the region of interest.
It should be further noted that, for a phase mean image, fig. 5A shows a schematic diagram of a phase mean image at 200mm and 100us provided by the embodiment of the present application, fig. 5B shows a schematic diagram of a phase mean image at 200mm and 500us provided by the embodiment of the present application, fig. 5C shows a schematic diagram of a phase mean image at 800mm and 1000us provided by the embodiment of the present application, and fig. 5D shows a schematic diagram of a phase mean image at 800mm and 2000us provided by the embodiment of the present application. Here, the phase mean image may include several pixels, each of which is obtained by performing a mean calculation over four phases, namely 0 degrees, 90 degrees, 180 degrees, and 270 degrees. In addition, it should be noted that fig. 5A is a phase mean image at a distance of 200mm without overexposure, and fig. 5B is a phase mean image at a distance of 200mm with overexposure; fig. 5C is a phase mean image at a distance of 800mm without overexposure, and fig. 5D is a phase mean image at a distance of 800mm with overexposure.
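A minimal sketch of how such a phase mean image could be formed from the four raw phase frames (the function name and array layout are assumptions; the four phases are those listed above):

```python
import numpy as np

def phase_mean_image(ph0: np.ndarray, ph90: np.ndarray,
                     ph180: np.ndarray, ph270: np.ndarray) -> np.ndarray:
    """Each output pixel is the mean of the four phase samples
    (0, 90, 180 and 270 degrees) at that pixel."""
    stack = np.stack([ph0, ph90, ph180, ph270]).astype(np.float64)
    return stack.mean(axis=0)
```

Under this scheme, the phase mean of a normally exposed pixel stays close to 2048, while an overexposed pixel deviates from it, which is what the 340 pixel threshold in the flow of fig. 4 tests for.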
It should be further noted that, in the embodiment of the present application, the inputs include: the current integration time (denoted currInt), the phase mean image (denoted phaseMean), the amplitude image (denoted ampl), the depth image (denoted dist), and the corresponding relation between integration time and depth (denoted optIntegTimeAmplList). phaseMean and ampl are each divided into small blocks of 40×40 pixels; the standard deviation of each block in phaseMean and the mean of each block in ampl are calculated; and the block with the maximum standard deviation in phaseMean and the block with the maximum mean in ampl are selected. The number of pixels of the maximum-standard-deviation block in phaseMean that fall outside the pixel threshold range is then counted to obtain the statistical number of pixels. If the number of statistical pixels exceeds a first threshold (e.g., 10), 100us may be determined as the target integration time. Otherwise, if the number of statistical pixels does not exceed the first threshold, it is determined that no overexposure exists; the depth block corresponding to the maximum-mean block of ampl is then selected, and its depth mean (denoted currDist) is calculated. The corresponding integration time optInt is selected from optIntegTimeAmplList according to the depth mean, and optInt is determined as the target integration time.
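The per-frame decision just described can be sketched as below. This is a hedged illustration, not the patent's implementation: the helper names are invented, and optIntegTimeAmplList is modeled as a list of (depth_mm, integration_time_us) pairs with a nearest-depth lookup, which the text does not spell out; the 40×40 block size, the 340 threshold around 2048, the first threshold of 10, and the 100us fallback are taken from the text.

```python
import numpy as np

BLOCK = 40  # block size in pixels, per the text

def split_blocks(img: np.ndarray):
    """Divide an image into BLOCK x BLOCK tiles (row-major order)."""
    h, w = img.shape
    return [img[r:r + BLOCK, c:c + BLOCK]
            for r in range(0, h, BLOCK) for c in range(0, w, BLOCK)]

def next_integration_time(phase_mean, ampl, dist, opt_int_time_list):
    # first target block: maximum standard deviation in the phase-mean image
    pm_blocks = split_blocks(phase_mean)
    first_target = max(pm_blocks, key=lambda b: float(b.std()))
    # second target block: maximum mean in the amplitude image
    # (keep its index so the matching depth block can be fetched)
    am_blocks = split_blocks(ampl)
    idx = int(np.argmax([float(b.mean()) for b in am_blocks]))
    # count pixels outside the threshold range around 2048
    outside = int(np.count_nonzero(np.abs(first_target - 2048.0) > 340))
    if outside > 10:  # first threshold exceeded: overexposed
        return 100    # fall back to the minimum integration time
    # no overexposure: depth mean of the block matching the amplitude block
    curr_dist = float(split_blocks(dist)[idx].mean())
    # pick the calibrated integration time whose depth is closest
    return min(opt_int_time_list, key=lambda p: abs(p[0] - curr_dist))[1]
```

For an 80×80 frame with no pixels outside the 2048±340 range and a depth mean of 300mm, the routine returns the calibrated time stored for 300mm; if more than 10 pixels in the maximum-deviation block fall outside the range, it returns 100us instead.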
The embodiment provides an exposure control method, and the specific implementation of the foregoing embodiment has been explained in detail above. It can be seen that in the technical solution of the embodiment of the present application, the optimal integration time at different distances is calculated through an energy gradient, and whether overexposure occurs in the target region is determined through the four-phase mean value. When overexposure does not occur, the integration time can be adjusted to the optimal value so that the image reaches the optimal precision; when overexposure occurs, only two frames are needed to adjust the integration time to the optimal value. In this way, the image can reach the optimal precision, and the imaging quality of the image is improved.
In another embodiment of the present application, based on the same inventive concept as the previous embodiment, referring to fig. 6, a schematic structural diagram of an exposure control apparatus 60 according to an embodiment of the present application is shown. As shown in fig. 6, the exposure control device 60 may include: an acquisition unit 601, a determination unit 602, a statistic unit 603, and a calculation unit 604; the acquiring unit 601 is configured to acquire a phase mean image, an amplitude image and a depth image corresponding to a current image frame;
a determining unit 602 configured to determine a first target block in the phase mean image and a second target block in the amplitude image;
a counting unit 603 configured to count the number of pixels outside the pixel threshold range of the first target block to obtain a counted number of pixels;
a calculating unit 604 configured to acquire a depth block corresponding to the second target block from the depth image and calculate a depth mean of the depth block when the number of statistical pixels does not exceed the first threshold;
the determining unit 602 is further configured to determine a target integration time corresponding to the depth mean value according to the correspondence between the integration time and the depth, and determine an exposure parameter of a next image frame according to the target integration time.
In some embodiments, the determining unit 602 is further configured to determine a preset minimum integration time as the target integration time and determine the exposure parameter of the next image frame according to the target integration time if the number of statistical pixels exceeds a first threshold.
In some embodiments, the determining unit 602 is further configured to perform block division on the phase mean image to obtain a plurality of phase mean blocks; calculating the standard difference value corresponding to each of the plurality of phase mean value blocks; and determining the maximum standard deviation value from the standard deviation values, and taking the phase mean value block corresponding to the maximum standard deviation value as the first target block.
In some embodiments, the determining unit 602 is further configured to perform block division on the amplitude image to obtain a plurality of amplitude blocks; calculating the mean value corresponding to each of the plurality of amplitude blocks; and determining a maximum mean value from the mean values, and taking an amplitude block corresponding to the maximum mean value as the second target block.
In some embodiments, each pixel point in the phase mean image is obtained by performing mean calculation based on four phases; wherein the four phases include: 0 degrees, 90 degrees, 180 degrees, and 270 degrees.
In some embodiments, referring to fig. 6, the exposure control device 60 may further include a setup unit 605; the obtaining unit 601 is further configured to obtain an amplitude sample map and a depth sample map corresponding to a plurality of integration times under a target depth;
a calculating unit 604, further configured to calculate energy values corresponding to different integration times according to the amplitude sample map and the depth sample map; performing gradient calculation on the current integration time according to the energy value to obtain a gradient value corresponding to the current integration time;
a determining unit 602, further configured to determine, if the gradient value corresponding to the current integration time meets a preset condition, an optimal integration time corresponding to the target depth according to calculation between the current integration time and a first preset value;
the establishing unit 605 is configured to establish a corresponding relationship between the integration time and the depth after obtaining the optimal integration time corresponding to each of the at least one depth.
In some embodiments, the determining unit 602 is further configured to determine, if the current integration time is the ith integration time, an energy value corresponding to the ith integration time and an energy value corresponding to the (i +1) th integration time;
the calculating unit 604 is further configured to subtract the energy value corresponding to the ith integration time from the energy value corresponding to the (i+1)th integration time to obtain a gradient value corresponding to the ith integration time, and use the gradient value corresponding to the ith integration time as the gradient value corresponding to the current integration time; wherein i is an integer greater than or equal to 2.
In some embodiments, the determining unit 602 is further configured to determine an energy value corresponding to the first integration time;
the calculating unit 604 is further configured to subtract the energy value corresponding to the first integration time from the energy value corresponding to the ith integration time to obtain a first intermediate value; divide the first intermediate value by (i-1) to obtain a second intermediate value; and compare the gradient value corresponding to the ith integration time with the preset multiple of the second intermediate value;
the determining unit 602 is further configured to determine that the gradient value corresponding to the current integration time satisfies the preset condition if the gradient value corresponding to the ith integration time is smaller than the preset multiple of the second intermediate value.
In some embodiments, the determining unit 602 is further configured to, if the gradient value corresponding to the ith integration time is not less than the preset multiple of the second intermediate value, increment i by 1 and return to the step of subtracting the energy value corresponding to the ith integration time from the energy value corresponding to the (i+1)th integration time to obtain the gradient value corresponding to the ith integration time.
It is understood that in this embodiment, a "unit" may be a part of a circuit, a part of a processor, a part of a program or software, etc., and may also be a module, or may also be non-modular. Moreover, each component in the embodiment may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware or a form of a software functional module.
Based on such understanding, the technical solution of the present embodiment, in essence or in the part contributing to the prior art, or in whole or in part, may be embodied in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to execute all or part of the steps of the method of the present embodiment. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
Accordingly, the present embodiments provide a computer storage medium storing a computer program which, when executed by at least one processor, performs the steps of the method of any of the preceding embodiments.
Based on the composition of the exposure control device 60 and the computer storage medium, refer to fig. 7, which shows a schematic structural diagram of an electronic apparatus provided in an embodiment of the present application. As shown in fig. 7, the electronic device 70 may include: a communication interface 701, a memory 702, and a processor 703; the various components are coupled together by a bus system 704. It is understood that the bus system 704 is used to enable communications among the components. The bus system 704 includes a power bus, a control bus, and a status signal bus in addition to a data bus. For clarity of illustration, however, the various buses are labeled in fig. 7 as the bus system 704. The communication interface 701 is used for receiving and sending signals in the process of receiving and sending information with other external network elements;
a memory 702 for storing a computer program capable of running on the processor 703;
a processor 703 for executing, when running the computer program, the following:
acquiring a phase mean image, an amplitude image and a depth image corresponding to a current image frame;
determining a first target block in the phase mean image and a second target block in the amplitude image;
counting the number of pixels outside the pixel threshold range of the first target block to obtain the number of counted pixels;
under the condition that the number of the counted pixels does not exceed the first threshold value, acquiring a depth block corresponding to the second target block from the depth image, and calculating a depth mean value of the depth block;
and determining target integration time corresponding to the depth mean value according to the corresponding relation between the integration time and the depth, and determining the exposure parameter of the next image frame according to the target integration time.
It will be appreciated that the memory 702 in the subject embodiment can be either volatile memory or nonvolatile memory, or can include both volatile and nonvolatile memory. The non-volatile memory may be a Read-Only Memory (ROM), a Programmable ROM (PROM), an Erasable PROM (EPROM), an Electrically Erasable PROM (EEPROM), or a flash memory. Volatile memory can be Random Access Memory (RAM), which acts as an external cache. By way of example, but not limitation, many forms of RAM are available, such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synchronous Link DRAM (SLDRAM), and Direct Rambus RAM (DRRAM). The memory 702 of the systems and methods described herein is intended to comprise, without being limited to, these and any other suitable types of memory.
The processor 703 may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the method may be implemented by hardware integrated logic circuits in the processor 703 or by instructions in the form of software. The processor 703 may be a general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic device, or discrete hardware components. The various methods, steps, and logic blocks disclosed in the embodiments of the present application may be implemented or performed. A general purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of the method disclosed in connection with the embodiments of the present application may be directly implemented by a hardware decoding processor, or implemented by a combination of hardware and software modules in the decoding processor. The software module may be located in a storage medium well known in the art, such as RAM, flash memory, ROM, PROM or EPROM, or registers. The storage medium is located in the memory 702, and the processor 703 reads the information in the memory 702 and performs the steps of the above method in combination with its hardware.
It is to be understood that the embodiments described herein may be implemented in hardware, software, firmware, middleware, microcode, or any combination thereof. For a hardware implementation, the Processing units may be implemented within one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), general purpose processors, controllers, micro-controllers, microprocessors, other electronic units configured to perform the functions described herein, or a combination thereof.
For a software implementation, the techniques described herein may be implemented with modules (e.g., procedures, functions, and so on) that perform the functions described herein. The software codes may be stored in a memory and executed by a processor. The memory may be implemented within the processor or external to the processor.
Optionally, as another embodiment, the processor 703 is further configured to, when running the computer program, perform the steps of the method of any of the preceding embodiments.
Optionally, in some embodiments, refer to fig. 8, which shows a schematic structural diagram of another electronic device 70 provided in the embodiments of the present application. As shown in fig. 8, the electronic device 70 may include at least the exposure control apparatus 60 described in any of the foregoing embodiments.
In the embodiment of the present application, for the electronic device 70, when the number of statistical pixels in the target region exceeds the first threshold, it means that overexposure occurs in the target region, and at this time, only two frames are needed to adjust the integration time to the optimal integration time; under the condition that the number of the statistical pixels in the target area does not exceed the first threshold, the target area is not overexposed, and at the moment, the integration time can be adjusted to the optimal integration time according to the corresponding relation between the integration time and the depth, so that the image achieves the optimal accuracy, and meanwhile, the imaging quality of the image is improved.
It should be noted that, in the present application, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The above-mentioned serial numbers of the embodiments of the present application are merely for description and do not represent the merits of the embodiments.
The methods disclosed in the several method embodiments provided in the present application may be combined arbitrarily without conflict to obtain new method embodiments.
Features disclosed in several of the product embodiments provided in the present application may be combined in any combination to yield new product embodiments without conflict.
The features disclosed in the several method or apparatus embodiments provided in the present application may be combined arbitrarily, without conflict, to arrive at new method embodiments or apparatus embodiments.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (12)

1. An exposure control method, characterized in that the method comprises:
acquiring a phase mean image, an amplitude image and a depth image corresponding to a current image frame;
determining a first target block in the phase mean image and a second target block in the amplitude image;
counting the number of pixels outside the pixel threshold range of the first target block to obtain the number of counted pixels;
under the condition that the number of the statistical pixels does not exceed a first threshold value, acquiring a depth block corresponding to the second target block from the depth image, and calculating a depth mean value of the depth block;
and determining target integration time corresponding to the depth mean value according to the corresponding relation between the integration time and the depth, and determining exposure parameters of the next image frame according to the target integration time.
2. The method of claim 1, wherein after obtaining the statistical number of pixels, the method further comprises:
and under the condition that the number of the statistical pixels exceeds a first threshold value, determining preset minimum integration time as the target integration time, and determining the exposure parameter of the next image frame according to the target integration time.
3. The method of claim 1, wherein determining the first target block in the phase-mean image comprises:
carrying out block division on the phase mean value image to obtain a plurality of phase mean value blocks;
calculating the standard difference value corresponding to each of the plurality of phase mean value blocks;
and determining the maximum standard deviation value from the standard deviation values, and taking the phase mean value block corresponding to the maximum standard deviation value as the first target block.
4. The method of claim 1, wherein determining the second target block in the magnitude image comprises:
carrying out block division on the amplitude image to obtain a plurality of amplitude blocks;
calculating the mean value corresponding to each of the plurality of amplitude blocks;
and determining the maximum mean value from the mean values, and taking an amplitude block corresponding to the maximum mean value as the second target block.
5. The method of claim 1, wherein each pixel in the phase mean image is obtained by performing a mean calculation based on four phases; wherein the four phases include: 0 degrees, 90 degrees, 180 degrees, and 270 degrees.
6. The method according to any one of claims 1 to 5, further comprising:
acquiring amplitude sample images and depth sample images respectively corresponding to a plurality of integration times at a target depth;
calculating energy values corresponding to the different integration times according to the amplitude sample images and the depth sample images;
performing a gradient calculation on the current integration time according to the energy values to obtain a gradient value corresponding to the current integration time;
determining, if the gradient value corresponding to the current integration time meets a preset condition, an optimal integration time corresponding to the target depth based on a calculation between the current integration time and a first preset value; and
establishing the correspondence between integration time and depth after the optimal integration time corresponding to each of at least one depth is obtained.
7. The method according to claim 6, wherein performing the gradient calculation on the current integration time according to the energy values to obtain the gradient value corresponding to the current integration time comprises:
determining, when the current integration time is the i-th integration time, the energy value corresponding to the i-th integration time and the energy value corresponding to the (i+1)-th integration time; and
subtracting the energy value corresponding to the (i+1)-th integration time from the energy value corresponding to the i-th integration time to obtain the gradient value corresponding to the i-th integration time, and taking the gradient value corresponding to the i-th integration time as the gradient value corresponding to the current integration time, wherein i is an integer greater than or equal to 2.
8. The method of claim 7, wherein after obtaining the gradient value corresponding to the i-th integration time, the method further comprises:
determining the energy value corresponding to the first integration time, and subtracting the energy value corresponding to the first integration time from the energy value corresponding to the i-th integration time to obtain a first intermediate value;
dividing the first intermediate value by (i-1) to obtain a second intermediate value;
comparing the gradient value corresponding to the i-th integration time with a preset multiple of the second intermediate value; and
determining, if the gradient value corresponding to the i-th integration time is smaller than the preset multiple of the second intermediate value, that the gradient value corresponding to the current integration time meets the preset condition.
9. The method according to claim 8, wherein after comparing the gradient value corresponding to the i-th integration time with the preset multiple of the second intermediate value, the method further comprises:
increasing i by 1 if the gradient value corresponding to the i-th integration time is not smaller than the preset multiple of the second intermediate value, and returning to the step of performing the subtraction calculation to obtain the gradient value corresponding to the i-th integration time.
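Claims 6 through 9 together describe a stopping rule over per-integration-time energy values. The sketch below is one plausible reading of the machine-translated claims, not the patented implementation: it assumes an energy curve that rises with integration time and then saturates, and flips the subtraction order stated in claim 7 so that both the local gradient and the average gradient are positive on such a curve (the translated claims are internally ambiguous about sign). `k` stands in for the "preset multiple" of claim 8, and the indices are 1-based as in the claims.

```python
def optimal_integration_index(energies, k=0.5):
    # energies[0] is the energy at the first integration time; e(i) below
    # gives 1-based access to match the claim numbering.
    e = lambda i: energies[i - 1]
    i = 2                                  # claim 7: i starts at 2
    while i + 1 <= len(energies):
        grad = e(i + 1) - e(i)             # local gain (claim 7, sign-adjusted)
        avg = (e(i) - e(1)) / (i - 1)      # average gain so far (claim 8)
        if grad < k * avg:                 # preset condition met: plateau found
            return i                       # index of the current integration time
        i += 1                             # claim 9: advance and re-test
    return i
```

Per claim 6, the returned integration time would then be combined with a first preset value to yield the optimal integration time for the target depth, and repeating this at several depths yields the integration-time-versus-depth correspondence used by claim 1.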
10. An exposure control apparatus, comprising an acquisition unit, a determination unit, a counting unit, and a calculation unit, wherein:
the acquiring unit is configured to acquire a phase mean image, an amplitude image and a depth image corresponding to a current image frame;
the determining unit is configured to determine a first target block in the phase mean image and a second target block in the amplitude image;
the counting unit is configured to count the number of pixels outside a pixel threshold range of the first target block to obtain a counted number of pixels;
the calculating unit is configured to acquire a depth block corresponding to the second target block from the depth image and to calculate a depth mean value of the depth block when the counted number of pixels does not exceed a first threshold; and
the determining unit is further configured to determine a target integration time corresponding to the depth mean value according to a correspondence between integration time and depth, and to determine an exposure parameter of a next image frame according to the target integration time.
11. An electronic device, comprising a memory and a processor, wherein:
the memory is configured to store a computer program executable on the processor; and
the processor is configured to perform, when running the computer program, the method of any one of claims 1 to 9.
12. A computer storage medium, characterized in that the computer storage medium stores a computer program which, when executed by at least one processor, implements the method of any one of claims 1 to 9.
CN202111141160.4A 2021-09-28 2021-09-28 Exposure control method, device, equipment and computer storage medium Active CN113572973B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111141160.4A CN113572973B (en) 2021-09-28 2021-09-28 Exposure control method, device, equipment and computer storage medium

Publications (2)

Publication Number Publication Date
CN113572973A true CN113572973A (en) 2021-10-29
CN113572973B CN113572973B (en) 2021-12-17

Family

ID=78174860

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111141160.4A Active CN113572973B (en) 2021-09-28 2021-09-28 Exposure control method, device, equipment and computer storage medium

Country Status (1)

Country Link
CN (1) CN113572973B (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050264684A1 (en) * 2004-05-31 2005-12-01 Konica Minolta Holdings, Inc. Image sensing apparatus
JP2012222068A (en) * 2011-04-06 2012-11-12 Advantest Corp Electron beam exposure device and electron beam exposure method
JP2013254165A (en) * 2012-06-08 2013-12-19 Canon Inc Pattern forming method
CN107707838A (en) * 2017-09-11 2018-02-16 广东欧珀移动通信有限公司 Image processing method and device
CN108848320A (en) * 2018-07-06 2018-11-20 京东方科技集团股份有限公司 Depth detection system and its exposure time adjusting method
CN108965732A (en) * 2018-08-22 2018-12-07 Oppo广东移动通信有限公司 Image processing method, device, computer readable storage medium and electronic equipment
CN112073646A (en) * 2020-09-14 2020-12-11 哈工大机器人(合肥)国际创新研究院 Method and system for TOF camera long and short exposure fusion
CN112153297A (en) * 2019-06-27 2020-12-29 浙江大华技术股份有限公司 Exposure adjusting method and device, and storage device
EP3782551A1 (en) * 2019-08-23 2021-02-24 Koninklijke Philips N.V. System for x-ray dark-field, phase contrast and attenuation image acquisition
CN113016175A (en) * 2020-07-20 2021-06-22 深圳市大疆创新科技有限公司 Method, system, movable platform and storage medium for determining exposure parameters of main camera device

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115134492A (en) * 2022-05-31 2022-09-30 北京极豪科技有限公司 Image acquisition method, electronic device and computer readable medium
CN115134492B (en) * 2022-05-31 2024-03-19 北京极光智芯科技有限公司 Image acquisition method, electronic device, and computer-readable medium
CN114785963A (en) * 2022-06-22 2022-07-22 武汉市聚芯微电子有限责任公司 Exposure control method, terminal and storage medium

Also Published As

Publication number Publication date
CN113572973B (en) 2021-12-17

Similar Documents

Publication Publication Date Title
CN113572973B (en) Exposure control method, device, equipment and computer storage medium
US10990825B2 (en) Image processing method, electronic device and computer readable storage medium
US7509038B2 (en) Determining maximum exposure time to limit motion blur during image capture
CN106067953B (en) Method and apparatus for determining lens shading correction for multi-camera device with various fields of view
US9068831B2 (en) Image processing apparatus and image processing method
US8542313B2 (en) Depth from defocus calibration
CN101543055B (en) Determination of mechanical shutter exposure time
CN102281401B (en) Thermal imaging camera
US9910247B2 (en) Focus hunting prevention for phase detection auto focus (AF)
CN112689100B (en) Image detection method, device, equipment and storage medium
CN113329188B (en) Exposure control method and device, electronic equipment and storage medium
CN111028179A (en) Stripe correction method and device, electronic equipment and storage medium
KR20160009638A (en) Automated gain matching for multiple microphones
CN110830789A (en) Overexposure detection method and device and overexposure suppression method and device
CN110913129B (en) Focusing method, device, terminal and storage device based on BP neural network
CN109031333B (en) Distance measuring method and device, storage medium, and electronic device
US7880780B2 (en) Sensor apparatus and method for noise reduction
WO2019223538A1 (en) Image processing method and apparatus, storage medium, and electronic device
CN110673428B (en) Structured light compensation method, device and equipment
CN111199225B (en) License plate calibration method and device
KR101436076B1 (en) Image processor and data processing method thereof
CN115546306B (en) Camera calibration method and device, electronic equipment and readable storage medium
US11951982B2 (en) Lane following assist apparatus and control method thereof
CN112929576B (en) Image processing method, device, equipment and storage medium
CN113329170B (en) Image shake correction method, image shake correction apparatus, computer device, and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant