WO2021046715A1 - Exposure time calculation method, device and storage medium - Google Patents

Exposure time calculation method, device and storage medium

Info

Publication number
WO2021046715A1
WO2021046715A1 (PCT/CN2019/105156; CN2019105156W)
Authority
WO
WIPO (PCT)
Prior art keywords
brightness
target object
image
ratio
target
Prior art date
Application number
PCT/CN2019/105156
Other languages
English (en)
French (fr)
Inventor
李明采
王波
Original Assignee
深圳市汇顶科技股份有限公司
Priority date
Filing date
Publication date
Application filed by 深圳市汇顶科技股份有限公司
Priority to PCT/CN2019/105156 (WO2021046715A1)
Priority to CN201980001903.2A (CN110731078B)
Publication of WO2021046715A1

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70Circuitry for compensating brightness variation in the scene
    • H04N23/73Circuitry for compensating brightness variation in the scene by influencing the exposure time

Definitions

  • the embodiments of the present application relate to the field of image processing technology, and in particular to a method, device, and storage medium for calculating exposure time.
  • one of the technical problems solved by the embodiments of the present application is to provide an exposure time calculation method, device, and storage medium, which overcome the problem in the prior art that an exposure time that is too long or too short leaves the detailed information of the image unclear and affects the quality of the collected images.
  • an embodiment of the present application provides an exposure time calculation method, which includes: performing image capture on a target object with a preset exposure duration to obtain a first captured image; determining the brightness of the target object according to the first captured image; and
  • calculating the target exposure time of the target object according to the brightness of the target object, the preset exposure duration, and the target brightness.
  • determining the brightness of the target object according to the first acquired image includes:
  • the brightness distribution sequence of the first acquired image is determined, and the brightness of the target object is determined according to the brightness distribution sequence.
  • determining the brightness of the target object according to the brightness distribution sequence includes:
  • weighting the brightness distribution sequence according to a preset weighting matrix, where the weighting matrix is used to indicate the probability that each pixel in a frame of image belongs to the area where the target object is located;
  • the brightness of the target object in the first collected image is determined according to the weighted brightness distribution sequence.
  • the method further includes:
  • weighting the brightness distribution sequence according to a preset weighting matrix includes:
  • the weighted brightness distribution sequence is obtained by multiplying the proportion of the number of pixels of each brightness in the brightness distribution sequence by the corresponding weight in the weighting matrix;
  • each pixel in the first captured image corresponds to the weight at the same position in the weighting matrix, wherein the number of pixels of each brightness is weighted once.
  • determining the brightness of the target object in the first acquired image according to the weighted brightness distribution sequence includes:
  • the brightness of the target object is determined according to a first ratio: the sum of the proportions of the numbers of pixels with brightness less than or equal to the brightness of the target object is equal to the first ratio, and the first ratio is less than or equal to 1.
  • the method further includes:
  • the first ratio is determined according to a second ratio of the number of pixels in the area where the target object is located in the second captured image to the number of pixels in the second captured image; the sum of the first ratio and the second ratio is greater than or equal to 1, and the first ratio is greater than the second ratio.
  • the method further includes:
  • the weight upper limit of the weighting matrix is calculated according to the second ratio, and the weighting matrix is quantized according to the weight upper limit, and the product of the second ratio and the weight upper limit is equal to 100.
  • calculating the target exposure duration of the target object according to the brightness of the target object, the preset exposure duration, and the target brightness includes:
  • the target exposure time of the target object is calculated from the brightness of the target object, the preset exposure time and the target brightness according to the ratio of brightness to exposure time.
  • the proportional relationship between brightness and exposure time is used to indicate that the ratio of the brightness of the target object to the preset exposure duration is equal to the ratio of the target brightness to the target exposure duration.
  • the method further includes:
  • when the maximum brightness of a first image captured of the target object with a first preset exposure duration is greater than or equal to a first threshold, the first image is taken as the first acquired image; or,
  • when the maximum brightness of a second image captured of the target object with a second preset exposure duration is less than or equal to a second threshold, the second image is taken as the first acquired image, where the first threshold is less than the second threshold.
  • the method further includes:
  • the first captured image is compressed in a manner of taking 1 pixel for every n pixels.
  • the brightness distribution sequence is expressed in the form of a histogram of pixel brightness (DN) values.
  • the preset exposure duration is shorter than the target exposure duration.
  • an embodiment of the present application provides a computing device, including: an image acquisition module, a brightness determination module, and an exposure module;
  • the image acquisition module is used for image acquisition of the target object with a preset exposure time to obtain the first acquisition image
  • the brightness determination module is configured to determine the brightness of the target object according to the first collected image
  • the exposure module is used to calculate the target exposure time of the target object according to the brightness of the target object, the preset exposure time length and the target brightness.
  • the brightness determination module is specifically configured to determine the brightness distribution sequence of the first acquired image, and determine the brightness of the target object according to the brightness distribution sequence.
  • the brightness determination module is specifically configured to weight the brightness distribution sequence according to a preset weighting matrix, where the weighting matrix is used to indicate the probability that each pixel in a frame of image belongs to the area where the target object is located, and to determine the brightness of the target object in the first captured image according to the weighted brightness distribution sequence.
  • the computing device further includes a matrix management module
  • the matrix management module is used to obtain at least one sample image of the target object; determine the area where the target object of each sample image is located; determine the weight of each pixel according to the number of times each pixel belongs to the area where the target object is located, and generate a weighting matrix.
  • the brightness determination module is further configured to multiply the proportion of the number of pixels of each brightness in the brightness distribution sequence by the corresponding weight in the weighting matrix to obtain the weighted brightness distribution sequence;
  • each pixel in the first captured image corresponds to the weight at the same position in the weighting matrix, wherein the number of pixels of each brightness is weighted once.
  • the brightness determination module is further configured to determine the brightness of the target object according to a first ratio in the weighted brightness distribution sequence; the sum of the proportions of the numbers of pixels with brightness less than or equal to the brightness of the target object is equal to the first ratio, and the first ratio is less than or equal to 1.
  • the calculation device further includes a ratio calculation module
  • the ratio calculation module is further used to determine the area where the target object is located in a second captured image of the target object, where the shooting distance of the second captured image is greater than or equal to a preset distance, and to determine the first ratio according to a second ratio of the number of pixels in the area where the target object is located in the second captured image to the number of pixels in the second captured image; the sum of the first ratio and the second ratio is greater than or equal to 1, and the first ratio is greater than the second ratio.
  • the computing device further includes a quantization module
  • the quantization module is further configured to calculate the upper limit of the weight of the weighting matrix according to the second ratio, and quantize the weighting matrix according to the upper limit of the weight, and the product of the second ratio and the upper limit of the weight is equal to 100.
  • the exposure module is further configured to calculate the target exposure time length of the target object according to the ratio relationship between the brightness and the exposure time length, the preset exposure time length, and the target brightness.
  • the proportional relationship between brightness and exposure duration is used to indicate that the ratio of the brightness of the target object to the preset exposure duration is equal to the ratio of the target brightness to the target exposure duration.
  • the computing device further includes a collection module
  • the acquisition module is configured to: when the maximum brightness of a first image captured of the target object with a first preset exposure duration as the exposure duration is greater than or equal to a first threshold, take the first image as the first captured image; or, when the maximum brightness of a second image captured of the target object with a second preset exposure duration as the exposure duration is less than or equal to a second threshold, take the second image as the first captured image, where the first threshold is less than the second threshold.
  • the computing device further includes a compression module
  • the compression module is used to compress the first collected image in a manner of taking 1 pixel for every n pixels.
  • the brightness distribution sequence is expressed in the form of a histogram of the pixel brightness value DN value.
  • the preset exposure duration is shorter than the target exposure duration.
  • an electronic device including:
  • at least one processor;
  • Storage device for storing at least one program
  • when the at least one program is executed by the at least one processor, the at least one processor implements the method described in any embodiment of the present application.
  • an embodiment of the present application provides a computer-readable storage medium on which a computer program is stored, characterized in that, when the program is executed by a processor, a method as in any embodiment of the present application is implemented.
  • in the embodiments of the present application, the target object is imaged with the preset exposure duration to obtain the first captured image; the brightness of the target object is determined according to the first captured image; and the target exposure duration of the target object is calculated according to the brightness of the target object, the preset exposure duration, and the target brightness. Because the brightness of the target object is determined, the exposure duration at which the target object reaches the target brightness can be determined according to the proportional relationship. In an image then collected with this target exposure duration, since the exposure duration is calculated accurately, the brightness of the area where the target object is located is higher, the details are displayed more clearly, and the quality of the captured image is improved.
  • FIG. 1 is a flowchart of an exposure time calculation method provided by an embodiment of the present application;
  • FIG. 2 is a DN value histogram provided by an embodiment of the present application;
  • FIG. 3 is a schematic diagram of a weighting matrix provided by an embodiment of the present application;
  • FIG. 4a is a DN value histogram provided by an embodiment of the present application;
  • FIG. 4b is a DN value histogram provided by an embodiment of the present application;
  • FIG. 5 is a schematic diagram of a linear relationship between brightness and exposure duration provided by an embodiment of the present application;
  • FIG. 6 is a flowchart of a method for obtaining a weighting matrix provided by an embodiment of the present application;
  • FIG. 7 is a schematic diagram of the effect of an area where a target object is located provided by an embodiment of the present application;
  • FIG. 8 is a flowchart of a ratio calculation method provided by an embodiment of the present application;
  • FIG. 9 is a flowchart of an image acquisition method provided by an embodiment of the present application;
  • FIG. 10 is a structural diagram of a computing device provided by an embodiment of the present application;
  • FIG. 11 is a structural diagram of a computing device provided by an embodiment of the present application;
  • FIG. 12 is a structural diagram of a computing device provided by an embodiment of the present application;
  • FIG. 13 is a structural diagram of a computing device provided by an embodiment of the present application;
  • FIG. 14 is a structural diagram of a computing device provided by an embodiment of the present application;
  • FIG. 15 is a structural diagram of a computing device provided by an embodiment of the present application;
  • FIG. 16 is a structural diagram of an electronic device provided by an embodiment of the present application.
  • FIG. 1 is a flowchart of an exposure time calculation method provided by an embodiment of the application; the exposure time calculation method shown in FIG. 1 includes the following steps:
  • Step 101 Perform image capture on a target object with a preset exposure time to obtain a first captured image.
  • the target object may be a human face, a license plate, etc.
  • the target object refers to a certain object that needs to be registered/recognized; the word "target" merely denotes a single individual object and has no limiting effect.
  • the first captured image may be an image obtained by image capturing of the target object when the preset exposure time is the exposure time.
  • the preset exposure time is preferably relatively short, so as to determine the target exposure time that can be adapted to different environments.
  • Step 102 Determine the brightness of the target object according to the first collected image.
  • the brightness of the target object refers to the brightness displayed by the target object in the first collected image.
  • the brightness of the target object is also related to the brightness of the environment: if the environment is bright, the exposure is sufficient and the brightness of the target object is greater; if the environment is dark, the exposure is insufficient and the brightness of the target object is smaller.
  • for example, the average pixel brightness of the area where the target object is located can be calculated as the brightness of the target object; for another example, the brightness can be determined according to the brightness distribution sequence.
  • determining the brightness of the target object according to the first acquired image includes:
  • the brightness distribution sequence of the first acquired image is determined, and the brightness of the target object is determined according to the brightness distribution sequence.
  • optionally, determining the brightness of the target object according to the brightness distribution sequence includes: weighting the brightness distribution sequence according to a preset weighting matrix, where the weighting matrix is used to indicate the probability that each pixel in a frame of image belongs to the area where the target object is located; and determining the brightness of the target object in the first captured image according to the weighted brightness distribution sequence.
  • that is, the brightness distribution sequence of the first captured image may be determined, the brightness distribution sequence may be weighted according to the preset weighting matrix, and the brightness of the target object in the first captured image may be determined according to the weighted brightness distribution sequence.
  • the brightness can be represented by a gray value
  • the brightness distribution sequence of the first captured image can be represented as the gray value histogram of the first captured image, or as the DN (Digital Number, pixel brightness value) histogram of the first captured image.
  • Figure 2 is a DN value histogram provided by an embodiment of this application.
  • the horizontal axis represents the DN value (that is, the brightness/gray value)
  • the vertical axis represents the proportion of the number of pixels.
  • the points in the histogram represent the proportion of pixels with a certain brightness in the image. It should be noted that, in this application, the proportion of the number of pixels is only meant to indicate the number of pixels:
  • it can be expressed in the form of a ratio, or the number of pixels itself can be used directly. As long as the number of pixels can be expressed, the specific form of expression is not restricted in this application.
  • the number of pixels may be counted for the brightness of each level in the first acquired image.
  • assume the brightness of the first captured image is represented by a total of L bits, i.e., as an L-bit binary number;
  • the brightness range is then [0, 2^L − 1], which means the brightness value is divided into 2^L levels from darkest to brightest.
  • for example, when L = 8, the brightness value range is [0, 255], i.e., the brightness value is divided into 256 levels.
  • the number of pixels of each brightness can be counted according to the first formula. Taking brightness as the gray value as an example, the first formula is as follows:
  • h_i = Σ_{x=1}^{m} Σ_{y=1}^{n} C_i(x, y), where C_i(x, y) = 1 if I(x, y) = i, and C_i(x, y) = 0 otherwise;
  • h_i represents the number of pixels with gray value i in the first captured image;
  • (x, y) represents the pixel with coordinates (x, y) in the first captured image;
  • m represents the number of pixels in each row of the first captured image;
  • n represents the number of pixels in each column of the first captured image, so m × n is the total number of pixels in the first captured image;
  • I(x, y) represents the gray value of the pixel with coordinates (x, y) in the first captured image;
  • C_i(x, y) indicates whether the gray value of the pixel with coordinates (x, y) in the first captured image is i, with C_i(x, y) = 1 indicating that it is;
  • this is only an exemplary description and does not limit the present application.
  • the first formula calculates the number of pixels for each gray value, and the number of pixels for each gray value is divided by the total number of pixels in the first collected image to obtain the proportion of the number of pixels for each gray value.
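The per-gray-value counting and normalization described by the first formula can be sketched as follows. This is a minimal illustration, assuming integer gray values and NumPy; `bincount` plays the role of summing the indicator C_i(x, y) over all pixels, and the function name is illustrative only.

```python
import numpy as np

def brightness_distribution(image: np.ndarray, levels: int = 256) -> np.ndarray:
    """Proportion of pixels at each gray value: h_i = number of pixels with
    I(x, y) == i, then h_i divided by m * n."""
    counts = np.bincount(image.ravel(), minlength=levels)  # h_i for each i
    return counts / image.size                             # divide by m * n

# A tiny 2x2 "image" with gray values 0, 0, 1, 3.
img = np.array([[0, 0], [1, 3]], dtype=np.uint8)
print(brightness_distribution(img, levels=4))  # proportions 0.5, 0.25, 0.0, 0.25
```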
  • the weight matrix is used to indicate the probability that each pixel in a frame of image belongs to the area where the target object is located.
  • the area where the target object is located refers to the area where the target object is displayed in the image, or the corresponding imaging area of the target object in the image.
  • the area where the target object is located may be preset, for example, set according to empirical values or according to experimental data.
  • a frame of image usually consists of many pixels. The number of rows and columns of each pixel can indicate the position of the pixel.
  • generally, the weight of the area where the target object is located is greater than the weight of the area where non-target objects are located, as shown in FIG. 3, which is a schematic diagram of a weighting matrix provided by an embodiment of the application.
  • a matrix element corresponds to the pixel at the same position, and the area where the target object is located is in the middle of the image; therefore, in the middle part of the weighting matrix corresponding to the target object, the weight value is mostly 40, while the weight value in the area of non-target objects is 1.
  • after weighting, the proportion of the number of pixels corresponding to the brightness in the area where the target object is located is enlarged, while the number of pixels corresponding to the brightness in the area of non-target objects remains unchanged.
  • the weight in the area of non-target objects can also be less than 1; in that case, the proportion of the number of pixels corresponding to the brightness in that area becomes smaller.
  • optionally, weighting the brightness distribution sequence according to a preset weighting matrix includes: multiplying the proportion of the number of pixels of each brightness in the brightness distribution sequence by the corresponding weight in the weighting matrix to obtain a weighted brightness distribution sequence, where each pixel in the first captured image corresponds to the weight at the same position in the weighting matrix and the number of pixels of each brightness is weighted once.
  • FIG. 4a is a DN value histogram provided by an embodiment of the application.
  • the DN value histogram shown in FIG. 4a shows the proportion of each DN value in the first captured image before weighting.
  • the abscissa represents the DN value
  • the ordinate represents the proportion of the number of pixels.
  • each brightness in the brightness distribution sequence refers to each brightness value within the brightness value range.
  • the proportion of the number of pixels of each brightness can be multiplied by the corresponding weight, or the number of pixels of each brightness can be multiplied by the corresponding weight.
  • the weighting matrix can be the same size as the first captured image, so that each pixel corresponds to one weight.
  • the proportion of the number of pixels corresponding to the brightness of a pixel is multiplied by that pixel's weight to obtain the new pixel-count proportion for that brightness; the proportion corresponding to the brightness of each pixel is weighted in this way to obtain the weighted brightness distribution sequence.
  • the brightness is expressed as a gray value as an example.
  • the number of pixels of each brightness can be weighted according to the second formula.
  • the number of pixels of each brightness divided by the total number of pixels in the first captured image is the proportion of the number of pixels of that brightness; the following takes weighting the number of pixels of each brightness as an example.
  • the second formula is as follows: h_i' = W(x, y) × h_i, where W(x, y) is the weight in the weighting matrix corresponding to a pixel (x, y) whose gray value is i, applied only if the count of gray value i has not yet been weighted.
  • the third formula is as follows: P_s = Σ_{i=0}^{s} h_i'.
  • P_s represents the sum, after weighting, of the pixels whose gray value is less than or equal to s, where s is an integer within [1, 2^L − 1]. If P_{s−1} is less than the preset threshold and P_s is greater than or equal to the preset threshold, then s is determined as the brightness of the target object. Because the number of pixels in the area where the target object is located is weighted, that number becomes larger; therefore, the gray value s that first makes the sum reach the preset threshold can be taken as a gray value with a relatively large number of pixels (i.e., a gray value of the area where the target object is located), and s can thus be determined as the brightness of the target object.
  • h i represents the number of pixels with gray value i in the first captured image
  • L represents the number of bits of the gray value.
  • h_i' represents the number of pixels with gray value i after weighting. During the calculation there may be multiple pixels with the same gray value; in an optional implementation, each gray value is weighted only once. When multiple pixels share a gray value, the weighting can follow a preset priority or preset order, for example starting from the area where the target object is located.
  • in that case the preset threshold can be set larger, because the gray values of the area where the target object is located are all weighted. Alternatively, weighting can start from the first row and first column and proceed row by row (or column by column) in the pixel order of the first captured image; in that case the preset threshold can be set smaller, because some pixels in the area where the target object is located may not be weighted. A weighted gray value is marked, and when a pixel with that gray value is encountered again it is skipped.
  • T i indicates whether the number of pixels with a gray value of i has been weighted
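The once-per-gray-value weighting (the second formula together with the marker T_i) can be sketched as follows, assuming the row-by-row scan order described above; the function and variable names are illustrative.

```python
import numpy as np

def weight_distribution(image, weights, dist):
    """Apply h_i' = W(x, y) * h_i, weighting each gray value i only once
    (tracked by the marker T_i), scanning pixels row by row."""
    weighted = dist.astype(float)                  # copy of the proportions
    T = np.zeros(len(dist), dtype=bool)            # T_i: gray value i weighted yet?
    for (x, y), gray in np.ndenumerate(image):     # row-by-row pixel order
        if not T[gray]:
            weighted[gray] *= weights[x, y]
            T[gray] = True                         # skip this gray value later
    return weighted

img = np.array([[0, 0], [1, 3]])
wts = np.array([[40, 1], [1, 1]])                  # target-object area weighted 40
dist = np.array([0.5, 0.25, 0.0, 0.25])
print(weight_distribution(img, wts, dist))         # gray value 0 scaled once by 40
```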
  • optionally, determining the brightness of the target object in the first captured image according to the weighted brightness distribution sequence includes: determining the brightness of the target object according to a first ratio in the weighted brightness distribution sequence, where the sum of the proportions of the numbers of pixels with brightness less than or equal to the brightness of the target object is equal to the first ratio, and the first ratio is less than or equal to 1.
  • the first ratio may be 95% or 97.5%, etc., which is not limited in this application. Referring to Figures 4a and 4b, the first ratio is the ratio accumulated from the minimum brightness upward.
  • that is, the proportions of the number of pixels of each brightness are accumulated from the minimum brightness; when, after adding the proportion for brightness i, the accumulated sum equals the first ratio, the brightness i is taken as the brightness of the target object.
  • determining the brightness of the target object in the first acquired image according to the weighted brightness distribution sequence includes: weighting and summing all the brightness to obtain the target object according to the proportion of the number of pixels of each brightness The brightness.
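Reading the brightness off the weighted sequence with the first ratio amounts to a cumulative sum (the third formula). Renormalizing the weighted proportions first is an assumption here, since after weighting the sequence no longer sums to 1; the function name is illustrative.

```python
import numpy as np

def target_object_brightness(weighted_dist, first_ratio=0.95):
    """Smallest brightness s whose cumulative proportion P_s reaches the
    first ratio."""
    p = np.asarray(weighted_dist, dtype=float)
    p = p / p.sum()                  # assumption: renormalize after weighting
    cumulative = np.cumsum(p)        # P_s for every s
    return int(np.searchsorted(cumulative, first_ratio))

dist = np.array([0.1, 0.2, 0.4, 0.2, 0.1])
print(target_object_brightness(dist, 0.95))  # 4, since P_3 = 0.9 < 0.95 <= P_4
```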
  • Step 103 Calculate the target exposure duration of the target object according to the brightness of the target object, the preset exposure duration, and the target brightness.
  • calculating the target exposure duration of the target object according to the brightness of the target object, the preset exposure duration, and the target brightness includes:
  • the target exposure time of the target object is calculated from the brightness of the target object, the preset exposure time and the target brightness according to the ratio of brightness to exposure time.
  • the proportional relationship between brightness and exposure duration is used to indicate that the ratio of the brightness of the target object to the preset exposure duration is equal to the ratio of the target brightness to the target exposure duration.
  • FIG. 5 is a schematic diagram of a linear relationship between brightness and exposure time according to an embodiment of the application.
  • as shown in FIG. 5, when the brightness is less than or equal to 900, the brightness increases linearly with the exposure duration;
  • when the brightness is greater than 900, the brightness increases nonlinearly with the exposure duration and the increase is very small. Therefore, when the brightness is greater than 900, the image can be considered overexposed, and 900 can be used as the target brightness; that is, when the brightness of the target object is 900, the display effect of the image is relatively clear.
  • the target exposure duration can be calculated according to the fourth formula, which is as follows: dn_short / exp_short = dn_target / exp_target, i.e., exp_target = exp_short × dn_target / dn_short, where:
  • exp short represents the preset exposure time
  • dn short represents the brightness of the target object
  • exp target represents the target exposure time
  • dn target represents the target brightness.
  • through steps 101-102, the brightness of the target object is calculated; the preset exposure duration is known; and the target brightness is the expected brightness set for the target object. Therefore, the target exposure duration can be obtained from the proportional relationship of the fourth formula.
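The fourth formula reduces to a one-line proportion. A sketch, where the time unit is arbitrary (the microseconds in the comment are an assumption) and the default target brightness 900 follows the FIG. 5 discussion:

```python
def target_exposure(exp_short, dn_short, dn_target=900):
    """Fourth formula: dn_short / exp_short = dn_target / exp_target,
    so exp_target = exp_short * dn_target / dn_short."""
    return exp_short * dn_target / dn_short

# A preset exposure of 1000 (e.g. microseconds) yielding brightness 300
# implies that brightness 900 needs an exposure three times as long.
print(target_exposure(1000, 300))  # 3000.0
```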
  • optionally, before step 101, the method further includes: when the maximum brightness of a first image captured of the target object with a first preset exposure duration as the exposure duration is greater than or equal to a first threshold, taking the first image as the first captured image; or, when the maximum brightness of a second image captured of the target object with a second preset exposure duration as the exposure duration is less than or equal to a second threshold, taking the second image as the first captured image, where the first threshold is less than the second threshold and the first preset exposure duration is less than the second preset exposure duration.
  • that is, the brightness of the first captured image may be required to be between the first threshold and the second threshold: the maximum brightness of the first captured image is greater than or equal to the first threshold and less than or equal to the second threshold, for example between 100 and 900. The captured image can be checked after acquisition, and if it does not meet the 100-900 brightness requirement, it is re-collected.
  • the size of the first threshold and the second threshold can be flexibly configured according to application scenarios.
  • optionally, before step 102, the method further includes: compressing the first captured image by taking 1 pixel out of every n pixels.
  • the amount of calculation is greatly reduced, and the speed of calculating the exposure time is improved.
  • for example, pixels can be extracted at intervals of 4 rows and 4 columns, reducing the number of pixels by a factor of 16; the time for computing the brightness distribution sequence (DN value histogram / gray value histogram) is likewise reduced by a factor of 16. Moreover, because one pixel is uniformly extracted from every 4 rows and 4 columns, the characteristics of the original image are preserved, and the brightness of the target object can still be accurately calculated.
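The 1-pixel-per-n compression is plain strided subsampling; with a 4x4 stride it keeps 1 of every 16 pixels. A minimal sketch, with the function name illustrative:

```python
import numpy as np

def compress(image: np.ndarray, step: int = 4) -> np.ndarray:
    """Uniformly keep one pixel from every step x step block."""
    return image[::step, ::step]

img = np.arange(64 * 64, dtype=np.uint16).reshape(64, 64)
small = compress(img, 4)
print(small.shape, img.size // small.size)  # (16, 16) 16
```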
  • the exposure time calculation method provided in the embodiments of the present application can be applied to a scene that uses face recognition for identity verification.
  • in such scenes, the position of the face is relatively fixed, and the area where the target object is located in the image (that is, the area where the face is located) is also relatively fixed, which allows the exposure time to be calculated more accurately, improves the quality of image collection, and facilitates more accurate face recognition.
  • this is only an exemplary description, which does not mean that the application is limited to this.
  • the target object is imaged with the preset exposure time to obtain the first captured image; the brightness of the target object is determined according to the first captured image; and the target exposure time of the target object is calculated according to the brightness of the target object, the preset exposure time, and the target brightness.
  • because the brightness of the target object is determined, the target exposure time at which the target object reaches the target brightness can be derived from the proportional relationship. As a result, in images subsequently collected of the target object with this target exposure time, because the exposure time is calculated accurately, the area where the target object is located is brighter and details are displayed more clearly, improving the quality of the captured image.
  • the second embodiment of the present application provides a method for obtaining a weighting matrix, which explains how to obtain a weighting matrix involved in the foregoing embodiment, as shown in FIG. 6.
  • This is a flowchart of a method for obtaining a weighting matrix provided in an embodiment of this application. The method includes the following steps:
  • Step 601 Acquire at least one sample image of the target object.
  • the at least one sample image may be obtained by shooting the target object at a plurality of different shooting distances and a plurality of different exposure times. That is, the shooting distance of each sample image can be the same or different, and the exposure time of each sample image can be the same or different.
  • Step 602 Determine the area where the target object of each sample image is located.
  • a matrix can be created for each sample image; the size of the matrix is the same as the size of the sample image, and each matrix element corresponds to one pixel of the sample image.
  • the positions in the matrix corresponding to the area where the target object is located in sample image A are all marked.
  • the value of the element corresponding to the location of the area where the target object is located is 2, and the value of the element corresponding to the location of the area where the non-target object is located is 1.
  • this is only an exemplary description.
  • Step 603 Determine the weight of each pixel according to the number of times each pixel belongs to the area where the target object is located, and generate a weighting matrix.
  • the weighting matrix can be initialized to a matrix of all ones, that is, after initialization, the value of each element in the weighting matrix is 1, and each element corresponds to a pixel at a position.
  • for each sample image, the value of each element in the weighting matrix corresponding to the location of the target object is increased by 1, while the value of each element corresponding to non-target locations remains unchanged.
  • each sample image is marked in this way: the higher the probability that the pixel at an element's location belongs to the area where the target object is located, the larger the value of that element.
  • alternatively, the value of the elements corresponding to the area where the target object is located in any sample image can be set to a preset value; if a matrix element does not correspond to the target-object area of any sample image, its value is 1. This is only an exemplary description and does not mean that the application is limited to this. As shown in FIG. 7, FIG. 7 is a schematic diagram of the effect of the area where the target object is located according to an embodiment of the application.
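Steps 601-603 can be sketched as follows (a minimal sketch assuming each sample's target-object area is given as a boolean mask; the mask format and function name are illustrative):

```python
def build_weight_matrix(masks, rows, cols):
    """Initialize every weight to 1, then add 1 for each sample image
    in which the pixel falls inside the target-object area, so the
    weight grows with the number of times a pixel was 'target'."""
    w = [[1] * cols for _ in range(rows)]
    for mask in masks:
        for r in range(rows):
            for c in range(cols):
                if mask[r][c]:
                    w[r][c] += 1
    return w

# Two 2x2 sample masks; the top-left pixel is target in both samples.
masks = [[[True, False], [False, False]],
         [[True, True], [False, False]]]
w = build_weight_matrix(masks, 2, 2)
```

Pixels that belong to the target-object area in more samples end up with larger weights, exactly the property the weighting matrix is meant to encode.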
  • after step 603, the method may further include step 604;
  • Step 604 Calculate the upper limit of the weight of the weighting matrix according to the second ratio, and quantize the weighting matrix according to the upper limit of the weight.
  • the second ratio can be used to determine the upper limit of the weight.
  • the second ratio is 2.5%
  • this formula indicates that if the area where the target object is located is at its smallest, its proportion is 2.5%; multiplying this by the weight upper limit of 40 yields 1 (i.e., 100%), as if the entire DN value histogram belonged to the target-object area. This guarantees that the weighted pixel count of the area where the target object is located is much higher than that of other areas, so the brightness of the target object can be determined easily. Of course, this is just an example; how to calculate the second ratio is described in detail in the third embodiment.
  • the largest element in the weighting matrix is 80 and the preset range is [1,40], all elements greater than 1 can be multiplied by 1/2, so that the value of all elements is less than or equal to 40.
  • the largest element in the weighting matrix is 80, and the preset range is [1,40]. You can set all elements less than or equal to 10 to 1, and set all elements greater than 10 to 40.
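The upper-limit rule and a quantization pass might look like this (a sketch; the product rule second-ratio-in-percent × upper-limit = 100 comes from the text, while the proportional scaling below is a generalization of the first strategy above, multiplying all elements greater than 1 by a common factor):

```python
def weight_upper_limit(second_ratio_percent):
    """The product of the second ratio (in percent) and the upper
    limit equals 100, e.g. 2.5% -> upper limit 40."""
    return 100 / second_ratio_percent

def quantize(weights, upper):
    """Scale all weights greater than 1 by a common factor so that
    the maximum weight does not exceed the upper limit."""
    peak = max(max(row) for row in weights)
    scale = upper / peak if peak > upper else 1.0
    return [[w * scale if w > 1 else w for w in row] for row in weights]

limit = weight_upper_limit(2.5)          # 40.0
q = quantize([[80, 1], [40, 2]], limit)  # the peak 80 is scaled down to 40
```

The alternative thresholding strategy from the text (everything above a cut-off becomes the upper limit, everything below becomes 1) would replace the scaling line with a comparison.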
  • this is only an exemplary description, and does not mean that the application is limited to this.
  • FIG. 8 is a flowchart of a method for calculating a ratio according to an embodiment of the application, and the method includes the following steps:
  • Step 801 Acquire a second collected image of the target object.
  • the shooting distance of the second collected image is greater than or equal to the preset distance.
  • the target object may be a human face
  • the preset distance may be 1.2 m
  • the preset distance may be the limit shooting distance of the face. If the distance is greater than the preset distance, the face cannot be recognized.
  • Step 802 Determine the area where the target object is located in the second captured image of the target object.
  • Step 803 Determine the first ratio according to the second ratio of the number of pixels in the area where the target object is located in the second collected image to the number of pixels in the second collected image.
  • the sum of the first ratio and the second ratio is greater than or equal to 1, and the first ratio is greater than the second ratio.
  • the second ratio is: (165 × 167) / (768 × 1308), which is about 2.74%
  • the first ratio can be any ratio greater than 97.26% and less than 100%, such as 97.5%. Because the shooting distance of the second captured image is already the limit distance, the second ratio can be considered the minimum proportion of the target object.
  • in the DN value histogram, the top 2.5% of values can be regarded as the area where non-target objects are located; of course, it could also be 2.74% or 2.6%.
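Steps 801-803 reduce to two small calculations (a sketch using the example region of 165 × 167 pixels in a 768 × 1308 image; the function names and the margin added to stay strictly above 1 − second are illustrative assumptions):

```python
def second_ratio(region_px, image_px):
    """Fraction of image pixels covered by the target object at the
    limit shooting distance; this is its minimum proportion."""
    return region_px / image_px

def first_ratio(second, margin=0.003):
    """Any ratio above 1 - second and below 1 satisfies the constraints
    (first + second >= 1, first > second); the text uses 97.5%."""
    return 1 - second + margin

s = second_ratio(165 * 167, 768 * 1308)  # about 0.0274 (2.74%)
f = first_ratio(s)                        # slightly above 0.9726 (97.26%)
```

With these example numbers the first ratio lands near the 97.5% mentioned in the text.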
  • the method can be applied to an exposure time calculation device, which can be a camera device, such as an infrared camera, a smartphone, a digital camera, a tablet computer, or other electronic equipment; the exposure time is usually calculated before taking an image.
  • for example, a camera device can take a face image for identity verification; for another example, a camera device on the door of a residential unit can take a face image for identity verification.
  • the exposure time calculation device may have a shooting function.
  • An embodiment of the application provides an image acquisition method. Refer to FIG. 9, which is a flowchart of an image acquisition method provided by an embodiment of the application. The method includes the following steps:
  • Step 901 Determine whether the exposure time is the second preset exposure time.
  • the first preset exposure duration is 1 ms
  • the first threshold is 100
  • the second preset exposure duration is 8 ms
  • the second threshold is 900.
  • when the exposure time is the second preset exposure duration, step 905 is executed; when the exposure time is not the second preset exposure duration, step 902 is executed.
  • Step 902 Use the exposure time as the first preset exposure time to perform image collection on the face to obtain a first image.
  • Step 903 Determine whether the maximum brightness of the first image is greater than a first threshold.
  • when the maximum brightness of the first image is greater than the first threshold, step 908 is executed; otherwise, step 904 is executed.
  • Step 904 Set the exposure time to the second preset exposure time length and return to Step 901.
  • Step 905 Use the exposure time as the second preset exposure time to perform image collection on the human face to obtain a second image.
  • Step 906 Determine whether the maximum brightness of the second image is less than a second threshold.
  • when the maximum brightness of the second image is less than the second threshold, step 908 is executed; otherwise, step 907 is executed.
  • Step 907 Set the exposure time to the first preset exposure time length and return to Step 901.
  • this embodiment makes a judgment when acquiring the first captured image: if it is an 8 ms short exposure, it judges whether the brightness is greater than the threshold 900; if so, the short exposure time is changed to 1 ms and one frame is re-exposed for calculation. If it is a 1 ms short exposure, it judges whether the brightness is less than the threshold 100; if so, the short exposure time is changed to 8 ms and one frame is re-exposed for calculation.
  • Step 908 Use the first image as the first collected image, and obtain a histogram of the DN value of the first collected image.
  • Step 909 Calculate the target exposure time of the face, and set the exposure time according to the target exposure time to collect the face image.
  • in step 909, the target exposure time of the human face can be calculated according to the exposure time calculation method described in the first embodiment, which will not be repeated here.
  • the embodiment of the application weights the DN value histogram distribution of the short-exposure image through a weighting matrix to increase the proportion of the face area in the DN value histogram, so that the acquired face brightness is closer to the real brightness of the face area in the first collected image.
  • the 1 ms exposure time is suitable for outdoor image acquisition, and the 8 ms exposure time is suitable for indoor image acquisition. This is compatible with the automatic exposure scheme of an indoor/outdoor infrared camera, so that outdoor images are not over-exposed and indoor images are not under-exposed, achieving better calculation accuracy.
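The threshold branching in steps 901-907 can be sketched as a small state check (a minimal sketch; the function name and return convention are illustrative, with the 1 ms / 8 ms durations and 100 / 900 thresholds taken from the example values above):

```python
SHORT_MS, LONG_MS = 1, 8   # first / second preset exposure durations
LOW, HIGH = 100, 900       # first / second brightness thresholds

def next_exposure(current_ms, max_brightness):
    """Return (exposure_to_use, accepted). A 1 ms frame is accepted
    when its maximum brightness reaches the lower threshold; an 8 ms
    frame is accepted when it stays below the upper threshold.
    Otherwise switch to the other preset and re-expose one frame."""
    if current_ms == SHORT_MS:
        if max_brightness < LOW:            # too dark at 1 ms -> go long
            return LONG_MS, False
        return SHORT_MS, True
    if max_brightness > HIGH:               # over-exposed at 8 ms -> go short
        return SHORT_MS, False
    return LONG_MS, True
```

The accepted frame then plays the role of the first collected image in steps 908-909.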
  • an embodiment of the present application provides a computing device for executing the methods described in the first to fourth embodiments.
  • the computing device 100 includes: an image acquisition module 1001, a brightness determination module 1002, and an exposure module 1003;
  • the image acquisition module 1001 is configured to perform image acquisition on the target object with a preset exposure time length to obtain the first acquired image
  • the brightness determination module 1002 is configured to determine the brightness of the target object according to the first collected image
  • the exposure module 1003 is used to calculate the target exposure duration of the target object according to the brightness of the target object, the preset exposure duration and the target brightness.
  • the brightness determination module 1002 is specifically configured to determine the brightness distribution sequence of the first acquired image, and determine the brightness of the target object according to the brightness distribution sequence.
  • the brightness determining module 1002 is specifically configured to weight the brightness distribution sequence according to a preset weighting matrix, where the weighting matrix is used to indicate the probability that each pixel in a frame of image belongs to the area where the target object is located; the brightness of the target object in the first acquired image is determined according to the weighted brightness distribution sequence.
  • the computing device 100 further includes a matrix management module 1004;
  • the matrix management module 1004 is used to obtain at least one sample image of the target object, determine the area where the target object of each sample image is located; determine the weight of each pixel according to the number of times each pixel belongs to the area where the target object is located, and generate a weighting matrix .
  • the brightness determination module 1002 is further configured to multiply the pixel-count proportion of each brightness in the brightness distribution sequence by the corresponding weight in the weighting matrix to obtain the weighted brightness distribution sequence, where each pixel in the first acquired image corresponds to the weight at the same position in the weighting matrix, and the pixel-count proportion of each brightness is weighted once.
  • the brightness determination module 1002 is further configured to determine the brightness of the target object according to a first ratio in the weighted brightness distribution sequence: the sum of the pixel-count proportions of brightnesses less than or equal to the brightness of the target object equals the first ratio, and the first ratio is less than or equal to 1.
  • the calculation device 100 further includes a ratio calculation module 1005;
  • the ratio calculation module 1005 is also used to determine the area where the target object is located in the second captured image of the target object, where the shooting distance of the second captured image is greater than or equal to the preset distance; and to determine the first ratio according to the second ratio of the number of pixels in the area where the target object is located in the second captured image to the number of pixels in the second captured image, where the sum of the first ratio and the second ratio is greater than or equal to 1 and the first ratio is greater than the second ratio.
  • the computing device 100 further includes a quantization module 1006;
  • the quantization module 1006 is further configured to calculate the upper limit of the weight value of the weighting matrix according to the second ratio, and quantize the weighting matrix according to the upper limit of the weight value, and the product of the second ratio and the upper limit of the weight value is equal to 100.
  • the exposure module 1003 is further configured to calculate the target exposure duration of the target object according to the ratio relationship between the brightness and the exposure duration, the brightness of the target object, the preset exposure duration and the target brightness,
  • the ratio of the brightness to the exposure duration is used to indicate that the ratio between the brightness of the target object and the preset exposure duration is equal to the ratio of the target brightness to the target exposure duration.
  • the computing device 100 further includes a collection module 1007;
  • the acquisition module 1007 is configured to: when the maximum brightness of the first image, acquired of the target object with the first preset exposure duration as the exposure time, is greater than or equal to the first threshold, use the first image as the first acquired image; or, when the maximum brightness of the second image, acquired of the target object with the second preset exposure duration as the exposure time, is less than or equal to the second threshold, use the second image as the first acquired image, where the first threshold is less than the second threshold.
  • the computing device 100 further includes a compression module 1008;
  • the compression module 1008 is configured to compress the first collected image by taking 1 pixel out of every n pixels.
  • the brightness distribution sequence is expressed in the form of a histogram of the pixel brightness value DN value.
  • the preset exposure duration is shorter than the target exposure duration.
  • an embodiment of the present application provides an electronic device for executing the methods described in the first to fourth embodiments.
  • the electronic device 160 includes: at least one processor 1602; and a memory 1604 configured to store at least one program 1606 which, when executed by the at least one processor 1602, causes the at least one processor 1602 to implement the methods described in the first to fourth embodiments.
  • an embodiment of the present application provides a computer-readable storage medium on which a computer program is stored.
  • when the program is executed by a processor, the methods described in the first to fourth embodiments are implemented.
  • the computing devices and electronic equipment of the embodiments of the present application exist in various forms, including but not limited to:
  • Mobile communication equipment This type of equipment is characterized by mobile communication functions, and its main goal is to provide voice and data communications.
  • Such terminals include: smart phones (such as iPhone), multimedia phones, functional phones, and low-end phones.
  • Ultra-mobile personal computer equipment This type of equipment belongs to the category of personal computers, has calculation and processing functions, and generally also has mobile Internet features.
  • Such terminals include: PDA, MID and UMPC devices, such as iPad.
  • Portable entertainment equipment This type of equipment can display and play multimedia content.
  • Such devices include: audio, video players (such as iPod), handheld game consoles, e-books, as well as smart toys and portable car navigation devices.
  • Server A device that provides computing services.
  • the structure of a server includes a processor 810, hard disk, memory, system bus, etc.
  • the server is similar to a general computer architecture, but because it needs to provide highly reliable services, it has high requirements in terms of performance, reliability, security, scalability, and manageability.
  • for the improvement of a technology, a clear distinction can be made between hardware improvements (for example, improvements in circuit structures such as diodes, transistors, and switches) and software improvements (improvements in method flow).
  • the improvement of many methods and processes of today can be regarded as a direct improvement of the hardware circuit structure.
  • Designers almost always get the corresponding hardware circuit structure by programming the improved method flow into the hardware circuit. Therefore, it cannot be said that the improvement of a method flow cannot be realized by the hardware entity module.
  • for example, programmable logic devices (PLDs), such as the Field Programmable Gate Array (FPGA), are programmed by the designer using a hardware description language (HDL); there is not just one HDL, but many, such as ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), HDCal, JHDL, Lava, Lola, MyHDL, PALASM, RHDL, VHDL (Very-High-Speed Integrated Circuit Hardware Description Language), and Verilog.
  • the controller can be implemented in any suitable manner.
  • the controller can take the form of, for example, a microprocessor or processor together with a computer-readable medium storing computer-readable program code (such as software or firmware) executable by the (micro)processor, logic gates, switches, application-specific integrated circuits (ASICs), programmable logic controllers, and embedded microcontrollers.
  • examples of controllers include, but are not limited to, the following microcontrollers: ARC625D, Atmel AT91SAM, Microchip PIC18F26K20, and Silicon Labs C8051F320; a memory controller can also be implemented as part of the memory control logic.
  • in addition to implementing the controller purely as computer-readable program code, it is entirely possible, by logically programming the method steps, to have the controller realize the same functions in the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers, and the like. Therefore, such a controller can be regarded as a hardware component, and the devices included in it for realizing various functions can also be regarded as structures within the hardware component. Or even, a device for realizing various functions can be regarded as both a software module implementing the method and a structure within a hardware component.
  • a typical implementation device is a computer.
  • the computer may be, for example, a personal computer, a laptop computer, a cell phone, a camera phone, a smart phone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or Any combination of these devices.
  • this application can be provided as methods, systems, or computer program products. Therefore, this application may adopt the form of a complete hardware embodiment, a complete software embodiment, or an embodiment combining software and hardware. Moreover, this application may adopt the form of a computer program product implemented on at least one computer-usable storage medium (including but not limited to disk storage, CD-ROM, optical storage, etc.) containing computer-usable program code.
  • These computer program instructions can also be stored in a computer-readable memory that can guide a computer or other programmable data processing equipment to work in a specific manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including the instruction device.
  • the instruction device implements the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
  • these computer program instructions can also be loaded onto a computer or other programmable data processing equipment, so that a series of operation steps are executed on the computer or other programmable equipment to produce computer-implemented processing; the instructions executed on the computer or other programmable equipment thus provide steps for implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
  • the computing device includes at least one processor (CPU), input/output interface, network interface, and memory.
  • the memory may include non-permanent memory in a computer readable medium, random access memory (RAM) and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of computer readable media.
  • Computer-readable media include permanent and non-permanent, removable and non-removable media, and information storage can be realized by any method or technology.
  • the information can be computer-readable instructions, data structures, program modules, or other data.
  • Examples of computer storage media include, but are not limited to, phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, CD-ROM, digital versatile disc (DVD) or other optical storage, Magnetic cassettes, magnetic tape magnetic disk storage or other magnetic storage devices or any other non-transmission media can be used to store information that can be accessed by computing devices. According to the definition in this article, computer-readable media does not include transitory media, such as modulated data signals and carrier waves.
  • This application may be described in the general context of computer-executable instructions executed by a computer, such as a program module.
  • program modules include routines, programs, objects, components, data structures, etc. that perform specific transactions or implement specific abstract data types.
  • This application can also be practiced in distributed computing environments. In these distributed computing environments, transactions are executed by remote processing devices connected through a communication network. In a distributed computing environment, program modules can be located in local and remote computer storage media including storage devices.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)

Abstract

An exposure time calculation method, device, and storage medium. The exposure time calculation method includes: performing image acquisition on a target object with a preset exposure duration to obtain a first captured image; determining the brightness of the target object according to the first captured image; and calculating a target exposure duration of the target object according to the brightness of the target object, the preset exposure duration, and a target brightness. Because the exposure duration is calculated accurately, the area where the target object is located is brighter, details are displayed more clearly, and the quality of the captured image is improved.

Description

Exposure time calculation method, device, and storage medium

Technical Field

The embodiments of the present application relate to the field of image processing technology, and in particular to an exposure time calculation method, device, and storage medium.

Background

Because images can present information more intuitively, image acquisition is becoming increasingly important, especially the acquisition of images of specific objects, for example, face image acquisition and license plate image acquisition. Image quality is affected by the camera's light-sensing device (sensor), differences between lenses, and the exposure time. When recognizing certain specific objects, an exposure time that is too long or too short leads to over-exposure or under-exposure, making the detailed information of the image indistinct and degrading the quality of the captured images.

Summary

In view of this, one of the technical problems solved by the embodiments of the present application is to provide an exposure time calculation method, device, and storage medium, so as to overcome the defect in the prior art that the detailed information of the image is indistinct and the quality of the captured images is degraded because the exposure time is too long or too short.
In a first aspect, an embodiment of the present application provides an exposure time calculation method, including:

performing image acquisition on a target object with a preset exposure duration to obtain a first captured image;

determining the brightness of the target object according to the first captured image;

calculating a target exposure duration of the target object according to the brightness of the target object, the preset exposure duration, and a target brightness.

Optionally, in an embodiment of the present application, determining the brightness of the target object according to the first captured image includes:

determining a brightness distribution sequence of the first captured image, and determining the brightness of the target object according to the brightness distribution sequence.

Optionally, in an embodiment of the present application, determining the brightness of the target object according to the brightness distribution sequence includes:

weighting the brightness distribution sequence according to a preset weighting matrix, where the weighting matrix is used to indicate the probability that each pixel in a frame of image belongs to the area where the target object is located;

determining the brightness of the target object in the first captured image according to the weighted brightness distribution sequence.

Optionally, in an embodiment of the present application, the method further includes:

acquiring at least one sample image of the target object; determining, in the at least one sample image, the area where the target object is located in each sample image; determining the weight of each pixel according to the number of times the pixel belongs to the area where the target object is located, and generating the weighting matrix.

Optionally, in an embodiment of the present application, weighting the brightness distribution sequence according to the preset weighting matrix includes:

multiplying the pixel-count proportion of each brightness in the brightness distribution sequence by the corresponding weight in the weighting matrix to obtain the weighted brightness distribution sequence, where each pixel in the first captured image corresponds to the weight at the same position in the weighting matrix, and the pixel-count proportion of each brightness is weighted once.

Optionally, in an embodiment of the present application, determining the brightness of the target object in the first captured image according to the weighted brightness distribution sequence includes:

determining the brightness of the target object according to a first ratio in the weighted brightness distribution sequence, where the sum of the pixel-count proportions of brightnesses less than or equal to the brightness of the target object equals the first ratio, and the first ratio is less than or equal to 1.

Optionally, in an embodiment of the present application, the method further includes:

determining the area where the target object is located in a second captured image of the target object, where the shooting distance of the second captured image is greater than or equal to a preset distance;

determining the first ratio according to a second ratio of the number of pixels in the area where the target object is located in the second captured image to the number of pixels in the second captured image, where the sum of the first ratio and the second ratio is greater than or equal to 1, and the first ratio is greater than the second ratio.

Optionally, in an embodiment of the present application, the method further includes:

calculating a weight upper limit of the weighting matrix according to the second ratio, and quantizing the weighting matrix according to the weight upper limit, where the product of the second ratio and the weight upper limit equals 100.

Optionally, in an embodiment of the present application, calculating the target exposure duration of the target object according to the brightness of the target object, the preset exposure duration, and the target brightness includes:

calculating the target exposure duration of the target object from the brightness of the target object, the preset exposure duration, and the target brightness according to a proportional relationship between brightness and exposure duration, where the proportional relationship is used to indicate that the ratio of the brightness of the target object to the preset exposure duration equals the ratio of the target brightness to the target exposure duration.

Optionally, in an embodiment of the present application, the method further includes:

when the maximum brightness of a first image acquired of the target object with a first preset exposure duration as the exposure time is greater than or equal to a first threshold, taking the first image as the first captured image;

or, when the maximum brightness of a second image acquired of the target object with a second preset exposure duration as the exposure time is less than or equal to a second threshold, taking the second image as the first captured image, where the first threshold is less than the second threshold.

Optionally, in an embodiment of the present application, the method further includes:

compressing the first captured image by taking 1 pixel out of every n pixels.

Optionally, in an embodiment of the present application, the brightness distribution sequence is expressed in the form of a histogram of pixel brightness (DN) values.

Optionally, in an embodiment of the present application, the preset exposure duration is shorter than the target exposure duration.
In a second aspect, an embodiment of the present application provides a computing device, including: an image acquisition module, a brightness determination module, and an exposure module;

where the image acquisition module is configured to perform image acquisition on a target object with a preset exposure duration to obtain a first captured image;

the brightness determination module is configured to determine the brightness of the target object according to the first captured image;

the exposure module is configured to calculate a target exposure duration of the target object according to the brightness of the target object, the preset exposure duration, and a target brightness.

Optionally, in an embodiment of the present application, the brightness determination module is specifically configured to determine a brightness distribution sequence of the first captured image, and determine the brightness of the target object according to the brightness distribution sequence.

Optionally, in an embodiment of the present application, the brightness determination module is specifically configured to weight the brightness distribution sequence according to a preset weighting matrix, where the weighting matrix is used to indicate the probability that each pixel in a frame of image belongs to the area where the target object is located; and determine the brightness of the target object in the first captured image according to the weighted brightness distribution sequence.

Optionally, in an embodiment of the present application, the computing device further includes a matrix management module;

the matrix management module is configured to acquire at least one sample image of the target object; determine the area where the target object is located in each sample image; determine the weight of each pixel according to the number of times the pixel belongs to the area where the target object is located, and generate the weighting matrix.

Optionally, in an embodiment of the present application, the brightness determination module is further configured to multiply the pixel-count proportion of each brightness in the brightness distribution sequence by the corresponding weight in the weighting matrix to obtain the weighted brightness distribution sequence, where each pixel in the first captured image corresponds to the weight at the same position in the weighting matrix, and the pixel-count proportion of each brightness is weighted once.

Optionally, in an embodiment of the present application, the brightness determination module is further configured to determine the brightness of the target object according to a first ratio in the weighted brightness distribution sequence, where the sum of the pixel-count proportions of brightnesses less than or equal to the brightness of the target object equals the first ratio, and the first ratio is less than or equal to 1.

Optionally, in an embodiment of the present application, the computing device further includes a ratio calculation module;

the ratio calculation module is further configured to determine the area where the target object is located in a second captured image of the target object, where the shooting distance of the second captured image is greater than or equal to a preset distance; and determine the first ratio according to a second ratio of the number of pixels in the area where the target object is located in the second captured image to the number of pixels in the second captured image, where the sum of the first ratio and the second ratio is greater than or equal to 1, and the first ratio is greater than the second ratio.

Optionally, in an embodiment of the present application, the computing device further includes a quantization module;

the quantization module is further configured to calculate a weight upper limit of the weighting matrix according to the second ratio, and quantize the weighting matrix according to the weight upper limit, where the product of the second ratio and the weight upper limit equals 100.

Optionally, in an embodiment of the present application, the exposure module is further configured to calculate the target exposure duration of the target object from the brightness of the target object, the preset exposure duration, and the target brightness according to a proportional relationship between brightness and exposure duration, where the proportional relationship is used to indicate that the ratio of the brightness of the target object to the preset exposure duration equals the ratio of the target brightness to the target exposure duration.

Optionally, in an embodiment of the present application, the computing device further includes an acquisition module;

the acquisition module is configured to: when the maximum brightness of a first image acquired of the target object with a first preset exposure duration as the exposure time is greater than or equal to a first threshold, take the first image as the first captured image; or, when the maximum brightness of a second image acquired of the target object with a second preset exposure duration as the exposure time is less than or equal to a second threshold, take the second image as the first captured image, where the first threshold is less than the second threshold.

Optionally, in an embodiment of the present application, the computing device further includes a compression module;

the compression module is configured to compress the first captured image by taking 1 pixel out of every n pixels.

Optionally, in an embodiment of the present application, the brightness distribution sequence is expressed in the form of a histogram of pixel brightness (DN) values.

Optionally, in an embodiment of the present application, the preset exposure duration is shorter than the target exposure duration.
In a third aspect, an embodiment of the present application provides an electronic device, including:

at least one processor;

a storage device configured to store at least one program,

where, when the at least one program is executed by the at least one processor, the at least one processor implements the method described in any embodiment of the present application.

In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium on which a computer program is stored, where, when the program is executed by a processor, the method of any embodiment of the present application is implemented.

In the embodiments of the present application, image acquisition is performed on a target object with a preset exposure duration to obtain a first captured image; the brightness of the target object is determined according to the first captured image; and a target exposure duration of the target object is calculated according to the brightness of the target object, the preset exposure duration, and a target brightness. Once the brightness of the target object is determined, the target exposure duration at which the target object reaches the target brightness can be determined from the proportional relationship, so that in images subsequently acquired of the target object with this exposure duration, because the exposure duration is calculated accurately, the area where the target object is located is brighter, details are displayed more clearly, and the quality of the captured images is improved.
Brief Description of the Drawings

Some specific embodiments of the present application will be described in detail below in an exemplary rather than limiting manner with reference to the accompanying drawings. The same reference numerals in the drawings denote the same or similar components or parts. Those skilled in the art should understand that these drawings are not necessarily drawn to scale. In the drawings:

FIG. 1 is a flowchart of an exposure time calculation method provided by an embodiment of the present application;

FIG. 2 is a DN value histogram provided by an embodiment of the present application;

FIG. 3 is a schematic diagram of a weighting matrix provided by an embodiment of the present application;

FIG. 4a is a DN value histogram provided by an embodiment of the present application;

FIG. 4b is a DN value histogram provided by an embodiment of the present application;

FIG. 5 is a schematic diagram of a linear relationship between brightness and exposure time provided by an embodiment of the present application;

FIG. 6 is a flowchart of a weighting matrix acquisition method provided by an embodiment of the present application;

FIG. 7 is a schematic diagram of the effect of the area where a target object is located provided by an embodiment of the present application;

FIG. 8 is a flowchart of a ratio calculation method provided by an embodiment of the present application;

FIG. 9 is a flowchart of an image acquisition method provided by an embodiment of the present application;

FIG. 10 is a structural diagram of a computing device provided by an embodiment of the present application;

FIG. 11 is a structural diagram of a computing device provided by an embodiment of the present application;

FIG. 12 is a structural diagram of a computing device provided by an embodiment of the present application;

FIG. 13 is a structural diagram of a computing device provided by an embodiment of the present application;

FIG. 14 is a structural diagram of a computing device provided by an embodiment of the present application;

FIG. 15 is a structural diagram of a computing device provided by an embodiment of the present application;

FIG. 16 is a structural diagram of an electronic device provided by an embodiment of the present application.
Detailed Description

Implementing any technical solution of the embodiments of the present application does not necessarily require achieving all of the above advantages at the same time.

In order to enable those skilled in the art to better understand the technical solutions in the embodiments of the present application, the technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only a part of the embodiments of the present application, not all of them. Based on the embodiments of the present application, all other embodiments obtained by those of ordinary skill in the art shall fall within the protection scope of the embodiments of the present application.

The specific implementation of the embodiments of the present application is further described below with reference to the accompanying drawings.
Embodiment 1

FIG. 1 is a flowchart of an exposure time calculation method provided by an embodiment of the present application; the exposure time calculation method shown in FIG. 1 includes the following steps:

Step 101: Perform image acquisition on a target object with a preset exposure duration to obtain a first captured image.

In the embodiments of the present application, the target object may be a human face, a license plate, etc. In this application, the target object refers to a certain object that needs to be registered/recognized; "target" is only used to denote an individual object and has no limiting effect. The first captured image may be an image obtained by performing image acquisition on the target object with the preset exposure duration as the exposure time. In this embodiment, the preset exposure duration is preferably relatively short, in order to determine a target exposure duration that can adapt to different environments.

Step 102: Determine the brightness of the target object according to the first captured image.

It should be noted that the brightness of the target object refers to the brightness displayed by the target object in the first captured image. Of course, the brightness of the target object is also related to the ambient brightness: if the environment is relatively bright, the exposure is sufficient and the target object appears brighter; if the environment is relatively dark, the exposure is insufficient and the target object appears darker.

Determining the brightness of the target object can be implemented in multiple ways. For example, in the first captured image, the average pixel brightness of the area where the target object is located can be computed as the brightness of the target object; for another example, it can be determined according to a brightness distribution sequence.
Optionally, in an embodiment of the present application, determining the brightness of the target object according to the first captured image includes:

determining a brightness distribution sequence of the first captured image, and determining the brightness of the target object according to the brightness distribution sequence.

Optionally, in an embodiment of the present application, determining the brightness of the target object according to the brightness distribution sequence includes: weighting the brightness distribution sequence according to a preset weighting matrix, where the weighting matrix is used to indicate the probability that each pixel in a frame of image belongs to the area where the target object is located; and determining the brightness of the target object in the first captured image according to the weighted brightness distribution sequence.

Optionally, in an embodiment of the present application, the brightness distribution sequence of the first captured image can be determined, the brightness distribution sequence can be weighted according to the preset weighting matrix, and the brightness of the target object in the first captured image can be determined according to the weighted brightness distribution sequence.

Here, determining the brightness distribution sequence of the first captured image is described in detail:

In the embodiments of the present application, brightness can be represented by gray values. The brightness distribution sequence of the first captured image can be expressed as a gray value histogram of the first captured image, or as a DN (Digital Number, pixel brightness value) value histogram of the first captured image. As shown in FIG. 2, FIG. 2 is a DN value histogram provided by an embodiment of the present application. In FIG. 2, the horizontal axis represents the DN value (i.e., the brightness/gray value), and the vertical axis represents the pixel-count proportion; a point in the DN value histogram represents the proportion of pixels with a certain brightness in the image. It should be noted that, in this application, the pixel-count proportion is only used to represent the number of pixels: it can be expressed as a fraction, or the pixel count itself can be used directly. As long as the number of pixels can be represented, the specific form of expression is not limited in this application.
Optionally, in one embodiment, the number of pixels at each brightness level in the first captured image can be counted. For example, suppose the brightness of the first captured image is represented with a total of L bits, where L can be the number of binary digits; then the brightness range is [0, 2^L − 1]. It should be noted that the brightness range means that brightness values from darkest to brightest are divided into 2^L levels. For example, with L = 8, the maximum brightness is "11111111" (8 bits in total) and the brightness range is [0, 255], i.e., brightness values are divided into 256 levels; of course, this is only an exemplary description. The number of pixels at each brightness can be counted according to a first formula. Taking brightness expressed as a gray value as an example, the first formula is:

$$h_i=\sum_{x=1}^{m}\sum_{y=1}^{n}C_i(x,y),\quad i=0,1,\dots,2^L-1$$

where

$$C_i(x,y)=\begin{cases}1,& I(x,y)=i\\0,& I(x,y)\neq i\end{cases}$$

Here, h_i represents the number of pixels with gray value i in the first captured image; (x, y) represents the pixel at coordinates (x, y) in the first captured image; m represents the number of pixels in each row of the first captured image and n the number of pixels in each column, so m × n is the total number of pixels of the first captured image; I(x, y) represents the gray value of the pixel at coordinates (x, y); and C_i(x, y) indicates whether the gray value of the pixel at coordinates (x, y) is i: C_i(x, y) = 1 means the gray value of the pixel at (x, y) is i, and C_i(x, y) = 0 means it is not. Of course, this is only an exemplary description and does not mean that the application is limited to this.

Of course, the first formula computes the number of pixels for each gray value; dividing the number of pixels of each gray value by the total number of pixels of the first captured image yields the pixel-count proportion of each gray value.
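The first formula's per-gray-value counting can be sketched directly (a minimal sketch assuming a nested-list grayscale image; the function name is illustrative):

```python
def gray_histogram(image, levels=256):
    """First formula: h[i] counts pixels whose gray value equals i;
    dividing by the pixel total m*n gives the per-gray proportion."""
    h = [0] * levels
    for row in image:
        for value in row:
            h[value] += 1
    total = len(image) * len(image[0])
    return h, [count / total for count in h]

# A 2x3 image: three pixels at gray 0, two at gray 1, one at gray 255.
image = [[0, 0, 1], [255, 1, 0]]
h, p = gray_histogram(image)
```

The proportion list p is the brightness distribution sequence described above, here for L = 8 (256 levels).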
Here, the weighting matrix is described in detail:

The weighting matrix is used to indicate the probability that each pixel in a frame of image belongs to the area where the target object is located. The area where the target object is located refers to the area in which the target object is displayed in the image, that is, the imaging area corresponding to the target object. In this embodiment, the area where the target object is located is preset; for example, it can be set according to empirical values or according to experimental data. A frame of image usually consists of many pixels, and the row and column indices of each pixel can represent its position. In the weighting matrix, the weights of the area where the target object is located are greater than those of the non-target area. As shown in FIG. 3, FIG. 3 is a schematic diagram of a weighting matrix provided by an embodiment of the present application. In FIG. 3, each matrix element corresponds to the pixel at one position, and the area where the target object is located lies in the middle of the image. Therefore, in the middle part of the weighting matrix corresponding to the target object, most of the weights are 40, while in the non-target area all weights are 1. During weighting, the pixel-count proportions of the brightnesses in the area where the target object is located become larger after weighting, while those of the non-target area remain unchanged. Of course, the weights of the non-target area can also be smaller than 1, in which case the pixel-count proportions of the brightnesses in the non-target area become smaller during weighting.

Optionally, in an embodiment of the present application, weighting the brightness distribution sequence according to the preset weighting matrix includes: multiplying the pixel-count proportion of each brightness in the brightness distribution sequence by the corresponding weight in the weighting matrix to obtain the weighted brightness distribution sequence, where each pixel in the first captured image corresponds to the weight at the same position in the weighting matrix, and the pixel-count proportion of each brightness is weighted once.
此处,对如何确定第一采集图像中目标对象的亮度进行详细说明:
参照图4a所示,图4a为本申请实施例提供的一种DN值直方图,图4a所示的DN值直方图是第一采集图像中每个DN值的像素数量占比加权之前的DN值直方图;参照图4b所示,图4b为本申请实施例提供的一种DN值直方图,图4b所示的直方图是第一采集图像中每个DN值的像素数量占比加权之后的DN值直方图。图4a和图4b中,横坐标表示DN值, 纵坐标表示像素数量占比,对比图4b和图4a,因为目标对象所在区域中的DN值对应的像素数量占比经过加权后变大,因此,体现在DN值直方图中,DN值在(0,100)之间的像素数量占比明显增加。在像素数量占比明显增加的亮度区间内确定的目标对象的亮度就更加准确。
需要说明的是,亮度分布序列中每个亮度指的是亮度取值范围内的每个亮度值,可以对每个亮度的像素数量占比乘以对应的权值,也可以对每个亮度的像素数量乘以权值,加权矩阵可以和第一采集图像的大小相同,每个像素对应一个权值,对于位置相互对应的像素和权值,将像素的亮度对应的像素数量占比与权值相乘得到该亮度新的像素数量占比,对每个像素的亮度对应的像素数量占比都这样进行加权,即可得到加权后的亮度分布序列。此处,以亮度表示为灰度值为例进行说明,例如,可以根据第二公式计算对每个亮度的像素数量进行加权,当然,每个亮度的像素数量除以第一采集图像的像素总数量即为每个亮度的像素数量占比,此处以每个亮度的像素数量为例进行说明,第二公式如下:
h′_i = h_i × M_i(x,y)(i = 0, 1, …, 2^L−1),若 T_i = 0;
再根据第三公式将每个亮度加权后的像素数量进行求和,第三公式如下:
P_s = Σ_{i=0}^{s} h′_i
其中,P_s表示加权后灰度值小于或等于s的像素点数量之和,s是[1, 2^L−1]内的整数。如果P_{s−1}小于预设阈值,且P_s大于或等于预设阈值,则将s确定为目标对象的亮度。因为对目标对象所在区域的像素数量进行加权后,像素数量变大,所以,使加权后像素数量之和达到预设阈值的灰度值s可以认为是像素数量比较大的灰度值(即目标对象所在区域的灰度值),因此,可以将s确定为目标对象的亮度。
需要说明的是,h_i表示第一采集图像中灰度值为i的像素的数量,M_i(x,y)表示第一采集图像中灰度值I(x,y)=i且坐标为(x,y)的像素的权值,L表示灰度值的位数,h′_i表示加权后灰度值为i的像素的数量。在计算过程中,灰度值相同的像素点可能有多个,在一个可选的实现方式中,对每一个灰度值只加权一次,如果有多个像素点具有相同的灰度值,可以按照预设的优先级或者预设的顺序进行加权。比如,加权计算从目标对象所在区域开始,目标对象所在区域加权完成后,再对非目标对象所在区域进行加权,此时,可以将预设阈值设置得大一些,因为目标对象所在区域的灰度值都经过了加权;又如,可以按照第一采集图像中像素点的排列顺序,从第一行第一列开始,逐行进行加权,或者逐列进行加权,此时,可以将预设阈值设置得小一些,因为目标对象所在区域的像素点可能没有加权。对已加权的灰度值进行标记,再次遇到该灰度值的像素,则直接跳过。例如,T_i表示灰度值为i的像素数量是否已经加权,T_i=1表示灰度值为i的像素数量已经加权,T_i=0表示灰度值为i的像素数量没有加权。在利用第二公式进行加权之前,将T_i(i=0,1,…,2^L−1)全部初始化为0,在灰度值为i的像素数量乘以权值后,将T_i设置为1,这样就可以保证每个灰度值的像素数量只加权一次。
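上述第二公式的加权过程(每个灰度值只加权一次,按像素排列顺序逐行扫描)可以用如下Python代码示意。需要说明的是,该代码仅为示意性实现,函数名与示例数据均为假设:

```python
# 按第二公式对灰度直方图加权的示意实现:
# 按像素逐行扫描,每个灰度值只加权一次(对应 T_i 标记)
def weight_histogram(h, image, weights):
    weighted = list(h)
    done = [False] * len(h)       # T_i:灰度 i 是否已加权
    for x, row in enumerate(image):
        for y, v in enumerate(row):
            if not done[v]:
                weighted[v] = h[v] * weights[x][y]   # h'_i = h_i × M_i(x,y)
                done[v] = True
    return weighted

img = [[0, 1],
       [1, 2]]
h = [1, 2, 1, 0]          # 原始直方图
M = [[40, 40],            # 目标对象所在区域权值为40
     [1, 1]]              # 非目标对象所在区域权值为1
h_w = weight_histogram(h, img, M)   # [40, 80, 1, 0]
```

可以看到,目标对象所在区域对应的灰度值经加权后像素数量明显变大,而非目标区域保持不变。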
此处,列举两个示例说明如何根据加权后每个亮度的像素数量确定在第一采集图像中目标对象的亮度:
可选地,在第一个示例中,根据加权后的亮度分布序列确定第一采集图像中目标对象 的亮度,包括:在加权后的亮度分布序列中按照第一比例确定目标对象的亮度,小于或等于目标对象的亮度的像素数量占比之和等于第一比例,第一比例小于或等于1。例如,第一比例可以是95%或97.5%等,本申请对此不作限制。参照图4a和图4b,第一比例是从最小亮度向上累加达到的比例。
按照亮度从小到大的顺序,依次将每个亮度的像素数量占比累加得到像素数量占比之和,如果加到亮度为i的像素数量占比时,像素数量占比之和等于第一比例,则将亮度i作为目标对象的亮度。
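第一个示例中按第一比例累加确定目标对象亮度的过程,可以用如下Python代码示意。需要说明的是,该代码仅为示意性实现,函数名与示例数据均为假设:

```python
# 在加权后的亮度分布序列中按第一比例确定目标对象亮度的示意实现
def brightness_by_ratio(weighted_counts, first_ratio=0.975):
    total = sum(weighted_counts)
    acc = 0.0
    for i, c in enumerate(weighted_counts):   # 按亮度从小到大累加占比
        acc += c / total
        if acc >= first_ratio:                # 累加占比首次达到第一比例
            return i
    return len(weighted_counts) - 1

h_w = [10, 20, 60, 10]            # 加权后各亮度的像素数量
b = brightness_by_ratio(h_w, 0.9)  # 累加到亮度2时占比达到0.9
```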
可选地,在第二个示例中,根据加权后的亮度分布序列确定第一采集图像中目标对象的亮度,包括:按照每个亮度的像素数量占比对所有亮度进行加权求和得到目标对象的亮度。
例如,对于亮度i,其像素数量占比为k_i,则目标对象的亮度P计算如下:
P = Σ_{i=0}^{w} k_i × i
其中,w = 2^L−1,i = 0, 1, …, 2^L−1;当然,此处只是示例性说明,并不代表本申请局限于此。
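第二个示例中按各亮度的像素数量占比对所有亮度加权求和的计算,可以用如下Python代码示意。需要说明的是,该代码仅为示意性实现,函数名与示例数据均为假设:

```python
# 按占比加权求和计算目标对象亮度的示意实现
def mean_brightness(counts):
    """counts[i] 为亮度 i 的像素数量,占比 k_i = counts[i]/总数。"""
    total = sum(counts)
    return sum(i * c / total for i, c in enumerate(counts))

counts = [0, 2, 2, 0]          # 亮度1和亮度2各占一半
p = mean_brightness(counts)    # 0.5×1 + 0.5×2 = 1.5
```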
步骤103、根据目标对象的亮度、预设曝光时长以及目标亮度计算目标对象的目标曝光时长。
可选地,在本申请的一个实施例中,根据目标对象的亮度、预设曝光时长以及目标亮度计算目标对象的目标曝光时长,包括:
根据亮度与曝光时长的比例关系对目标对象的亮度、预设曝光时长以及目标亮度计算得到目标对象的目标曝光时长,亮度与曝光时长的比例关系用于指示目标对象的亮度与预设曝光时长之间的比例等于目标亮度与目标曝光时长的比例。
参照图5所示,图5为本申请实施例提供的一种亮度与曝光时间的线性关系示意图。图5中,在亮度小于或等于900时,随着曝光时间的增加,亮度呈线性增长,在亮度大于900时,随着曝光时间的增加,亮度呈非线性增长,而且增长的幅度很小,因此,在亮度大于900时,可以认为图像属于过度曝光,可以将900作为目标亮度,即在目标对象的亮度为900时,图像的显示效果比较清晰。可以根据第四公式计算目标曝光时长,第四公式如下:
dn_short / exp_short = dn_target / exp_target
其中,exp_short表示预设曝光时长,dn_short表示目标对象的亮度,exp_target表示目标曝光时长,dn_target表示目标亮度。步骤101-102计算得到了目标对象的亮度,预设曝光时长是已知的,目标亮度是对目标对象设定的期望达到的亮度,因此,根据第四公式的比例关系,即可求得目标曝光时长 exp_target = exp_short × dn_target / dn_short。
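第四公式的比例换算可以用如下Python代码示意。需要说明的是,该代码仅为示意,其中的变量取值是假设的示例数据:

```python
# 按第四公式的比例关系计算目标曝光时长的示意实现
def target_exposure(dn_short, exp_short, dn_target):
    # dn_short / exp_short = dn_target / exp_target
    return exp_short * dn_target / dn_short

# 假设:预设曝光1ms下目标对象亮度为300,目标亮度为900
exp_t = target_exposure(dn_short=300, exp_short=1.0, dn_target=900)  # 3.0 ms
```

该换算仅在图5所示的线性区间内成立,这也是上文要求第一采集图像亮度落在第一阈值与第二阈值之间的原因。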
可选地,在本申请的一个实施例中,步骤101之前,该方法还包括:在以第一预设曝光时长为曝光时间时对目标对象采集得到的第一图像的最大亮度大于或等于第一阈值时,将第一图像作为第一采集图像;或者,在以第二预设曝光时长为曝光时间时对目标对象采集得到的第二图像的最大亮度小于或等于第二阈值时,将第二图像作为第一采集图像,第一阈值小于第二阈值,第一预设曝光时长小于第二预设曝光时长。结合图5所示的亮度与曝光时间的线性关系,为了保证计算准确,对第一采集图像的亮度可以要求在第一阈值与第二阈值之间,即第一采集图像的最大亮度大于或等于第一阈值,并且小于或等于第二阈值,例如,可以在100-900之间,可以在图像采集后对采集到的图像进行判断,如果采集到的图像不满足 亮度在100-900之间,则重新采集。第一阈值、第二阈值的大小可以根据应用场景灵活配置。
可选地,在本申请的一个实施例中,步骤101之前,该方法还包括:按照每n个像素取1个像素的方式对第一采集图像进行压缩。第一采集图像经过压缩后,计算量大大减小,提高了计算曝光时间的速度。例如,可以采用间隔4行4列的方式提取像素,像素数量减少为原来的1/16,在计算亮度分布序列(DN值直方图/灰度值直方图)时,可以缩减16倍的时间,而且,因为均匀地在每4行4列中提取一个像素,保留了原图像的特征,能够准确计算出目标对象的亮度。
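按每n个像素取1个像素的压缩方式(以n=4为例)可以用如下Python代码示意。需要说明的是,该代码仅为示意性实现:

```python
# 按间隔 step 行 step 列抽取像素对图像进行压缩的示意实现
def subsample(image, step=4):
    return [row[::step] for row in image[::step]]

# 8×8 的示例图像,像素值即其行列序号编码
img = [[r * 8 + c for c in range(8)] for r in range(8)]
small = subsample(img, 4)          # 像素数量缩减为原来的1/16
# small == [[0, 4], [32, 36]]
```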
可选地,本申请实施例提供的曝光时间计算方法可以应用于利用人脸识别进行身份验证的场景中,通常,在该场景中,人脸的位置相对固定,图像中的目标对象所在区域(即人脸所在区域)也相对固定,可以较为准确的计算曝光时间,提高图像采集的质量,有利于更准确的进行人脸识别。当然,此处只是示例性说明,并不代表本申请局限于此。
本申请实施例中,以预设曝光时长对目标对象进行图像采集,得到第一采集图像;根据第一采集图像确定目标对象的亮度;根据目标对象的亮度、预设曝光时长和目标亮度计算目标对象的目标曝光时长,因为确定了目标对象的亮度,就可以根据比例关系确定出目标对象的亮度为目标亮度时的目标曝光时长,使得根据目标对象的曝光时长对目标对象采集得到的图像中,因为曝光时长计算准确,目标对象所在区域的亮度较高,细节显示更加清晰,提高了采集到的图像的质量。
实施例二、
基于上述实施例一所描述的曝光时间计算方法,本申请实施例二提供一种加权矩阵获取方法,对上述实施例一中所涉及的加权矩阵如何获取进行说明,参照图6所示,图6为本申请实施例提供的一种加权矩阵获取方法的流程图,该方法包括以下步骤:
步骤601、获取目标对象的至少一个样本图像。
至少一个样本图像可以是在多个不同的拍摄距离,多个不同的曝光时间下对目标对象进行拍摄得到的。即每个样本图像的拍摄距离可以相同或者不同,每个样本图像的曝光时间可以相同或者不同。
步骤602、确定每个样本图像的目标对象所在区域。
可以对每一个样本图像建立一个矩阵,矩阵大小和样本图像大小相同,一个矩阵元素对应一个样本图像的像素,例如,对于样本图像A,将样本图像A中目标对象所在区域在矩阵对应位置的元素都进行标记,例如,样本图像A的矩阵中与目标对象所在区域位置对应的元素取值为2,与非目标对象所在区域位置对应的元素取值为1。当然,此处只是示例性说明。
步骤603、根据每个像素属于目标对象所在区域的次数确定每个像素的权值,并生成加权矩阵。
结合步骤602,可以将加权矩阵初始化为全1矩阵,即初始化后加权矩阵中每个元素的取值为1,每个元素对应一个位置的像素,例如,对于样本图像A,将加权矩阵中,与目标对象所在区域位置对应的元素数值加1,与非目标对象所在区域位置对应的元素数值不变, 每个样本图像都这样标记,如果矩阵中一个元素,与其位置对应的像素属于目标对象所在区域的概率越高,则该元素数值越大。当然,对于加权矩阵,也可以是,任意一个样本图像的目标对象所在区域对应的元素数值都设定为预设数值,如果一个矩阵元素与任意一个样本图像的目标对象所在区域都不对应,则该矩阵元素数值为1,此处只是示例性说明,并不代表本申请局限于此。如图7所示,图7为本申请实施例提供的一种目标对象所在区域效果示意图。
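步骤601-603生成加权矩阵的过程可以用如下Python代码示意。需要说明的是,该代码仅为示意性实现,其中masks的表示方式(True表示该位置属于目标对象所在区域)及各取值均为假设:

```python
# 根据多个样本图像的目标对象所在区域累加生成加权矩阵的示意实现
def build_weight_matrix(masks, rows, cols):
    W = [[1] * cols for _ in range(rows)]     # 初始化为全1矩阵
    for mask in masks:
        for x in range(rows):
            for y in range(cols):
                if mask[x][y]:
                    W[x][y] += 1              # 属于目标区域一次,数值加1
    return W

# 两个样本图像的目标区域标记(True=目标对象所在区域)
masks = [
    [[True, False], [False, False]],
    [[True, True],  [False, False]],
]
W = build_weight_matrix(masks, 2, 2)   # [[3, 2], [1, 1]]
```

与上文描述一致,一个位置属于目标对象所在区域的次数越多,其权值越大。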
步骤603之后,该方法还可以包括步骤604;
步骤604、根据第二比例计算加权矩阵的权值上限,并根据权值上限对加权矩阵进行量化。
需要说明的是,量化的目的在于将加权矩阵中元素的取值设定在一个预设范围之内。第二比例可以用于确定权值上限,例如,第二比例是2.5%,权值上限可以是1÷2.5%=40,这个式子表示,如果目标对象所在区域最小,即占比为2.5%,将其乘以权值上限40后占比恰好放大为1,相当于DN值直方图全部为目标对象所在区域,从而保证加权后的目标对象所在区域的像素数量占比远远高于其他区域,这样就可以很容易确定目标对象的亮度,当然,此处只是示例性说明。对于第二比例如何计算,在实施例三中进行详细说明。
例如,加权矩阵中最大的元素是80,预设范围是[1,40],则可以将大于1的元素都乘以1/2,使得所有元素的取值都小于或等于40。又如,加权矩阵中最大的元素是80,预设范围是[1,40],可以将小于或等于10的元素全部设定为1,将大于10的元素全部设定为40,当然,此处只是示例性说明,并不代表本申请局限于此。
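步骤604的量化过程可以用如下Python代码示意。需要说明的是,该代码仅为示意,其中采用等比缩放的策略,这只是对“根据权值上限对加权矩阵进行量化”的一种可选理解,对应上文“将大于1的元素都乘以1/2”的示例:

```python
# 根据第二比例计算权值上限并对加权矩阵做等比缩放量化的示意实现
def quantize(W, second_ratio=0.025):
    upper = int(round(1.0 / second_ratio))    # 权值上限,如 1/2.5% = 40
    peak = max(max(row) for row in W)
    scale = upper / peak if peak > upper else 1.0
    # 缩放后保证元素仍落在 [1, upper] 内
    return [[max(1, int(round(v * scale))) for v in row] for row in W], upper

W = [[80, 10], [1, 1]]
Wq, upper = quantize(W, 0.025)    # upper == 40,最大权值80被缩放为40
```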
实施例三、
基于上述实施例一所描述的曝光时间计算方法,本申请实施例三提供一种比例计算方法,对上述实施例一中所涉及的第一比例和第二比例如何计算进行说明,参照图8所示,图8为本申请实施例提供的一种比例计算方法的流程图,该方法包括以下步骤:
步骤801、获取目标对象的第二采集图像。
第二采集图像的拍摄距离大于或等于预设距离。例如,目标对象可以是人脸,预设距离可以是1.2m,预设距离可以是人脸的极限拍摄距离,大于预设距离,则人脸无法识别。
步骤802、在目标对象的第二采集图像中确定目标对象所在区域。
步骤803、根据第二采集图像中目标对象所在区域的像素数量占第二采集图像像素数量的第二比例确定第一比例。
第一比例与第二比例之和大于或等于1,且第一比例大于第二比例。例如,第二采集图像的大小为768×1308,目标对象所在区域大小为165×167,则第二比例为165×167/(768×1308),约为2.74%,则第一比例可以是大于97.26%且小于100%的比例,如97.5%。因为第二采集图像的拍摄距离已经是极限距离,所以第二比例可以认为是目标对象的最小占比,为了减少干扰,可以将DN值直方图中最亮的2.5%像素认定为属于非目标对象所在区域,当然,该比例也可以是2.74%,或者2.6%。
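实施例三中第一比例、第二比例的计算可以用如下Python代码示意。需要说明的是,该代码仅为示意,数值取自上文的示例,且此处取第一比例与第二比例之和恰好等于1,这只是满足“之和大于或等于1”的一种可选取法:

```python
# 由目标对象所在区域在极限拍摄距离下的占比推导第一比例的示意计算
def first_ratio_from_region(region_w, region_h, img_w, img_h):
    second = (region_w * region_h) / (img_w * img_h)   # 第二比例:目标区域像素占比
    first = 1.0 - second                               # 使两者之和等于1
    return first, second

first, second = first_ratio_from_region(165, 167, 768, 1308)
# second ≈ 2.74%,first ≈ 97.26%
```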
实施例四、
基于上述实施例一中所描述的曝光时间计算方法,该方法可以应用于曝光时间计算装置,该曝光时间计算装置可以是摄像装置,例如,红外相机、智能手机、数码相机、平板电脑等电子设备,通常在拍摄图像前计算曝光时间。以目标对象是人脸为例,例如,在高铁站入口,摄像装置可以拍摄人脸图像进行身份验证;又如,住宅单元门上的摄像装置可以拍摄人脸图像进行身份验证。在这些应用场景中,曝光时间计算装置可以具有拍摄功能,本申请实施例提供一种图像采集方法,参照图9所示,图9为本申请实施例提供的一种图像采集方法的流程图,该方法包括以下步骤:
步骤901、判断曝光时间是否为第二预设曝光时长。
结合实施例一中关于第一预设曝光时长、第二预设曝光时长及第一阈值、第二阈值的说明,在本实施例中,第一预设曝光时长为1ms,第一阈值为100,第二预设曝光时长为8ms,第二阈值为900。
在曝光时间为第二预设曝光时长时,执行步骤905,在曝光时间不是第二预设曝光时长时,执行步骤902。
步骤902、以曝光时间为第一预设曝光时长对人脸进行图像采集得到第一图像。
步骤903、判断第一图像的最大亮度是否大于第一阈值。
在第一图像的最大亮度大于第一阈值时,执行步骤908,否则执行步骤904。
步骤904、将曝光时间设定为第二预设曝光时长并返回步骤901。
步骤905、以曝光时间为第二预设曝光时长对人脸进行图像采集得到第二图像。
步骤906、判断第二图像的最大亮度是否小于第二阈值。
在第二图像的最大亮度小于第二阈值时,执行步骤908,否则执行步骤907。
步骤907、将曝光时间设定为第一预设曝光时长并返回步骤901。
结合步骤901-907,本实施例在获取第一采集图像时进行判断,如果是8ms短曝光,判断亮度是否大于阈值900,如果是,则修改短曝光时间为1ms,重新曝一帧图像进行计算;如果是1ms短曝光,判断亮度是否小于阈值100,如果是,则修改短曝光时间为8ms,重新曝一帧图像进行计算。
步骤908、将第一图像作为第一采集图像,获取第一采集图像的DN值直方图。
步骤909、计算人脸的目标曝光时长,根据目标曝光时长设定曝光时间采集人脸图像。
步骤909中,计算人脸的目标曝光时长可以根据实施例一所描述的曝光时间计算方法进行计算,此处不再赘述。
以1ms为曝光时间得到第一采集图像,或者以8ms为曝光时间得到第一采集图像都属于短曝光,利用短曝光的第一采集图像,计算其亮度,根据比例关系及目标亮度,计算得到的曝光时间能够使得人脸的亮度达到目标亮度,人脸细节显示更加清晰。
在室外人脸识别场景中,由于太阳等环境光影响较大,而且在实际应用场景中,人脸所在区域(即目标对象所在区域)具有不确定性,在计算人脸的亮度(即第一采集图像中目标对象的亮度)时,直接选取97.5%占比对应的DN值作为人脸的亮度,与真实人脸区域的亮度偏差较大,计算出的目标曝光时间误差也较大。本申请实施例通过加权矩阵对短曝光图像的DN值直方图分布进行加权,提高人脸区域在DN值直方图中的占比,获取的人脸的亮度更接近第一采集图像中人脸所在区域真实的亮度。在室内人脸识别场景中,因为短曝光得到的第一采集图像噪声过大,也不能准确计算人脸所在区域的亮度。本方案通过动态设置曝光时间,1ms曝光时间适于在室外进行图像采集,8ms曝光时间适于在室内进行图像采集,兼容了室外与室内红外相机的自动曝光方案,使得室外采集的图像不过曝,室内也能达到较优的计算精度。
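上述步骤901-907动态选择短曝光时间的流程可以用如下Python代码示意。需要说明的是,该代码仅为示意性实现,capture接口与场景数据均为假设,阈值与时长取自本实施例:

```python
# 步骤901-907:在1ms与8ms两档短曝光之间动态选择的示意流程
def choose_short_exposure(capture, exp=1.0, max_iters=4):
    """capture(exp) 返回以 exp 毫秒曝光采集到的图像最大亮度(示意接口)。"""
    for _ in range(max_iters):
        max_dn = capture(exp)
        if exp == 1.0 and max_dn <= 100:      # 1ms下过暗,切换到8ms重新采集
            exp = 8.0
        elif exp == 8.0 and max_dn >= 900:    # 8ms下过亮,切换到1ms重新采集
            exp = 1.0
        else:
            return exp, max_dn                # 亮度落在(100,900),图像可用于计算
    return exp, max_dn

# 模拟一个室内场景:1ms下过暗,8ms下亮度合适
fake = {1.0: 60, 8.0: 500}
exp, dn = choose_short_exposure(lambda e: fake[e])   # exp == 8.0
```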
实施例五、
基于上述实施例一至实施例四所描述的方法,本申请实施例提供了一种计算装置,用于执行实施例一至实施例四所描述的方法,如图10所示,该计算装置100包括:图像采集模块1001、亮度确定模块1002及曝光模块1003;
其中,图像采集模块1001,用于以预设曝光时长对目标对象进行图像采集,得到第一采集图像;
亮度确定模块1002,用于根据第一采集图像确定目标对象的亮度;
曝光模块1003,用于根据目标对象的亮度、预设曝光时长和目标亮度计算目标对象的目标曝光时长。
可选地,在本申请的一个实施例中,亮度确定模块1002,具体用于确定第一采集图像的亮度分布序列,根据亮度分布序列确定目标对象的亮度。
可选地,在本申请的一个实施例中,亮度确定模块1002,具体用于根据预设的加权矩阵对亮度分布序列进行加权,加权矩阵用于指示一帧图像中每个像素属于目标对象所在区域的概率;根据加权后的亮度分布序列确定第一采集图像中目标对象的亮度。
可选地,在本申请的一个实施例中,如图11所示,计算装置100还包括矩阵管理模块1004;
矩阵管理模块1004,用于获取目标对象的至少一个样本图像,确定每个样本图像的目标对象所在区域;根据每个像素属于目标对象所在区域的次数确定每个像素的权值,并生成加权矩阵。
可选地,在本申请的一个实施例中,亮度确定模块1002,还用于将亮度分布序列中每个亮度的像素数量占比与加权矩阵中对应的权值相乘得到加权后的亮度分布序列,第一采集图像中每个像素与加权矩阵中相同位置的权值对应,其中,每个亮度的像素数量占比加权一次。
可选地,在本申请的一个实施例中,如图12所示,亮度确定模块1002,还用于在加权后的亮度分布序列中按照第一比例确定目标对象的亮度,小于或等于目标对象的亮度的像素数量占比之和等于第一比例,第一比例小于或等于1。
可选地,在本申请的一个实施例中,如图12所示,计算装置100还包括比例计算模块1005;
比例计算模块1005,还用于在目标对象的第二采集图像中确定目标对象所在区域,第二采集图像的拍摄距离大于或等于预设距离;根据第二采集图像中目标对象所在区域的像素数量占第二采集图像像素数量的第二比例确定第一比例,第一比例与第二比例之和大于或等于1,且第一比例大于第二比例。
可选地,在本申请的一个实施例中,如图13所示,计算装置100还包括量化模块1006;
量化模块1006,还用于根据第二比例计算加权矩阵的权值上限,并根据权值上限对加权矩阵进行量化,第二比例与权值上限的乘积等于100。
可选地,在本申请的一个实施例中,曝光模块1003,还用于根据亮度与曝光时长的比例关系对目标对象的亮度、预设曝光时长以及目标亮度计算得到目标对象的目标曝光时长,亮度与曝光时长的比例关系用于指示目标对象的亮度与预设曝光时长之间的比例等于目标亮度与目标曝光时长的比例。
可选地,在本申请的一个实施例中,如图14所示,计算装置100还包括采集模块1007;
采集模块1007,用于在以第一预设曝光时长为曝光时间时对目标对象采集得到的第一图像的最大亮度大于或等于第一阈值时,将第一图像作为第一采集图像;或者,在以第二预设曝光时长为曝光时间时对目标对象采集得到的第二图像的最大亮度小于或等于第二阈值时,将第二图像作为第一采集图像,第一阈值小于第二阈值。
可选地,在本申请的一个实施例中,如图15所示,计算装置100还包括压缩模块1008;
压缩模块1008,用于按照每n个像素取1个像素的方式对第一采集图像进行压缩。
可选地,在本申请的一个实施例中,亮度分布序列以像元亮度值DN值直方图的形式表现。
可选地,在本申请的一个实施例中,预设曝光时长短于目标曝光时长。
实施例六、
基于上述实施例一至实施例四所描述的方法,本申请实施例提供了一种电子设备,用于执行实施例一至实施例四所描述的方法,如图16所示,本申请实施例提供的电子设备160,包括:至少一个处理器1602;存储器1604,用于存储至少一个程序1606,当至少一个程序1606被至少一个处理器1602执行,使得至少一个处理器1602实现如实施例一至实施例四所描述的方法。
实施例七、
基于上述实施例一至实施例四所描述的方法,本申请实施例提供了一种计算机可读存储介质,其上存储有计算机程序,其特征在于,该程序被处理器执行时实现如实施例一至实施例四所描述的方法。
本申请实施例的计算装置及电子设备以多种形式存在,包括但不限于:
(1)移动通信设备:这类设备的特点是具备移动通信功能,并且以提供话音、数据通信为主要目标。这类终端包括:智能手机(例如iPhone)、多媒体手机、功能性手机,以及低端手机等。
(2)超移动个人计算机设备:这类设备属于个人计算机的范畴,有计算和处理功能,一般也具备移动上网特性。这类终端包括:PDA、MID和UMPC设备等,例如iPad。
(3)便携式娱乐设备:这类设备可以显示和播放多媒体内容。该类设备包括:音频、视频播放器(例如iPod),掌上游戏机,电子书,以及智能玩具和便携式车载导航设备。
(4)服务器:提供计算服务的设备,服务器的构成包括处理器、硬盘、内存、系统总线等,服务器和通用的计算机架构类似,但是由于需要提供高可靠的服务,因此在处理能力、稳定性、可靠性、安全性、可扩展性、可管理性等方面要求较高。
(5)其他具有数据交互功能的电子装置。
至此,已经对本主题的特定实施例进行了描述。其它实施例在所附权利要求书的范围内。在一些情况下,在权利要求书中记载的动作可以按照不同的顺序来执行并且仍然可以实现期望的结果。另外,在附图中描绘的过程不一定要求示出的特定顺序或者连续顺序,以实现期望的结果。在某些实施方式中,多任务处理和并行处理可以是有利的。
在20世纪90年代,对于一个技术的改进可以很明显地区分是硬件上的改进(例如,对二极管、晶体管、开关等电路结构的改进)还是软件上的改进(对于方法流程的改进)。然而,随着技术的发展,当今的很多方法流程的改进已经可以视为硬件电路结构的直接改进。设计人员几乎都通过将改进的方法流程编程到硬件电路中来得到相应的硬件电路结构。因此,不能说一个方法流程的改进就不能用硬件实体模块来实现。例如,可编程逻辑器件(Programmable Logic Device,PLD)(例如现场可编程门阵列(Field Programmable Gate Array,FPGA))就是这样一种集成电路,其逻辑功能由用户对器件编程来确定。由设计人员自行编程来把一个数字系统“集成”在一片PLD上,而不需要请芯片制造厂商来设计和制作专用的集成电路芯片。而且,如今,取代手工地制作集成电路芯片,这种编程也多半改用“逻辑编译器(logic compiler)”软件来实现,它与程序开发撰写时所用的软件编译器相类似,而要编译之前的原始代码也得用特定的编程语言来撰写,此称之为硬件描述语言(Hardware Description Language,HDL),而HDL也并非仅有一种,而是有许多种,如ABEL(Advanced Boolean Expression Language)、AHDL(Altera Hardware Description Language)、Confluence、CUPL(Cornell University Programming Language)、HDCal、JHDL(Java Hardware Description Language)、Lava、Lola、MyHDL、PALASM、RHDL(Ruby Hardware Description Language)等,目前最普遍使用的是VHDL(Very-High-Speed Integrated Circuit Hardware Description Language)与Verilog。本领域技术人员也应该清楚,只需要将方法流程用上述几种硬件描述语言稍作逻辑编程并编程到集成电路中,就可以很容易得到实现该逻辑方法流程的硬件电路。
控制器可以按任何适当的方式实现,例如,控制器可以采取例如微处理器或处理器以及存储可由该(微)处理器执行的计算机可读程序代码(例如软件或固件)的计算机可读介质、逻辑门、开关、专用集成电路(Application Specific Integrated Circuit,ASIC)、可编程逻辑控制器和嵌入微控制器的形式,控制器的例子包括但不限于以下微控制器:ARC625D、Atmel AT91SAM、Microchip PIC18F26K20以及Silicone Labs C8051F320,存储器控制器还可以被实现为存储器的控制逻辑的一部分。本领域技术人员也知道,除了以纯计算机可读程序代码方式实现控制器以外,完全可以通过将方法步骤进行逻辑编程来使得控制器以逻辑门、开关、专用集成电路、可编程逻辑控制器和嵌入微控制器等的形式来实现相同功能。因此这种控制器可以被认为是一种硬件部件,而对其内包括的用于实现各种功能的装置也可以视为硬件部件内的结构。或者甚至,可以将用于实现各种功能的装置视为既可以是实 现方法的软件模块又可以是硬件部件内的结构。
上述实施例阐明的系统、装置、模块或单元,具体可以由计算机芯片或实体实现,或者由具有某种功能的产品来实现。一种典型的实现设备为计算机。具体的,计算机例如可以为个人计算机、膝上型计算机、蜂窝电话、相机电话、智能电话、个人数字助理、媒体播放器、导航设备、电子邮件设备、游戏控制台、平板计算机、可穿戴设备或者这些设备中的任何设备的组合。
为了描述的方便,描述以上装置时以功能分为各种单元分别描述。当然,在实施本申请时可以把各单元的功能在同至少一个软件和/或硬件中实现。
本领域内的技术人员应明白,本申请的实施例可提供为方法、系统、或计算机程序产品。因此,本申请可采用完全硬件实施例、完全软件实施例、或结合软件和硬件方面的实施例的形式。而且,本申请可采用在至少一个其中包含有计算机可用程序代码的计算机可用存储介质(包括但不限于磁盘存储器、CD-ROM、光学存储器等)上实施的计算机程序产品的形式。
本申请是参照根据本申请实施例的方法、设备(系统)、和计算机程序产品的流程图和/或方框图来描述的。应理解可由计算机程序指令实现流程图和/或方框图中的每一流程和/或方框、以及流程图和/或方框图中的流程和/或方框的结合。可提供这些计算机程序指令到通用计算机、专用计算机、嵌入式处理机或其他可编程数据处理设备的处理器以产生一个机器,使得通过计算机或其他可编程数据处理设备的处理器执行的指令产生用于实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能的装置。
这些计算机程序指令也可存储在能引导计算机或其他可编程数据处理设备以特定方式工作的计算机可读存储器中,使得存储在该计算机可读存储器中的指令产生包括指令装置的制造品,该指令装置实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能。
这些计算机程序指令也可装载到计算机或其他可编程数据处理设备上,使得在计算机或其他可编程设备上执行一系列操作步骤以产生计算机实现的处理,从而在计算机或其他可编程设备上执行的指令提供用于实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能的步骤。
在一个典型的配置中,计算设备包括至少一个处理器(CPU)、输入/输出接口、网络接口和内存。
内存可能包括计算机可读介质中的非永久性存储器,随机存取存储器(RAM)和/或非易失性内存等形式,如只读存储器(ROM)或闪存(flash RAM)。内存是计算机可读介质的示例。
计算机可读介质包括永久性和非永久性、可移动和非可移动媒体可以由任何方法或技术来实现信息存储。信息可以是计算机可读指令、数据结构、程序的模块或其他数据。计算机的存储介质的例子包括,但不限于相变内存(PRAM)、静态随机存取存储器(SRAM)、动态随机存取存储器(DRAM)、其他类型的随机存取存储器(RAM)、只读存储器(ROM)、电可擦除可编程只读存储器(EEPROM)、快闪记忆体或其他内存技术、只读光盘只读存储器(CD-ROM)、数字 多功能光盘(DVD)或其他光学存储、磁盒式磁带,磁带磁磁盘存储或其他磁性存储设备或任何其他非传输介质,可用于存储可以被计算设备访问的信息。按照本文中的界定,计算机可读介质不包括暂存电脑可读媒体(transitory media),如调制的数据信号和载波。
还需要说明的是,术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的过程、方法、商品或者设备不仅包括那些要素,而且还包括没有明确列出的其他要素,或者是还包括为这种过程、方法、商品或者设备所固有的要素。在没有更多限制的情况下,由语句“包括一个……”限定的要素,并不排除在包括所述要素的过程、方法、商品或者设备中还存在另外的相同要素。
本领域技术人员应明白,本申请的实施例可提供为方法、系统或计算机程序产品。因此,本申请可采用完全硬件实施例、完全软件实施例或结合软件和硬件方面的实施例的形式。而且,本申请可采用在至少一个其中包含有计算机可用程序代码的计算机可用存储介质(包括但不限于磁盘存储器、CD-ROM、光学存储器等)上实施的计算机程序产品的形式。
本申请可以在由计算机执行的计算机可执行指令的一般上下文中描述,例如程序模块。一般地,程序模块包括执行特定事务或实现特定抽象数据类型的例程、程序、对象、组件、数据结构等等。也可以在分布式计算环境中实践本申请,在这些分布式计算环境中,由通过通信网络而被连接的远程处理设备来执行事务。在分布式计算环境中,程序模块可以位于包括存储设备在内的本地和远程计算机存储介质中。
本说明书中的各个实施例均采用递进的方式描述,各个实施例之间相同相似的部分互相参见即可,每个实施例重点说明的都是与其他实施例的不同之处。尤其,对于系统实施例而言,由于其基本相似于方法实施例,所以描述的比较简单,相关之处参见方法实施例的部分说明即可。
以上所述仅为本申请的实施例而已,并不用于限制本申请。对于本领域技术人员来说,本申请可以有各种更改和变化。凡在本申请的精神和原理之内所作的任何修改、等同替换、改进等,均应包含在本申请的权利要求范围之内。

Claims (28)

  1. 一种曝光时间计算方法,其特征在于,包括:
    以预设曝光时长对目标对象进行图像采集,得到第一采集图像;
    根据所述第一采集图像确定所述目标对象的亮度;
    根据所述目标对象的亮度、所述预设曝光时长和目标亮度计算所述目标对象的目标曝光时长。
  2. 根据权利要求1所述的方法,其特征在于,根据所述第一采集图像确定所述目标对象的亮度,包括:
    确定所述第一采集图像的亮度分布序列,根据所述亮度分布序列确定所述目标对象的亮度。
  3. 根据权利要求2所述的方法,其特征在于,根据所述亮度分布序列确定所述目标对象的亮度,包括:
    根据预设的加权矩阵对所述亮度分布序列进行加权,所述加权矩阵用于指示一帧图像中每个像素属于所述目标对象所在区域的概率;
    根据加权后的所述亮度分布序列确定所述第一采集图像中所述目标对象的亮度。
  4. 根据权利要求3所述的方法,其特征在于,所述方法还包括:
    获取所述目标对象的至少一个样本图像;
    确定每个样本图像的所述目标对象所在区域;根据每个像素属于所述目标对象所在区域的次数确定每个像素的权值,并生成所述加权矩阵。
  5. 根据权利要求3所述的方法,其特征在于,根据预设的加权矩阵对所述亮度分布序列进行加权,包括:
    将所述亮度分布序列中每个亮度的像素数量占比与所述加权矩阵中对应的权值相乘得到加权后的所述亮度分布序列,所述第一采集图像中每个像素与所述加权矩阵中相同位置的权值对应。
  6. 根据权利要求3所述的方法,其特征在于,根据加权后的所述亮度分布序列确定所述第一采集图像中所述目标对象的亮度,包括:
    在加权后的所述亮度分布序列中按照第一比例确定所述目标对象的亮度,小于或等于所述目标对象的亮度的像素数量占比之和等于所述第一比例,所述第一比例小于或等于1。
  7. 根据权利要求6所述的方法,其特征在于,所述方法还包括:
    在所述目标对象的第二采集图像中确定目标对象所在区域,所述第二采集图像的拍摄距离大于或等于预设距离;
    根据所述第二采集图像中所述目标对象所在区域的像素数量占所述第二采集图像像素数量的第二比例确定所述第一比例,所述第一比例与所述第二比例之和大于或等于1,且所述第一比例大于所述第二比例。
  8. 根据权利要求7所述的方法,其特征在于,所述方法还包括:
    根据所述第二比例计算所述加权矩阵的权值上限,并根据所述权值上限对所述加权矩阵进行量化,所述第二比例与所述权值上限的乘积等于100。
  9. 根据权利要求1所述的方法,其特征在于,根据所述目标对象的亮度、所述预设曝光时长以及目标亮度计算所述目标对象的目标曝光时长,包括:
    根据亮度与曝光时长的比例关系对所述目标对象的亮度、所述预设曝光时长以及目标亮度计算得到所述目标对象的目标曝光时长,所述亮度与曝光时长的比例关系用于指示所述目标对象的亮度与所述预设曝光时长之间的比例等于所述目标亮度与所述目标曝光时长的比例。
  10. 根据权利要求1所述的方法,其特征在于,所述方法还包括:
    在以第一预设曝光时长为曝光时间时对所述目标对象采集得到的第一图像的最大亮度大于或等于第一阈值时,将所述第一图像作为所述第一采集图像;
    或者,在以第二预设曝光时长为曝光时间时对所述目标对象采集得到的第二图像的最大亮度小于或等于第二阈值时,将所述第二图像作为所述第一采集图像,所述第一阈值小于所述第二阈值。
  11. 根据权利要求1所述的方法,其特征在于,所述方法还包括:
    按照每n个像素取1个像素的方式对所述第一采集图像进行压缩。
  12. 根据权利要求2所述的方法,其特征在于,所述亮度分布序列以像元亮度值DN值直方图的形式表现。
  13. 根据权利要求1-12任一项所述的方法,其特征在于,所述预设曝光时长短于所述目标曝光时长。
  14. 一种计算装置,其特征在于,包括:图像采集模块、亮度确定模块及曝光模块;
    其中,所述图像采集模块,用于以预设曝光时长对目标对象进行图像采集,得到第一采集图像;
    所述亮度确定模块,用于根据所述第一采集图像确定所述目标对象的亮度;
    所述曝光模块,用于根据所述目标对象的亮度、所述预设曝光时长和目标亮度计算所述目标对象的目标曝光时长。
  15. 根据权利要求14所述的装置,其特征在于,
    所述亮度确定模块,具体用于确定所述第一采集图像的亮度分布序列,根据所述亮度分布序列确定所述目标对象的亮度。
  16. 根据权利要求15所述的装置,其特征在于,
    所述亮度确定模块,具体用于根据预设的加权矩阵对所述亮度分布序列进行加权,所述加权矩阵用于指示一帧图像中每个像素属于所述目标对象所在区域的概率;根据加权后的所述亮度分布序列确定所述第一采集图像中所述目标对象的亮度。
  17. 根据权利要求16所述的装置,其特征在于,所述计算装置还包括矩阵管理模块;
    所述矩阵管理模块,用于获取所述目标对象的至少一个样本图像;确定每个样本图像的目标对象所在区域;根据每个像素属于所述目标对象所在区域的次数确定每个像素的权值,并生成所述加权矩阵。
  18. 根据权利要求16所述的装置,其特征在于,
    所述亮度确定模块,还用于将所述亮度分布序列中每个亮度的像素数量占比与所述加权矩阵中对应的权值相乘得到加权后的所述亮度分布序列,所述第一采集图像中每个像素与所述加权矩阵中相同位置的权值对应,其中,每个亮度的像素数量占比加权一次。
  19. 根据权利要求16所述的装置,其特征在于,
    所述亮度确定模块,还用于在加权后的所述亮度分布序列中按照第一比例确定所述目标对象的亮度,小于或等于所述目标对象的亮度的像素数量占比之和等于所述第一比例,所述第一比例小于或等于1。
  20. 根据权利要求19所述的装置,其特征在于,所述计算装置还包括比例计算模块;
    所述比例计算模块,还用于在目标对象的第二采集图像中确定目标对象所在区域,所述第二采集图像的拍摄距离大于或等于预设距离;根据所述第二采集图像中所述目标对象所在区域的像素数量占所述第二采集图像像素数量的第二比例确定所述第一比例,所述第一比例与所述第二比例之和大于或等于1,且所述第一比例大于所述第二比例。
  21. 根据权利要求20所述的装置,其特征在于,所述计算装置还包括量化模块;
    所述量化模块,还用于根据所述第二比例计算所述加权矩阵的权值上限,并根据所述权值上限对所述加权矩阵进行量化,所述第二比例与所述权值上限的乘积等于100。
  22. 根据权利要求14所述的装置,其特征在于,
    所述曝光模块,还用于根据亮度与曝光时长的比例关系对所述目标对象的亮度、所述预设曝光时长以及目标亮度计算得到所述目标对象的目标曝光时长,所述亮度与曝光时长的比例关系用于指示所述目标对象的亮度与所述预设曝光时长之间的比例等于所述目标亮度与所述目标曝光时长的比例。
  23. 根据权利要求14所述的装置,其特征在于,所述计算装置还包括采集模块;
    所述采集模块,用于在以第一预设曝光时长为曝光时间时对所述目标对象采集得到的第一图像的最大亮度大于或等于第一阈值时,将所述第一图像作为所述第一采集图像;或者,在以第二预设曝光时长为曝光时间时对所述目标对象采集得到的第二图像的最大亮度小于或等于第二阈值时,将所述第二图像作为所述第一采集图像,所述第一阈值小于所述第二阈值。
  24. 根据权利要求14所述的装置,其特征在于,所述计算装置还包括压缩模块;
    所述压缩模块,用于按照每n个像素取1个像素的方式对所述第一采集图像进行压缩。
  25. 根据权利要求15所述的装置,其特征在于,所述亮度分布序列以像元亮度值DN值直方图的形式表现。
  26. 根据权利要求14-25任一项所述的装置,其特征在于,所述预设曝光时长短于所述目标曝光时长。
  27. 一种电子设备,包括:
    至少一个处理器;
    存储装置,用于存储至少一个程序,
    当所述至少一个程序被所述至少一个处理器执行,使得所述至少一个处理器实现如权利 要求1-13任一所述的方法。
  28. 一种计算机可读存储介质,其上存储有计算机程序,其特征在于,该程序被处理器执行时实现如权利要求1-13任一所述的方法。
PCT/CN2019/105156 2019-09-10 2019-09-10 曝光时间计算方法、装置及存储介质 WO2021046715A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2019/105156 WO2021046715A1 (zh) 2019-09-10 2019-09-10 曝光时间计算方法、装置及存储介质
CN201980001903.2A CN110731078B (zh) 2019-09-10 2019-09-10 曝光时间计算方法、装置及存储介质

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/105156 WO2021046715A1 (zh) 2019-09-10 2019-09-10 曝光时间计算方法、装置及存储介质

Publications (1)

Publication Number Publication Date
WO2021046715A1 true WO2021046715A1 (zh) 2021-03-18

Family

ID=69226468

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/105156 WO2021046715A1 (zh) 2019-09-10 2019-09-10 曝光时间计算方法、装置及存储介质

Country Status (2)

Country Link
CN (1) CN110731078B (zh)
WO (1) WO2021046715A1 (zh)


Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111611881B (zh) * 2020-04-30 2023-10-27 深圳阜时科技有限公司 生物特征采集装置和电子设备
CN113824892B (zh) * 2020-06-19 2023-11-07 浙江宇视科技有限公司 图像采集方法、装置、设备及存储介质
CN111970463B (zh) * 2020-08-24 2022-05-03 浙江大华技术股份有限公司 光圈的校正方法及装置、存储介质和电子装置
CN114007020B (zh) * 2021-10-12 2022-11-29 深圳创维-Rgb电子有限公司 图像处理方法、装置、智能终端及计算机可读存储介质
CN114710626B (zh) * 2022-03-07 2024-05-14 北京千方科技股份有限公司 图像采集的方法、装置、电子设备及介质
CN116107636B (zh) * 2023-04-06 2023-06-27 之江实验室 一种硬件加速方法、装置、存储介质及电子设备

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070002163A1 (en) * 2005-06-29 2007-01-04 Dariusz Madej Imager settings
CN102523386A (zh) * 2011-12-16 2012-06-27 中国科学院西安光学精密机械研究所 基于直方图均衡化的自动曝光方法
CN102694981A (zh) * 2012-05-11 2012-09-26 中国科学院西安光学精密机械研究所 基于自适应阈值分割的直方图均衡化的自动曝光方法
CN104184958A (zh) * 2014-09-17 2014-12-03 中国科学院光电技术研究所 一种适用于空间探测成像的基于fpga的自动曝光控制方法及其装置
CN104580925A (zh) * 2014-12-31 2015-04-29 安科智慧城市技术(中国)有限公司 一种控制图像亮度的方法、装置及摄像机
CN104917975A (zh) * 2015-06-01 2015-09-16 北京空间机电研究所 一种基于目标特征的自适应自动曝光方法

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2014035444A (ja) * 2012-08-08 2014-02-24 Nikon Corp 撮影装置
KR20150109177A (ko) * 2014-03-19 2015-10-01 삼성전자주식회사 촬영 장치, 그 제어 방법, 및 컴퓨터 판독가능 기록매체
CN105827995B (zh) * 2016-03-30 2018-03-30 深圳金三立视频科技股份有限公司 基于直方图的自动曝光方法及系统
CN108206918B (zh) * 2016-12-19 2020-07-03 杭州海康威视数字技术股份有限公司 一种光补偿方法及装置
CN108335272B (zh) * 2018-01-31 2021-10-08 青岛海信移动通信技术股份有限公司 一种拍摄图片的方法及设备
CN109218628B (zh) * 2018-09-20 2020-12-08 Oppo广东移动通信有限公司 图像处理方法、装置、电子设备及存储介质


Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113554458A (zh) * 2021-07-13 2021-10-26 北京奇艺世纪科技有限公司 一种对象推送方法和装置、电子设备和存储介质
CN113554458B (zh) * 2021-07-13 2023-09-01 北京奇艺世纪科技有限公司 一种对象推送方法和装置、电子设备和存储介质
CN114862722A (zh) * 2022-05-26 2022-08-05 广州市保伦电子有限公司 一种图像亮度增强实现方法及处理终端
CN115297267A (zh) * 2022-06-17 2022-11-04 北京极豪科技有限公司 一种用于校准图像采集模组曝光时长的方法以及装置
CN115297267B (zh) * 2022-06-17 2023-06-30 天津极豪科技有限公司 一种用于校准图像采集模组曝光时长的方法以及装置
CN116993653A (zh) * 2022-09-28 2023-11-03 腾讯科技(深圳)有限公司 相机镜头缺陷检测方法、装置、设备、存储介质及产品

Also Published As

Publication number Publication date
CN110731078B (zh) 2021-10-22
CN110731078A (zh) 2020-01-24


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19944782

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19944782

Country of ref document: EP

Kind code of ref document: A1