WO2019022728A1 - Evaluation of dynamic ranges of imaging devices - Google Patents

Evaluation of dynamic ranges of imaging devices

Info

Publication number
WO2019022728A1
Authority
WO
WIPO (PCT)
Prior art keywords
snr
roi
dynamic range
image
rois
Prior art date
Application number
PCT/US2017/043850
Other languages
French (fr)
Inventor
Yow-Wei CHENG
Emily Ann MIGINNIS
Original Assignee
Hewlett-Packard Development Company, L.P.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hewlett-Packard Development Company, L.P. filed Critical Hewlett-Packard Development Company, L.P.
Priority to EP17918930.3A priority Critical patent/EP3659333A4/en
Priority to US16/634,106 priority patent/US10958899B2/en
Priority to CN201780095351.7A priority patent/CN111213372B/en
Priority to PCT/US2017/043850 priority patent/WO2019022728A1/en
Publication of WO2019022728A1 publication Critical patent/WO2019022728A1/en

Links

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N17/00Diagnosis, testing or measuring for television systems or their details
    • H04N17/002Diagnosis, testing or measuring for television systems or their details for television cameras
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70Circuitry for compensating brightness variation in the scene
    • H04N23/741Circuitry for compensating brightness variation in the scene by increasing the dynamic range of the image compared to the dynamic range of the electronic image sensors

Definitions

  • Dynamic range of an imaging device indicates a sensitivity of the imaging device.
  • Dynamic range of an imaging device illustrates the difference between a maximum and a minimum strength of signals that are detectable by the imaging device.
  • FIG. 1 illustrates a computing device for evaluating a dynamic range of an imaging device, according to an example
  • FIG. 2A illustrates a computing device for evaluating a dynamic range of an imaging device, according to an example
  • FIG. 2B illustrates a target image for evaluating a dynamic range of an imaging device, according to an example
  • FIG. 3 illustrates a system environment implementing a non-transitory computer readable medium for evaluating a dynamic range of an imaging device, according to an example.
  • dynamic range of an imaging device is measured to determine whether the imaging device meets imaging quality demands. Measurement of the dynamic range of the imaging device provides different outcomes when the imaging device is tested as a stand-alone unit compared with when the imaging device is tested as a part of a system. For example, when an imaging device is mounted in a system, an enclosure of the system may increase heat and/or reduce light transmission due to optical signal decay resulting from a material surface positioned before the imaging device. This may introduce noise and consequently may degrade the dynamic range of the imaging device.
  • the dynamic range is generally evaluated within a testing facility.
  • the imaging device may be mounted on a stand such that the imaging device is able to view a target.
  • the imaging device captures a target image at least 10 times and the dynamic range is computed for every target image.
  • the target image is segmented into no less than 25 grayscale patterns to evaluate the dynamic range.
  • the target image is designed to be able to accommodate the 25 grayscale patterns, whereby the size of the target image increases.
  • a target image with increased size may increase the cost of the testing facility.
  • the target image has to be uniformly illuminated. Providing uniform illumination to the target image may add to the cost of the testing facility.
  • analysis of at least 25 grayscale patterns may make dynamic range computations complex and time consuming, especially when the computations have to be performed at least 10 times.
  • the present subject matter describes techniques for evaluating the dynamic range of an imaging device.
  • the techniques of the present subject matter facilitate evaluating a dynamic range of the imaging device, such as a camera or a scanner, in a faster and more accurate manner.
  • the imaging device may be associated with multiple fiducials.
  • a fiducial may be an object placed in a field of view of the imaging device and which appears in the produced target image (hereinafter referred to as an image), for use as a point of reference.
  • the image captured by the imaging device is processed to identify the fiducials within the image.
  • a first, a second, and a third pre-defined region of interest (ROI) are extracted from the image.
  • the ROIs may be specific grayscale patterns defined within the image.
  • a first signal-to-noise ratio (SNR), a second SNR, and a third SNR are calculated for the first, the second, and the third pre-defined ROIs, respectively.
  • the variance is computed for the first SNR with respect to the second SNR and the third SNR, the second SNR with respect to the third SNR and the first SNR, and the third SNR with respect to the first SNR and the second SNR.
  • a pre-defined weight is associated with the first SNR, the second SNR, and the third SNR to obtain weighted SNR values.
  • the pre-defined weight may be a value to be associated with the first SNR, the second SNR, and the third SNR. These weighted SNR values are utilized to perform a linear regression to evaluate the dynamic range of the imaging device.
  • the dynamic range so evaluated is compared with a pre-defined set of dynamic ranges to determine whether the dynamic range of the imaging device meets the imaging quality demands.
  • the above-described procedure of capturing the image and performing the computations may be repeated up to three times to ascertain the dynamic range of the imaging device.
  • the present subject matter further facilitates evaluating the dynamic range of the imaging device by segmenting the target image into fewer grayscale patterns. As a result, the computations involved in the evaluation of the dynamic range are reduced. Accordingly, the techniques of the present subject matter are cost-efficient and save computation time.
  • FIG. 1 illustrates a computing device 100 for evaluating a dynamic range of an imaging device (not shown), according to an example.
  • the computing device 100 may include a laptop, a smartphone, a tablet, a notebook computer, and the like.
  • the imaging device may include, but is not limited to, a camera and a scanner. The imaging device may be able to capture, store, and manipulate an image.
  • the computing device 100 may include a processor and a memory coupled to the processor.
  • the processor may include microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, and/or any other devices that manipulate signals and data based on computer-readable instructions.
  • functions of the various elements shown in the figures, including any functional blocks labeled as "processor(s)", may be provided through the use of dedicated hardware as well as hardware capable of executing computer-readable instructions.
  • the memory communicatively coupled to the processor, can include any non-transitory computer-readable medium known in the art including, for example, volatile memory, such as static random access memory (SRAM) and dynamic random access memory (DRAM), and/or non-volatile memory, such as read only memory (ROM), erasable programmable ROM, flash memories, hard disks, optical disks, and magnetic tapes.
  • the computing device 100 may also include interface(s).
  • the interface(s) may include a variety of interfaces, for example, interfaces for users.
  • the interface(s) may include data output devices.
  • the interface(s) facilitate the communication of the computing device 100 with various communication and computing devices and various communication networks, such as networks that use a variety of protocols, for example, Real Time Streaming Protocol (RTSP), Hypertext Transfer Protocol (HTTP) Live Streaming (HLS), and Real-time Transport Protocol (RTP).
  • the computing device 100 may include a dynamic range evaluation engine 102 (hereinafter referred to as the evaluation engine 102).
  • the evaluation engine 102 includes routines, programs, objects, components, and data structures, which perform particular tasks or implement particular abstract data types.
  • the evaluation engine 102 may also be implemented as, signal processor(s), state machine(s), logic circuitries, and/or any other device or component that manipulates signals based on operational instructions.
  • the evaluation engine 102 can be implemented by hardware, by computer-readable instructions executed by a processing unit, or by a combination thereof.
  • the evaluation engine 102 may include programs or coded instructions that supplement the applications or functions performed by the computing device 100.
  • the computing device 100 may include data.
  • the data may include region of interest (ROI) data, variance data, dynamic range data, and other data.
  • the other data may include data generated and saved by the evaluation engine 102 for implementing various functionalities of the computing device 100.
  • the evaluation engine 102 receives an image captured by the imaging device.
  • the evaluation engine 102 processes the image to extract a plurality of pre-defined ROIs from the image.
  • the plurality of pre-defined ROIs includes a first pre-defined ROI, a second pre-defined ROI, and a third predefined ROI.
  • the plurality of ROIs is defined at the time of designing the image. For instance, the first, the second, and the third ROIs are defined such that the first ROI, the second ROI, and the third ROI have different optical densities.
  • although the present subject matter is explained with reference to the first ROI, the second ROI, and the third ROI, the number of the pre-defined ROIs may vary based on the size of the image and imaging quality demands.
  • the imaging device may be associated with fiducials such that the fiducials facilitate determining a location of the first, the second, and the third pre-defined ROIs in the image. Therefore, to extract the first, the second, and the third pre-defined ROIs, the evaluation engine 102 first identifies a location of the fiducials in the image. To do so, the evaluation engine 102 segments the image into a plurality of black and white pixels based on a thresholding approach. Details pertaining to the thresholding approach are described in conjunction with FIGS. 2A and 2B. Segmentation of the image based on the thresholding approach results in the fiducials being represented by the black pixels in the image.
  • the evaluation engine 102 may perform noise cancellation on the image to identify the black pixels representing the fiducials in an accurate manner. Upon identification of the black pixels, the evaluation engine 102 extracts absolute coordinates of the black pixels. For instance, the absolute coordinates are extracted from a centroid of the black pixels.
  • based on the absolute coordinates of the black pixels, the evaluation engine 102 obtains absolute coordinates of the first, the second, and the third pre-defined ROIs. In an example, the evaluation engine 102 obtains the absolute coordinates of the first, the second, and the third pre-defined ROIs by performing an inverse operation of a keystone correction technique on the absolute coordinates of the black pixels. Details pertaining to the inverse operation of the keystone correction technique are described in conjunction with FIG. 2A. In an example, to obtain the absolute coordinates, the evaluation engine 102 obtains relative coordinates of the first, the second, and the third pre-defined ROIs, such as from the memory of the computing device 100.
  • the evaluation engine 102 utilizes the relative coordinates of the first, the second, and the third pre-defined ROIs and the absolute coordinates of the black pixels representing the fiducials to obtain the absolute coordinates of the first, the second, and the third pre-defined ROIs. Accordingly, the evaluation engine 102 extracts the first, the second, and the third pre-defined ROIs from the image.
  • the evaluation engine 102 calculates a first signal-to-noise ratio (SNR) for the first pre-defined ROI, a second SNR for the second pre-defined ROI, and a third SNR for the third pre-defined ROI.
  • the first SNR, the second SNR, and the third SNR are calculated based on a luminance channel associated with the first pre-defined ROI, the second pre-defined ROI, and the third pre-defined ROI.
  • the SNR may be calculated as: SNR = L*mean / L*stdev, where L* is a luminance channel associated with the first pre-defined ROI, L*mean is an average L* computed based on a size of the first pre-defined ROI, and L*stdev is the standard deviation of the L* for the first pre-defined ROI.
  • the SNR is based on, but is not limited to, a luminance of the image.
  • the evaluation engine 102 may also compute the first SNR, the second SNR, and the third SNR based on other channel parameters, such as red, blue, or green channels of the image.
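The SNR calculation described above can be sketched as follows, assuming the ratio form SNR = L*mean / L*stdev (the function and argument names are illustrative, not from the patent):

```python
import numpy as np

def roi_snr(roi_luminance):
    """SNR of a pre-defined ROI, taken here as mean(L*) / stdev(L*).

    `roi_luminance` is a 2-D array of luminance (L*) values covering the
    ROI; a red, green, or blue channel could be substituted, as the text
    notes the SNR is not limited to luminance.
    """
    l_mean = float(np.mean(roi_luminance))
    l_stdev = float(np.std(roi_luminance))
    return l_mean / l_stdev
```

A uniform ROI (small standard deviation) yields a high SNR; a noisy ROI yields a low one.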
  • the evaluation engine 102 computes a variance of the first SNR with respect to the second SNR and the third SNR, the second SNR with respect to the third SNR and the first SNR, and the third SNR with respect to the first SNR and the second SNR.
  • the evaluation engine 102 performs a linear regression of the first SNR, the second SNR, and the third SNR to compute the variance.
  • the evaluation engine 102 computes the variance between the first SNR, the second SNR, and the third SNR by calculating chain pair distances between the first SNR, the second SNR, and the third SNR.
  • the evaluation engine 102 stores the data pertaining to the variance of the first SNR, the second SNR, and the third SNR as the variance data in the memory of the computing device 100.
  • the evaluation engine 102 associates a predefined weight with the first SNR, the second SNR, and the third SNR to obtain weighted SNR values for the first, the second, and the third pre-defined ROIs.
  • the pre-defined weight may be either 0 or 1.
  • a weight of 0 is assigned to the first SNR. Assigning a weight of 0 indicates that the first SNR is considered as noise and is discarded from further computation.
  • the second SNR and the third SNR, having less variation, are associated with the weight of 1.
  • the evaluation engine 102 performs a linear regression on the weighted SNR values. Based on the linear regression, the evaluation engine 102 identifies the dynamic range of the imaging device. Thereafter, the evaluation engine 102 compares the dynamic range of the imaging device with a pre-defined set of dynamic ranges. If the dynamic range of the imaging device falls within the pre-defined set of dynamic ranges, the evaluation engine 102 generates a success report for the imaging device. The success report indicates that the imaging device meets the imaging quality demands. If the dynamic range of the imaging device does not fall within the pre-defined set of dynamic ranges, the evaluation engine 102 generates a failure report for the imaging device. The failure report indicates that the imaging device does not meet the imaging quality demands. In an example, the evaluation engine 102 stores the dynamic range of the imaging device and the pre-defined set of dynamic ranges as the dynamic range data in the memory of the computing device 100.
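The weighting, regression, and pass/fail steps above can be sketched as follows. The patent does not spell out the regression target, so this sketch makes the assumption that the weighted SNR values are regressed against the optical densities of their ROIs and that the dynamic range is read off where the fitted line decays to an SNR floor; all names and the `snr_floor` parameter are hypothetical:

```python
import numpy as np

def evaluate_dynamic_range(densities, snrs, weights, snr_floor=1.0):
    """Fit a line of SNR against optical density using only the SNR
    values weighted 1, then report the density at which the fitted SNR
    reaches `snr_floor`, taken here as the dynamic range (assumption)."""
    d = np.asarray(densities, dtype=float)
    s = np.asarray(snrs, dtype=float)
    keep = np.asarray(weights, dtype=float) > 0  # drop noise-weighted SNRs
    slope, intercept = np.polyfit(d[keep], s[keep], 1)
    return (snr_floor - intercept) / slope

def dynamic_range_report(dynamic_range, accepted_min, accepted_max):
    """Success/failure report against a pre-defined set of dynamic ranges."""
    ok = accepted_min <= dynamic_range <= accepted_max
    return "success" if ok else "failure"
```

With three ROIs at densities 0, 1, and 2 producing SNRs 9, 5, and 1, the fitted line reaches an SNR of 1 at density 2.0, which would then be compared against the accepted range.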
  • the above-described evaluation procedure may be repeated for three target images to confirm the dynamic range of the imaging device. Accordingly, the computations can be performed more quickly. Further, as the present subject matter involves performing computations on a smaller set of ROIs, such as the first pre-defined ROI and the second pre-defined ROI, the computations are less complex. Furthermore, a testing facility employed for the present subject matter involves a target image segmented into a smaller number of grayscale patterns; accordingly, the present subject matter provides a cost-efficient technique for evaluating the dynamic range of the imaging device.
  • FIG. 2A illustrates a computing device 200 for evaluating a dynamic range of an imaging device 202
  • FIG. 2B illustrates a target image 204 for evaluating a dynamic range of the imaging device 202, according to an example.
  • FIGS. 2A and 2B are described together.
  • the computing device 200 includes the imaging device 202 for capturing the target image 204 (as shown in FIG. 2B) (hereinafter referred to as the image 204).
  • the computing device 200 further includes a dynamic range evaluation engine 206 (hereinafter referred to as the evaluation engine 206).
  • the computing device 200 and the evaluation engine 206 are the same as the computing device 100 and the evaluation engine 102, respectively, described in FIG. 1.
  • the imaging device 202 may include, but is not limited to, a camera, a scanner, and a webcam. Although the imaging device 202 is shown to be integrated within the computing device 200, the imaging device 202 may be located externally to the computing device 200. The imaging device 202 may be coupled to the computing device 200 through a communication link. The communication link may be a wireless or a wired communication link. Further, the imaging device 202 is associated with fiducials 208. In an example, the fiducials 208 may be marks on the imaging device 202 or may be an external component placed in a field of view of the imaging device 202.
  • the fiducials 208 facilitate determining a location of a plurality of pre-defined regions of interest (ROIs) 210-1, 210-2, ..., 210-N captured in the image 204.
  • the plurality of pre-defined ROIs is collectively referred to as pre-defined ROIs 210.
  • the pre-defined ROIs 210 are specific grayscale patterns defined within the image 204.
  • the pre-defined ROIs 210 may include a first pre-defined ROI 210-1, a second pre-defined ROI 210-2, and a third pre-defined ROI 210-3.
  • four fiducials may be used for determining the location of the pre-defined ROIs 210 in the image 204.
  • the number of fiducials 208 may be increased or decreased based on a level of accuracy desired from the image 204.
  • the fiducials 208 may be used to determine how accurately coordinates of the pre-defined ROIs are identified.
  • two fiducials may provide x and y coordinates
  • three fiducials may be used to perform an Affine transformation
  • four fiducials may facilitate in performing a Projective transformation
  • more fiducials may facilitate in lens distortion correction, etc.
  • the plurality of pre-defined ROIs 210 is defined at the time of designing the image 204, and relative coordinates of the pre-defined ROIs 210 are identified.
  • the image 204 may include twelve predefined ROIs 210.
  • the evaluation engine 206 processes the image 204 to extract the twelve pre-defined ROIs 210.
  • the twelve ROIs are defined based on a set of optical densities. For instance, to select the twelve ROIs, a graph is plotted between the set of optical densities along the Y axis and the SNR associated with corresponding optical densities from the set of optical densities along the X axis. If the graph illustrates a smooth curve, regions corresponding to the set of optical densities are selected as ROIs.
  • if the curve is not smooth, the optical densities, from the set of optical densities, that lie outside the curve are identified and replaced with higher or lower optical densities. Thereafter, the curve is again plotted to confirm a smooth decay in the curve.
  • the above-described selection technique is implemented on a set of imaging devices to obtain an unbiased set of results.
  • the ROIs are defined in the image 204 which are then used for evaluation of the dynamic range.
  • the evaluation engine 206 stores data pertaining to the pre-defined ROIs 210, such as the number of ROIs, the relative coordinates of the ROIs, and the optical densities, as the ROI data in a memory of the computing device 200.
  • to extract the pre-defined ROIs 210 from the image 204, the evaluation engine 206 first extracts the fiducials 208 associated with the image 204. As mentioned earlier, four fiducials may be used for determining the location of the pre-defined ROIs 210 in the image 204. In operation, upon receiving the image 204, the evaluation engine 206 processes the image 204 to identify the fiducials 208. For example, the image 204 is processed through a thresholding approach in which the image 204 is segmented into a plurality of pixels. Thereafter, each of the plurality of pixels is replaced with a black pixel or a white pixel based on a grey value of the pixel. The grey value indicates the brightness of a pixel.
  • each grey value ranges from 0-255.
  • a threshold value of 128 is defined. If the grey value of a pixel exceeds 128, the pixel is replaced with a black pixel. If the grey value of a pixel is below 128, the pixel is replaced with a white pixel.
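The thresholding step can be sketched as follows, following the convention stated above in which grey values above the threshold map to black (the function name and the use of numpy are illustrative):

```python
import numpy as np

def threshold_image(gray, thresh=128):
    """Segment an 8-bit grayscale image into black (0) and white (255)
    pixels using the rule described above: grey values above `thresh`
    become black, and values at or below it become white."""
    return np.where(gray > thresh, 0, 255).astype(np.uint8)
```

After this step, noise removal and the size of the fiducials can be used to pick out which black-pixel regions actually represent fiducials.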
  • the evaluation engine 206 processes the image 204 to remove noise from the image 204 to identify the black pixels 212 representing the fiducials 208.
  • the evaluation engine 206 performs noise removal techniques on the image 204.
  • the noise removal techniques may include, but are not limited to, dilation, erosion, and Gaussian blur.
  • the evaluation engine 206 may also take into consideration a size of the fiducials 208 to identify the black pixels 212 representing the fiducials 208 in the image 204.
  • upon identification of the black pixels 212, the evaluation engine 206 extracts absolute coordinates of the black pixels 212.
  • the fiducials 208 may be circular in shape. Accordingly, the evaluation engine 206 extracts the absolute coordinates of the fiducials 208 from a centroid of the black pixels 212.
  • the fiducials 208 may be of any shape, such as rectangles, triangles, and bars. In case of rectangles, the evaluation engine 206 may extract the absolute coordinates of the fiducials 208 with respect to corners of the black pixels.
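For circular fiducials, extracting the absolute coordinates from the centroid of the black pixels might look like this minimal sketch (names are illustrative):

```python
import numpy as np

def fiducial_centroid(binary):
    """Absolute (x, y) coordinates of a fiducial, taken as the centroid
    of the black pixels (value 0) in a thresholded binary image."""
    ys, xs = np.nonzero(binary == 0)  # row/column indices of black pixels
    return float(xs.mean()), float(ys.mean())
```

In practice this would be applied per connected black-pixel region after noise removal; for rectangular fiducials, corner coordinates would be used instead, as noted above.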
  • the evaluation engine 206 retrieves the relative coordinates of pre-defined ROIs 210 from the memory of the computing device 200.
  • the evaluation engine 206 utilizes the relative coordinates of the pre-defined ROIs 210 and the absolute coordinates of the black pixels 212 to determine the absolute coordinates of the pre-defined ROIs 210.
  • the absolute coordinates of the pre-defined ROIs 210 are determined by performing an inverse operation of a keystone correction on the absolute coordinates of the black pixels 212 and the relative coordinates of the pre-defined ROIs 210.
  • the keystone correction is a function that skews an image to make the image rectangular.
  • in the inverse operation, a rectangular image is converted into a trapezoid. The inverse keystone correction enables obtaining an accurate position of the pre-defined ROIs 210 relative to the fiducials 208.
  • the evaluation engine 206 may apply an Affine transformation to convert the relative coordinates of the pre-defined ROIs 210 and the absolute coordinates of the black pixels 212 into absolute coordinates of the pre-defined ROIs 210. Accordingly, the evaluation engine 206 extracts the pre-defined ROIs 210 from the image 204.
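The Affine-transformation variant can be sketched as a least-squares fit that maps design-space (relative) fiducial coordinates onto the measured fiducial centroids, and is then applied to the relative ROI coordinates. The patent does not give a solver, so this numpy formulation is an assumption:

```python
import numpy as np

def fit_affine(relative_fiducials, absolute_fiducials):
    """Least-squares 2-D affine transform mapping relative (design-space)
    fiducial coordinates onto their absolute (measured) coordinates."""
    src = np.asarray(relative_fiducials, dtype=float)
    dst = np.asarray(absolute_fiducials, dtype=float)
    A = np.hstack([src, np.ones((len(src), 1))])  # rows of [x, y, 1]
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)   # 3x2 affine matrix
    return M

def to_absolute(M, relative_rois):
    """Apply the fitted transform to relative ROI coordinates."""
    pts = np.asarray(relative_rois, dtype=float)
    return np.hstack([pts, np.ones((len(pts), 1))]) @ M
```

Three fiducials determine the affine exactly; with four, the least-squares fit averages out measurement error (a projective transform, as mentioned above, would need the fourth point and a different model).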
  • upon extraction of the pre-defined ROIs 210, the evaluation engine 206 calculates a first SNR for the first pre-defined ROI 210-1, a second SNR for the second pre-defined ROI 210-2, and a third SNR for the third pre-defined ROI 210-3. In an example, the first SNR, the second SNR, and the third SNR are calculated based on a luminance of the first pre-defined ROI 210-1, the second pre-defined ROI 210-2, and the third pre-defined ROI 210-3, respectively. In an example, the evaluation engine 206 stores the data pertaining to the SNR of the first pre-defined ROI, the second pre-defined ROI, and the third pre-defined ROI as the SNR 214 in the memory of the computing device 200.
  • the evaluation engine 206 employs the SNR 214 to compute a variance of the first SNR with respect to the second SNR and the third SNR, the second SNR with respect to the third SNR and the first SNR, and the third SNR with respect to the first SNR and the second SNR.
  • the computation of the variance may be indicative of usefulness of the SNR of the pre-defined ROIs 210 for computation of the dynamic range.
  • the evaluation engine 206 stores the data pertaining to the variance of the first SNR, the second SNR, and the third SNR as the variance 216 in the memory of the computing device 200. For example, the evaluation engine 206 performs a linear regression on the first SNR, the second SNR, and the third SNR to compute the variance.
  • a regression line is drawn along the first SNR, the second SNR, and the third SNR.
  • the evaluation engine 206 determines the variance for the first pre-defined ROI 210-1, the second pre-defined ROI 210-2, and the third pre-defined ROI 210-3 based on a distance of the first SNR, the second SNR, and the third SNR from the regression line.
  • the evaluation engine 206 calculates chain pair distances between the first SNR, the second SNR, and the third SNR to compute the variance. In this case, the evaluation engine 206 plots the first SNR, the second SNR, and the third SNR with respect to each other. Thereafter, the evaluation engine 206 computes a distance of the first SNR with respect to the second SNR, the first SNR with respect to the third SNR, and the second SNR with respect to the third SNR. The pre-defined ROI having SNR with the largest distance from SNR of other ROIs is considered as noise by the evaluation engine 206.
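The chain-pair-distance check described above might look like this sketch, assuming the distance metric is the absolute difference between SNR values (the patent does not name the metric):

```python
import numpy as np

def noisiest_snr_index(snrs):
    """Index of the SNR with the largest summed pairwise distance from
    the other SNRs; that ROI is treated as noise and weighted 0."""
    s = np.asarray(snrs, dtype=float)
    totals = np.abs(s[:, None] - s[None, :]).sum(axis=1)  # chain pair distances
    return int(np.argmax(totals))

def snr_weights(snrs):
    """0/1 weights as described above: 0 for the noisiest SNR, 1 for the rest."""
    w = [1] * len(snrs)
    w[noisiest_snr_index(snrs)] = 0
    return w
```

For SNRs of 10, 11, and 30, the third value sits farthest from the other two, so it receives the weight of 0 and is discarded from the regression.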
  • the evaluation engine 206 associates a predefined weight with the first SNR, the second SNR, and the third SNR to obtain weighted SNR values.
  • the pre-defined weight may be either 0 or 1.
  • the evaluation engine 206 evaluates a dynamic range of the imaging device 202.
  • the evaluation engine 206 performs a linear regression on the weighted SNR values to obtain the dynamic range.
  • the evaluation engine 206 compares the dynamic range of the imaging device 202 with a pre-defined set of dynamic ranges. Based on the comparison, the evaluation engine 206 indicates whether the imaging device 202 meets the imaging quality demands or not. In an example, the evaluation engine 206 stores the dynamic range so computed as the dynamic range 218 in the memory of the computing device 200.
  • the above-described procedure for evaluating the dynamic range may be repeated to confirm the dynamic range of the imaging device 202.
  • three repetitions of the evaluation procedure provide consistent results pertaining to the dynamic range.
  • the present subject matter facilitates evaluating a high dynamic range (HDR) of imaging devices, such as the imaging device 202.
  • the above-described procedure enhances the image contrast of an image even when an HDR feature is turned off in the imaging device.
  • FIG. 3 illustrates a system environment 300 implementing a non- transitory computer readable medium for evaluating a dynamic range, according to an example.
  • the system environment 300 includes a processor 302 communicatively coupled to the non-transitory computer-readable medium 304 through a communication link 306.
  • the processor 302 may be a processor of a computing device for fetching and executing computer-readable instructions from the non-transitory computer-readable medium 304.
  • the non-transitory computer-readable medium 304 can be, for example, an internal memory device or an external memory device.
  • the communication link 306 may be a direct communication link, such as any memory read/write interface.
  • the communication link 306 may be an indirect communication link, such as a network interface.
  • the processor 302 can access the non-transitory computer-readable medium 304 through a communication network (not shown).
  • the non-transitory computer-readable medium 304 includes a set of computer-readable instructions for evaluating the dynamic range of an imaging device.
  • the set of computer-readable instructions may include instructions as explained in conjunction with FIGS. 1-2B.
  • the set of computer-readable instructions can be accessed by the processor 302 through the communication link 306 and subsequently executed to perform acts for evaluating the dynamic range of an imaging device.
  • the non-transitory computer-readable medium 304 may include instructions 308 to extract absolute coordinates of fiducials associated with an image captured by an imaging device.
  • the image may include a first pre-defined region of interest (ROI), a second predefined ROI, and a third pre-defined ROI.
  • the fiducials may act as reference points to determine a location of the first pre-defined ROI, the second pre-defined ROI, and the third pre-defined ROI.
  • the image may be processed using a thresholding approach. As a result, a location of the fiducials may be identified in the image.
  • the non-transitory computer-readable medium 304 may include instructions 310 to obtain relative coordinates of the first, the second, and the third pre-defined ROIs based on the absolute coordinates of the fiducials.
  • the relative coordinates along with the absolute coordinates of the fiducials define a boundary of the image.
  • the non-transitory medium 304 may include instructions 312 to transform the relative coordinates of the first, the second, and the third pre-defined ROIs into absolute coordinates of the first, the second, and the third pre-defined ROIs, to extract the first, the second, and the third pre-defined ROIs from the image.
  • the non-transitory medium 304 may further include instructions 314 to calculate a first signal-to-noise ratio (SNR) for the first pre-defined ROI, a second SNR for the second pre-defined ROI, and a third SNR for the third pre-defined ROI. Thereafter, a variance of the first SNR is computed with respect to the second SNR and the third SNR, the second SNR with respect to the third SNR and the first SNR, and the third SNR with respect to the first SNR and the second SNR.
  • the non-transitory medium 304 may include instructions 316 to associate a pre-defined weight with the first SNR, the second SNR, and the third SNR to obtain weighted SNR values.
  • the pre-defined weight may be between 0 and 1 and may be based on the variation between the first SNR, the second SNR, and the third SNR.
  • the non-transitory medium 304 may further include instructions 318 to perform a linear regression on the weighted SNR values to evaluate a dynamic range of the imaging device.
  • the non-transitory computer-readable medium 304 may include instructions to obtain the absolute coordinates of the first, the second, and the third pre-defined ROIs by performing an inverse operation of a keystone correction on the absolute coordinates of the fiducials.
  • the non-transitory computer-readable medium 304 may further include instructions to cause the processor to segment the image into the first, the second, and the third pre-defined ROIs such that the first, the second, and the third pre-defined ROIs correspond to different optical densities.

Abstract

In an example, a computing device extracts a first pre-defined region of interest (ROI), a second pre-defined ROI, and a third pre-defined ROI from an image captured by an imaging device. Further, a first signal-to-noise ratio (SNR), a second SNR, and a third SNR are calculated for the first, the second, and the third pre-defined ROIs, respectively. The computing device computes a variance of the first SNR with respect to the second SNR and the third SNR, the second SNR with respect to the third SNR and the first SNR, and the third SNR with respect to the first SNR and the second SNR. Based on the variance, a pre-defined weight is associated with the first SNR, the second SNR, and the third SNR to obtain weighted SNR values. Based on the weighted SNR values, the computing device evaluates the dynamic range of the imaging device.

Description

EVALUATION OF DYNAMIC RANGES OF IMAGING DEVICES
BACKGROUND
[0001] Dynamic range of an imaging device, such as a camera, a scanner, or a webcam, indicates a sensitivity of the imaging device. The dynamic range of an imaging device illustrates the difference between a maximum and a minimum strength of signals that are detectable by the imaging device.
BRIEF DESCRIPTION OF DRAWINGS
[0002] The following detailed description references the drawings, wherein:
[0003] FIG. 1 illustrates a computing device for evaluating a dynamic range of an imaging device, according to an example;
[0004] FIG. 2A illustrates a computing device for evaluating a dynamic range of an imaging device, according to an example;
[0005] FIG. 2B illustrates a target image for evaluating a dynamic range of an imaging device, according to an example; and
[0006] FIG. 3 illustrates a system environment implementing a non-transitory computer readable medium for evaluating a dynamic range of an imaging device, according to an example.
DETAILED DESCRIPTION
[0007] Generally, dynamic range of an imaging device is measured to determine whether the imaging device meets imaging quality demands. Measurement of the dynamic range of the imaging device provides different outcomes when the imaging device is tested as a stand-alone unit compared with when the imaging device is tested as a part of a system. For example, when an imaging device is mounted in a system, an enclosure of the system may increase heat and/or reduce light transmission due to optical signal decay resulting from a material surface positioned before the imaging device. This may introduce noise and consequently may degrade the dynamic range of the imaging device.
[0008] The dynamic range is generally evaluated within a testing facility. The imaging device may be mounted on a stand such that the imaging device is able to view a target. To be able to evaluate the dynamic range, the imaging device captures a target image at least 10 times and the dynamic range is computed for every target image. Further, to evaluate the dynamic range, the target image is segmented into no less than 25 grayscale patterns. As a result, the target image is designed to be able to accommodate the 25 grayscale patterns, thereby increasing a size of the target image. A target image with increased size may increase cost of the testing facility. Moreover, to achieve accurate results, the target image has to be uniformly illuminated. Providing uniform illumination to the target image may add to the cost of the testing facility. In addition, analysis of at least 25 grayscale patterns may make dynamic range computations complex and time consuming, especially when the computations have to be performed at least 10 times.
[0009] The present subject matter describes techniques for evaluating the dynamic range of an imaging device. The techniques of the present subject matter facilitate evaluating the dynamic range of the imaging device, such as a camera or a scanner, in a faster and more accurate manner.
[0010] According to an aspect, the imaging device may be associated with multiple fiducials. A fiducial may be an object placed in a field of view of the imaging device, which appears in a target image (hereinafter referred to as an image) produced, for use as a point of reference. The image captured by the imaging device is processed to identify the fiducials within the image. Based on the fiducials, a first, a second, and a third pre-defined region of interest (ROI) are extracted from the image. The ROIs may be specific grayscale patterns defined within the image. Thereafter, for the first, the second, and the third pre-defined ROI, a first, a second, and a third signal-to-noise ratio (SNR) is calculated to compute variance. For example, the variance is computed for the first SNR with respect to the second SNR and the third SNR, the second SNR with respect to the third SNR and the first SNR, and the third SNR with respect to the first SNR and the second SNR. Based on the variance, a pre-defined weight is associated with the first SNR, the second SNR, and the third SNR to obtain weighted SNR values. The pre-defined weight may be a value associated with the first SNR, the second SNR, and the third SNR. These weighted SNR values are utilized to perform a linear regression to evaluate the dynamic range of the imaging device.
[0011] The dynamic range so evaluated is compared with a pre-defined set of dynamic ranges to determine whether the dynamic range of the imaging device meets the imaging quality demands or not. In an example, if the dynamic range so calculated does not meet the imaging quality demands, the above-described procedure of capturing the image and performing the computations may be repeated for up to three target images to ascertain the dynamic range of the imaging device. The present subject matter further facilitates evaluating the dynamic range of the imaging device by segmenting the target image into a smaller number of grayscale patterns. As a result, the computations involved in the evaluation of the dynamic range are reduced. Accordingly, the techniques of the present subject matter are cost-efficient and save on computation time.
[0012] The present subject matter is further described with reference to the accompanying figures. Wherever possible, the same reference numerals are used in the figures and the following description to refer to the same or similar parts. It should be noted that the description and figures merely illustrate principles of the present subject matter. It is thus understood that various arrangements may be devised that, although not explicitly described or shown herein, encompass the principles of the present subject matter. Moreover, all statements herein reciting principles, aspects, and examples of the present subject matter, as well as specific examples thereof, are intended to encompass equivalents thereof.
[0013] FIG. 1 illustrates a computing device 100 for evaluating a dynamic range of an imaging device (not shown), according to an example. In an example, the computing device 100 may include a laptop, a smartphone, a tablet, a notebook computer, and the like. Further, the imaging device may include, but is not limited to, a camera and a scanner. The imaging device may be able to capture, store, and manipulate an image.
[0014] In one example, the computing device 100 may include a processor and a memory coupled to the processor. The processor may include microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, and/or any other devices that manipulate signals and data based on computer-readable instructions. Further, functions of the various elements shown in the figures, including any functional blocks labeled as "processor(s)", may be provided through the use of dedicated hardware as well as hardware capable of executing computer-readable instructions.
[0015] The memory, communicatively coupled to the processor, can include any non-transitory computer-readable medium known in the art including, for example, volatile memory, such as static random access memory (SRAM) and dynamic random access memory (DRAM), and/or non-volatile memory, such as read only memory (ROM), erasable programmable ROM, flash memories, hard disks, optical disks, and magnetic tapes.
[0016] The computing device 100 may also include interface(s). The interface(s) may include a variety of interfaces, for example, interfaces for users. The interface(s) may include data output devices. The interface(s) facilitate the communication of the computing device 100 with various communication and computing devices and various communication networks, such as networks that use a variety of protocols, for example, Real Time Streaming Protocol (RTSP), Hypertext Transfer Protocol (HTTP) Live Streaming (HLS), and Real-time Transport Protocol (RTP).
[0017] Further, the computing device 100 may include a dynamic range evaluation engine 102 (hereinafter referred to as the evaluation engine 102). The evaluation engine 102, amongst other things, includes routines, programs, objects, components, and data structures, which perform particular tasks or implement particular abstract data types. The evaluation engine 102 may also be implemented as signal processor(s), state machine(s), logic circuitries, and/or any other device or component that manipulates signals based on operational instructions. Further, the evaluation engine 102 can be implemented by hardware, by computer-readable instructions executed by a processing unit, or by a combination thereof. In one example, the evaluation engine 102 may include programs or coded instructions that supplement the applications or functions performed by the computing device 100.
[0018] In an example, the computing device 100 may include data. The data may include region of interest (ROI) data, variance data, dynamic range data, and other data. The other data may include data generated and saved by the evaluation engine 102 for implementing various functionalities of the computing device 100.
[0019] In an example, the evaluation engine 102 receives an image captured by the imaging device. The evaluation engine 102 processes the image to extract a plurality of pre-defined ROIs from the image. The plurality of pre-defined ROIs includes a first pre-defined ROI, a second pre-defined ROI, and a third pre-defined ROI. In an example, the plurality of ROIs is defined at the time of designing the image. For instance, the first, the second, and the third ROIs are defined such that the first ROI, the second ROI, and the third ROI have different optical densities. Though the present subject matter is explained with reference to the first ROI, the second ROI, and the third ROI, the number of pre-defined ROIs may vary based on the size of the image and imaging quality demands.
[0020] Further, the imaging device may be associated with fiducials such that the fiducials facilitate determining a location of the first, the second, and the third pre-defined ROIs in the image. Therefore, to extract the first, the second, and the third pre-defined ROIs, the evaluation engine 102 first identifies a location of the fiducials in the image. To do so, the evaluation engine 102 segments the image into a plurality of black and white pixels based on a thresholding approach. Details pertaining to the thresholding approach are described in conjunction with FIGS. 2A and 2B. Segmentation of the image based on the thresholding approach results in the fiducials being represented by the black pixels in the image.
[0021] Further, to remove any noise from the segmented image, the evaluation engine 102 may perform noise cancellation on the image to identify the black pixels representing the fiducials in an accurate manner. Upon identification of the black pixels, the evaluation engine 102 extracts absolute coordinates of the black pixels. For instance, the absolute coordinates are extracted from a centroid of the black pixels.
[0022] Based on the absolute coordinates of the black pixels, the evaluation engine 102 obtains absolute coordinates of the first, the second, and the third pre-defined ROIs. In an example, the evaluation engine 102 obtains the absolute coordinates of the first, the second, and the third pre-defined ROIs by performing an inverse operation of a keystone correction technique on the absolute coordinates of the black pixels. Details pertaining to the inverse operation of the keystone correction technique are described in conjunction with FIG. 2A. In an example, to obtain the absolute coordinates, the evaluation engine 102 obtains relative coordinates of the first, the second, and the third pre-defined ROIs, such as from the memory of the computing device 100. As the first ROI, the second ROI, and the third ROI are pre-defined, the relative coordinates corresponding to the first pre-defined ROI, the second pre-defined ROI, and the third pre-defined ROI are known. Thereafter, the evaluation engine 102 utilizes the relative coordinates of the first, the second, and the third pre-defined ROIs and the absolute coordinates of the black pixels representing the fiducials to obtain the absolute coordinates of the first, the second, and the third pre-defined ROIs. Accordingly, the evaluation engine 102 extracts the first, the second, and the third pre-defined ROIs from the image.
[0023] Upon extraction of the first, the second, and the third pre-defined ROIs, the evaluation engine 102 calculates a first signal-to-noise ratio (SNR) for the first pre-defined ROI, a second SNR for the second pre-defined ROI, and a third SNR for the third pre-defined ROI. In an example, the first SNR, the second SNR, and the third SNR are calculated based on a luminance channel associated with the first pre-defined ROI, the second pre-defined ROI, and the third pre-defined ROI. For example, the SNR may be calculated as:
SNR = L*mean / L*stdev
wherein L* is a luminance channel associated with the first pre-defined ROI, L*mean is an average L* computed based on a size of the first pre-defined ROI, and L*stdev is the standard deviation of the L* for the first pre-defined ROI.
[0024] The computation of the first SNR, the second SNR, and the third
SNR is based on, but is not limited to, a luminance of the image. In an example, the evaluation engine 102 may also compute the first SNR, the second SNR, and the third SNR based on other channel parameters, such as red, blue, or green channels of the image.
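As a minimal sketch of the per-ROI SNR computation described above, assuming the SNR is the mean of an ROI's L* values divided by their standard deviation (population standard deviation is one possible choice), and using made-up L* samples:

```python
from statistics import mean, pstdev

def roi_snr(l_star_values):
    """SNR of one pre-defined ROI: average L* divided by the standard
    deviation of L* (population standard deviation assumed here)."""
    return mean(l_star_values) / pstdev(l_star_values)

# Hypothetical L* samples from one grayscale patch of the target image.
first_roi = [80.0, 82.0, 78.0, 80.0]
print(roi_snr(first_roi))
```

The same function would be applied to the red, blue, or green channel values when a channel other than luminance is used.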
[0025] Further, the evaluation engine 102 computes a variance of the first
SNR with respect to the second SNR and the third SNR, the second SNR with respect to the third SNR and the first SNR, and the third SNR with respect to the first SNR and the second SNR. In an example, the evaluation engine 102 performs a linear regression of the first SNR, the second SNR, and the third SNR to compute the variance. In another example, the evaluation engine 102 computes the variance between the first SNR, the second SNR, and the third SNR by calculating chain pair distances between the first SNR, the second SNR, and the third SNR. In an example, the evaluation engine 102 stores the data pertaining to the variance of the first SNR, the second SNR, and the third SNR as the variance data in the memory of the computing device 100.
[0026] Based on the variance, the evaluation engine 102 associates a pre-defined weight with the first SNR, the second SNR, and the third SNR to obtain weighted SNR values for the first, the second, and the third pre-defined ROIs. In an example, the pre-defined weight may be either 0 or 1. For instance, when the deviation of the first SNR is more than that of the second SNR and the third SNR, a weight of 0 is assigned to the first SNR. Assigning a weight of 0 indicates that the first SNR is considered as noise and is discarded from further computation. The second SNR and the third SNR, with less variation, are associated with the weight of 1. Although the present subject matter is described with reference to the first pre-defined ROI, the second pre-defined ROI, and the third pre-defined ROI, the pre-defined ROIs may be more in number.
[0027] In an example, the evaluation engine 102 performs a linear regression on the weighted SNR values. Based on the linear regression, the evaluation engine 102 identifies the dynamic range of the imaging device. Thereafter, the evaluation engine 102 compares the dynamic range of the imaging device with a pre-defined set of dynamic ranges. If the dynamic range of the imaging device falls within the pre-defined set of dynamic ranges, the evaluation engine 102 generates a success report for the imaging device. The success report indicates that the imaging device meets the imaging quality demands. If the dynamic range of the imaging device does not fall within the pre-defined set of dynamic ranges, the evaluation engine 102 generates a failure report for the imaging device. The failure report indicates that the imaging device does not meet the imaging quality demands. In an example, the evaluation engine 102 stores the dynamic range of the imaging device and the pre-defined set of dynamic ranges as the dynamic range data in the memory of the computing device 100.
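One illustrative reading of the weighting and regression steps described above is sketched below. It assumes the SNRs are regressed against the ROIs' optical densities, that the single SNR farthest from the initial regression line receives weight 0 (treated as noise), and that the remaining SNRs, with weight 1, drive the final fit; the densities and SNR values are made up, and the mapping from the fitted line to a dynamic range value is not specified here.

```python
from statistics import mean

def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a*x + b; returns (a, b)."""
    mx, my = mean(xs), mean(ys)
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

def weighted_snr_fit(densities, snrs):
    """Assign weight 0 to the SNR farthest from the first regression
    line and weight 1 to the rest, then refit on the kept points."""
    a, b = fit_line(densities, snrs)
    residuals = [abs(y - (a * x + b)) for x, y in zip(densities, snrs)]
    worst = residuals.index(max(residuals))
    weights = [0 if i == worst else 1 for i in range(len(snrs))]
    kept_x = [x for x, w in zip(densities, weights) if w]
    kept_y = [y for y, w in zip(snrs, weights) if w]
    return weights, fit_line(kept_x, kept_y)
```

For instance, with five ROIs whose SNRs lie on a line except one outlier, the outlier receives weight 0 and the refit recovers the underlying trend.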
[0028] In an example, the above-described evaluation procedure may be repeated for three target images to confirm the dynamic range of the imaging device. Accordingly, the computations can be performed quicker. Further, as the present subject matter involves performing computations on a smaller set of ROIs, such as the first pre-defined ROI and the second pre-defined ROI, the computations are less complex. Furthermore, a testing facility employed for the present subject matter involves a target image segmented into a smaller number of grayscale patterns; accordingly, the present subject matter provides a cost-efficient technique for evaluating the dynamic range of the imaging device.
[0029] The above aspects and further details are described in conjunction with FIGS. 2A and 2B. FIG. 2A illustrates a computing device 200 for evaluating a dynamic range of an imaging device 202 and FIG. 2B illustrates a target image 204 for evaluating a dynamic range of the imaging device 202, according to an example. For the sake of brevity, FIGS. 2A and 2B are described together. The computing device 200 includes the imaging device 202 for capturing the target image 204 (as shown in FIG. 2B) (hereinafter referred to as the image 204). The computing device 200 further includes a dynamic range evaluation engine 206 (hereinafter referred to as the evaluation engine 206). The computing device 200 and the evaluation engine 206 are the same as the computing device 100 and the evaluation engine 102, respectively, described in FIG. 1.
[0030] The imaging device 202 may include, but is not limited to, a camera, a scanner, and a webcam. Although the imaging device 202 is shown to be integrated within the computing device 200, the imaging device 202 may be located externally to the computing device 200. The imaging device 202 may be coupled to the computing device 200 through a communication link. The communication link may be a wireless or a wired communication link. Further, the imaging device 202 is associated with fiducials 208. In an example, the fiducials 208 may be marks on the imaging device 202 or may be an external component placed in a field of view of the imaging device 202.
[0031] The fiducials 208 facilitate determining a location of a plurality of pre-defined regions of interest (ROIs) 210-1, 210-2, ..., 210-N, captured in the image 204. The plurality of pre-defined ROIs is collectively referred to as pre-defined ROIs 210. The pre-defined ROIs 210 are specific grayscale patterns defined within the image 204. The pre-defined ROIs 210 may include a first pre-defined ROI 210-1, a second pre-defined ROI 210-2, and a third pre-defined ROI 210-3. In an example of the present subject matter, four fiducials may be used for determining the location of the pre-defined ROIs 210 in the image 204. The number of fiducials 208 may be increased or decreased based on a level of accuracy desired from the image 204. In an example, the fiducials 208 may be used to determine how accurately coordinates of the pre-defined ROIs are identified. For example, two fiducials may provide x and y coordinates, three fiducials may be used to perform an Affine transformation, four fiducials may facilitate performing a Projective transformation, or more fiducials may facilitate lens distortion correction, etc.
[0032] The plurality of pre-defined ROIs 210 is defined at the time of designing the image 204 and relative coordinates of the pre-defined ROIs 210 are identified. In an example, the image 204 may include twelve pre-defined ROIs 210. The evaluation engine 206 processes the image 204 to extract the twelve pre-defined ROIs 210. The twelve ROIs are defined based on a set of optical densities. For instance, to select the twelve ROIs, a graph is plotted between the set of optical densities along the Y axis and the SNRs associated with corresponding optical densities from the set of optical densities along the X axis. If the graph illustrates a smooth curve, regions corresponding to the set of optical densities are selected as ROIs.
[0033] On the other hand, if the curve is not smooth, the optical densities, from the set of optical densities, that lie outside the curve are identified and replaced with higher or lower optical densities. Thereafter, the curve is again plotted to confirm a smooth decay in the curve. In an example, the above-described selection technique is implemented on a set of imaging devices to obtain an unbiased set of results. Upon receiving the smooth curve of the optical densities with respect to the SNR, the ROIs are defined in the image 204, which are then used for evaluation of the dynamic range. In an example, the evaluation engine 206 stores data pertaining to the pre-defined ROIs 210, such as the number of ROIs, the relative coordinates of the ROIs, and the optical densities, as the ROI data in a memory of the computing device 200.
[0034] To extract the pre-defined ROIs 210 from the image 204, the evaluation engine 206 first extracts the fiducials 208 associated with the image 204. As mentioned earlier, four fiducials may be used for determining the location of the pre-defined ROIs 210 in the image 204. In operation, upon receiving the image 204, the evaluation engine 206 processes the image 204 to identify the fiducials 208. For example, the image 204 is processed through a thresholding approach in which the image 204 is segmented into a plurality of pixels. Thereafter, each of the plurality of pixels is replaced with a black pixel or a white pixel based on a grey value of the pixel. The grey value indicates brightness of a pixel. For example, in case of an 8-bit grayscale image, each grey value ranges from 0-255. To replace the plurality of pixels with the black pixel and the white pixel, a threshold value of 128 is defined. If the grey value of a pixel exceeds 128, the pixel is replaced with a black pixel. If the grey value of a pixel is below 128, the pixel is replaced with a white pixel.
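The thresholding step above can be sketched as follows. The grid of grey values is a made-up example, and the replacement rule is implemented exactly as stated (a grey value above 128 maps to a black pixel); the centroid helper shows one way to read a fiducial's absolute coordinates out of the segmented result:

```python
THRESHOLD = 128  # threshold on the 0-255 grey value, as stated above

def threshold_image(grey):
    """Replace each grey value with a black (0) or white (255) pixel,
    following the stated rule: above the threshold -> black pixel."""
    return [[0 if g > THRESHOLD else 255 for g in row] for row in grey]

def fiducial_centroid(binary):
    """Centroid (row, col) of the black pixels, used as a fiducial's
    absolute reference coordinates."""
    coords = [(r, c) for r, row in enumerate(binary)
              for c, v in enumerate(row) if v == 0]
    rs = [r for r, _ in coords]
    cs = [c for _, c in coords]
    return (sum(rs) / len(rs), sum(cs) / len(cs))
```

In practice the centroid would be computed per connected blob of black pixels, one blob per fiducial.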
[0035] Once all the pixels are replaced with white and black pixels, the evaluation engine 206 processes the image 204 to remove noise from the image 204 to identify the black pixels 212 representing the fiducials 208. In an example, the evaluation engine 206 performs noise removal techniques on the image 204. The noise removal techniques may include, but are not limited to, dilation, erosion, and Gaussian blur. In an example, the evaluation engine 206 may also take into consideration a size of the fiducials 208 to identify the black pixels 212 representing the fiducials 208 in the image 204.
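As an illustration of one of the noise removal techniques named above, a minimal binary erosion over the segmented pixel grid might look like this (a 3x3 cross-shaped structuring element is assumed; a real pipeline would typically pair erosion with dilation and a blur):

```python
def erode(binary):
    """One pass of binary erosion: a black pixel (0) survives only if
    its four neighbours are also black, so isolated black specks
    (noise) disappear while solid fiducial blobs merely shrink."""
    h, w = len(binary), len(binary[0])
    out = [[255] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            if binary[r][c] == 0 and all(
                    0 <= r + dr < h and 0 <= c + dc < w
                    and binary[r + dr][c + dc] == 0
                    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1))):
                out[r][c] = 0
    return out
```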
[0036] Upon identification of the black pixels 212, the evaluation engine 206 extracts absolute coordinates of the black pixels 212. In an example, the fiducials 208 may be circular in shape. Accordingly, the evaluation engine 206 extracts the absolute coordinates of the fiducials 208 from a centroid of the black pixels 212. Although the present subject matter describes the fiducials 208 as circles, the fiducials 208 may be of any shape, such as rectangles, triangles, and bars. In case of rectangles, the evaluation engine 206 may extract the absolute coordinates of the fiducials 208 with respect to corners of the black pixels. Upon extracting the absolute coordinates of the black pixels 212, the evaluation engine 206 retrieves the relative coordinates of pre-defined ROIs 210 from the memory of the computing device 200.
[0037] The evaluation engine 206 utilizes the relative coordinates of the pre-defined ROIs 210 and the absolute coordinates of the black pixels 212 to determine the absolute coordinates of the pre-defined ROIs 210. In an example, the absolute coordinates of the pre-defined ROIs 210 are determined by performing an inverse operation of a keystone correction on the absolute coordinates of the black pixels 212 and the relative coordinates of the pre-defined ROIs 210. In an example, the keystone correction is a function that skews an image to make the image rectangular. In the inverse keystone correction, a rectangular image is converted into a trapezoid. The inverse keystone correction enables obtaining an accurate position of the pre-defined ROIs 210 relative to the fiducials 208. In an example, the evaluation engine 206 may apply an Affine transformation to convert the relative coordinates of the pre-defined ROIs 210 and the absolute coordinates of the black pixels 212 into absolute coordinates of the pre-defined ROIs 210. Accordingly, the evaluation engine 206 extracts the pre-defined ROIs 210 from the image 204.
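The Affine transformation mentioned above can be sketched as follows, using three fiducials whose relative coordinates are known by design and whose absolute coordinates are measured in the captured image; the solver and all point values are illustrative assumptions, not the claimed implementation:

```python
def solve3(m, v):
    """Solve a 3x3 linear system m @ x = v by Cramer's rule."""
    def det(a):
        return (a[0][0] * (a[1][1] * a[2][2] - a[1][2] * a[2][1])
              - a[0][1] * (a[1][0] * a[2][2] - a[1][2] * a[2][0])
              + a[0][2] * (a[1][0] * a[2][1] - a[1][1] * a[2][0]))
    d = det(m)
    xs = []
    for col in range(3):
        mc = [row[:] for row in m]
        for r in range(3):
            mc[r][col] = v[r]
        xs.append(det(mc) / d)
    return xs

def affine_from_fiducials(rel, abs_):
    """Affine coefficients mapping relative -> absolute coordinates,
    from three (relative, absolute) fiducial correspondences."""
    m = [[x, y, 1.0] for x, y in rel]
    ax = solve3(m, [p[0] for p in abs_])  # x' = ax . (x, y, 1)
    ay = solve3(m, [p[1] for p in abs_])  # y' = ay . (x, y, 1)
    return ax, ay

def to_absolute(pt, ax, ay):
    """Apply the affine map to one relative ROI coordinate."""
    x, y = pt
    return (ax[0] * x + ax[1] * y + ax[2],
            ay[0] * x + ay[1] * y + ay[2])
```

Once the map is recovered from the fiducials, every pre-defined ROI's relative corner coordinates can be converted to absolute pixel positions for extraction.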
[0038] Upon extraction of the pre-defined ROIs 210, the evaluation engine 206 calculates a first SNR for the first pre-defined ROI 210-1, a second SNR for the second pre-defined ROI 210-2, and a third SNR for the third pre-defined ROI 210-3. In an example, the first SNR, the second SNR, and the third SNR are calculated based on a luminance of the first pre-defined ROI 210-1, the second pre-defined ROI 210-2, and the third pre-defined ROI 210-3, respectively. In an example, the evaluation engine 206 stores the data pertaining to the SNR of the first pre-defined ROI, the second pre-defined ROI, and the third pre-defined ROI as the SNR 214 in the memory of the computing device 200.
[0039] Further, the evaluation engine 206 employs the SNR 214 to compute a variance of the first SNR with respect to the second SNR and the third SNR, the second SNR with respect to the third SNR and the first SNR, and the third SNR with respect to the first SNR and the second SNR. The computation of the variance may be indicative of the usefulness of the SNR of the pre-defined ROIs 210 for computation of the dynamic range. In an example, the evaluation engine 206 stores the data pertaining to the variance of the first SNR, the second SNR, and the third SNR as the variance 216 in the memory of the computing device 200. For example, the evaluation engine 206 performs a linear regression on the first SNR, the second SNR, and the third SNR to compute the variance. Accordingly, a regression line is drawn along the first SNR, the second SNR, and the third SNR. Based on the regression line, the evaluation engine 206 determines the variance for the first pre-defined ROI 210-1, the second pre-defined ROI 210-2, and the third pre-defined ROI 210-3 based on a distance of the first SNR, the second SNR, and the third SNR from the regression line.
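The regression-line distance described above might be sketched as follows; this is an illustrative reading in which the ROI index 0, 1, 2, ... serves as the x coordinate and the SNR values are made up:

```python
from statistics import mean

def snr_residuals(snrs):
    """Distance of each SNR from the least-squares line fitted through
    the SNRs (ROI index assumed as the x coordinate); a large residual
    marks that ROI's SNR as a candidate outlier."""
    xs = list(range(len(snrs)))
    mx, my = mean(xs), mean(snrs)
    a = sum((x - mx) * (y - my) for x, y in zip(xs, snrs)) / \
        sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return [abs(y - (a * x + b)) for x, y in zip(xs, snrs)]
```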
[0040] In another example, the evaluation engine 206 calculates chain pair distances between the first SNR, the second SNR, and the third SNR to compute the variance. In this case, the evaluation engine 206 plots the first SNR, the second SNR, and the third SNR with respect to each other. Thereafter, the evaluation engine 206 computes a distance of the first SNR with respect to the second SNR, the first SNR with respect to the third SNR, and the second SNR with respect to the third SNR. The pre-defined ROI having the SNR with the largest distance from the SNRs of the other ROIs is considered as noise by the evaluation engine 206.
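The chain pair distance alternative described above might be sketched as follows (an illustrative reading, with made-up SNR values):

```python
from itertools import combinations

def chain_pair_distances(snrs):
    """Pairwise absolute distances between SNR values, keyed by the
    index pair of the ROIs being compared."""
    return {(i, j): abs(snrs[i] - snrs[j])
            for i, j in combinations(range(len(snrs)), 2)}

def noisiest_roi(snrs):
    """Index of the SNR farthest, in total, from the other SNRs; per
    the description above, that ROI's SNR is treated as noise."""
    totals = [sum(abs(s - t) for t in snrs) for s in snrs]
    return totals.index(max(totals))
```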
[0041] Based on the variance, the evaluation engine 206 associates a pre-defined weight with the first SNR, the second SNR, and the third SNR to obtain weighted SNR values. In an example, the pre-defined weight may be either 0 or 1. For example, if the first SNR indicates a large variation from the regression line, the evaluation engine 206 associates a zero weight with the first SNR. Based on the weighted SNR values, the evaluation engine 206 evaluates a dynamic range of the imaging device 202. For example, the evaluation engine 206 performs a linear regression on the weighted SNR values to obtain the dynamic range. Upon obtaining the dynamic range, the evaluation engine 206 compares the dynamic range of the imaging device 202 with a pre-defined set of dynamic ranges. Based on the comparison, the evaluation engine 206 indicates whether the imaging device 202 meets the imaging quality demands or not. In an example, the evaluation engine 206 stores the dynamic range so computed as the dynamic range 218 in the memory of the computing device 200.
[0042] The above-described procedure for evaluating the dynamic range may be repeated to confirm the dynamic range of the imaging device 202. In an example, three repetitions of the evaluation procedure provide consistent results pertaining to the dynamic range. Further, the present subject matter facilitates evaluating high dynamic range (HDR) of imaging devices, such as the imaging device 202. The above-described procedure enhances image contrast of an image even when an HDR feature is turned off in the imaging device.
[0043] FIG. 3 illustrates a system environment 300 implementing a non-transitory computer readable medium for evaluating a dynamic range, according to an example. The system environment 300 includes a processor 302 communicatively coupled to the non-transitory computer-readable medium 304 through a communication link 306. In an example, the processor 302 may be a processor of a computing device for fetching and executing computer-readable instructions from the non-transitory computer-readable medium 304.
[0044] The non-transitory computer-readable medium 304 can be, for example, an internal memory device or an external memory device. In an example, the communication link 306 may be a direct communication link, such as any memory read/write interface. In another example, the communication link 306 may be an indirect communication link, such as a network interface. In such a case, the processor 302 can access the non-transitory computer-readable medium 304 through a communication network (not shown).
[0045] In an example, the non-transitory computer-readable medium 304 includes a set of computer-readable instructions for evaluating the dynamic range of an imaging device. The set of computer-readable instructions may include instructions as explained in conjunction with FIGS. 1-2B. The set of computer-readable instructions can be accessed by the processor 302 through the communication link 306 and subsequently executed to perform acts for evaluating the dynamic range of the imaging device.
[0046] Referring to FIG. 3, in an example, the non-transitory computer-readable medium 304 may include instructions 308 to extract absolute coordinates of fiducials associated with an image captured by an imaging device. The image may include a first pre-defined region of interest (ROI), a second pre-defined ROI, and a third pre-defined ROI. The fiducials may act as reference points to determine a location of the first pre-defined ROI, the second pre-defined ROI, and the third pre-defined ROI. In an example, the image may be processed using a thresholding approach. As a result, a location of the fiducials may be identified in the image.
[0047] The non-transitory computer-readable medium 304 may include instructions 310 to obtain relative coordinates of the first, the second, and the third pre-defined ROI based on the absolute coordinates of the fiducials. The relative coordinates, along with the absolute coordinates of the fiducials, define a boundary of the image. Further, the non-transitory medium 304 may include instructions 312 to transform the relative coordinates of the first, the second, and the third pre-defined ROI into absolute coordinates of the first, the second, and the third pre-defined ROIs to extract the first, the second, and the third pre-defined ROIs from the image.
[0048] The non-transitory medium 304 may further include instructions 314 to calculate a first signal-to-noise ratio (SNR) for the first pre-defined ROI, a second SNR for the second pre-defined ROI, and a third SNR for the third pre-defined ROI. Thereafter, a variance of the first SNR is computed with respect to the second SNR and the third SNR, of the second SNR with respect to the third SNR and the first SNR, and of the third SNR with respect to the first SNR and the second SNR. In addition, the non-transitory medium 304 may include instructions 316 to associate a pre-defined weight with the first SNR, the second SNR, and the third SNR to obtain weighted SNR values. In an example, the pre-defined weight may be between 0 and 1 and may be based on the variation between the first SNR, the second SNR, and the third SNR. The non-transitory medium 304 may further include instructions 318 to perform a linear regression on the weighted SNR values to evaluate a dynamic range of the imaging device. Although the present figure is described with reference to the first pre-defined ROI and the second pre-defined ROI, the number of pre-defined ROIs may be greater.
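The SNR, weighting, and regression chain of paragraph [0048] can be sketched as below. The specific weighting rule (inverse normalised deviation of each SNR from the mean of the other two, giving weights in (0, 1]) and the dynamic-range proxy (the optical density at which the fitted SNR falls to 1) are illustrative assumptions, not the disclosed formulas.

```python
import numpy as np

def weighted_snr_regression(rois, densities):
    """Per-ROI SNR, deviation-based weights, then a weighted linear
    fit of SNR against optical density. rois: list of NumPy arrays
    of pixel values; densities: matching optical densities.
    Returns (dynamic-range proxy, weights)."""
    snrs = np.array([roi.mean() / roi.std() for roi in rois])
    # Deviation of each SNR from the mean of the remaining SNRs,
    # standing in for the pairwise variance of paragraph [0048].
    dev = np.array([abs(s - np.delete(snrs, i).mean())
                    for i, s in enumerate(snrs)])
    # Normalised inverse-deviation weights in (0, 1]: outlier SNRs
    # contribute less to the fit.
    weights = 1.0 / (1.0 + dev / (dev.max() + 1e-12))
    # Weighted least-squares line: SNR ~ a * density + b.
    a, b = np.polyfit(densities, snrs, 1, w=weights)
    # Assumed dynamic-range proxy: density where fitted SNR drops to 1.
    return (1.0 - b) / a, weights
```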
[0049] In addition, the non-transitory computer-readable medium 304 may include instructions to obtain the absolute coordinates of the first, the second, and the third pre-defined ROIs by performing an inverse operation of a keystone correction on the absolute coordinates of the fiducials. The non-transitory computer-readable medium 304 may further include instructions to cause the processor to segment the image into the first, the second, and the third pre-defined ROIs such that the first, the second, and the third pre-defined ROIs each correspond to a different optical density.
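An inverse keystone correction of the kind named in paragraph [0049] can be modelled as a homography estimated from the four fiducial correspondences and applied to ROI corners defined in the corrected frame. The direct linear transform (DLT) below is a standard sketch of this idea, assumed here rather than taken from the disclosure; a real pipeline would refine the estimate with more correspondences.

```python
import numpy as np

def inverse_keystone(fiducials, corrected_corners):
    """Estimate the 3x3 homography H mapping points in the
    keystone-corrected frame back onto the raw capture, from four
    (corrected -> raw) point correspondences, via plain DLT."""
    A = []
    for (x, y), (u, v) in zip(corrected_corners, fiducials):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    # H (up to scale) is the right singular vector of A with the
    # smallest singular value.
    _, _, vt = np.linalg.svd(np.array(A, dtype=float))
    return vt[-1].reshape(3, 3)

def apply_h(H, pt):
    """Apply homography H to a 2-D point in homogeneous form."""
    u, v, w = H @ np.array([pt[0], pt[1], 1.0])
    return u / w, v / w
```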
[0050] Although examples of the present disclosure have been described in language specific to methods and/or structural features, it is to be understood that the present disclosure is not limited to the specific methods or features described. Rather, the methods and specific features are disclosed and explained as examples of the present disclosure.

Claims

We claim:
1. A computing device comprising:
a dynamic range evaluation engine, to,
extract a plurality of pre-defined regions of interest (ROIs) from an image captured by an imaging device, the plurality of pre-defined ROIs comprising a first pre-defined ROI, a second pre-defined ROI, and a third pre-defined ROI;
calculate a first signal-to-noise ratio (SNR) for the first pre-defined ROI, a second SNR for the second pre-defined ROI, and a third SNR for the third pre-defined ROI;
compute a variance of the first SNR with respect to the second SNR and the third SNR, a variance of the second SNR with respect to the first SNR and the third SNR, and a variance of the third SNR with respect to the first SNR and the second SNR;
based on the variance, associate a pre-defined weight with the first SNR, the second SNR, and the third SNR to obtain weighted SNR values; and
perform linear regression on the weighted SNR values to evaluate the dynamic range of the imaging device.
2. The computing device as claimed in claim 1, wherein the dynamic range evaluation engine is to:
compare the dynamic range of the imaging device with a pre-defined set of dynamic ranges; and
generate a success report when the dynamic range of the imaging device falls within the pre-defined set of dynamic ranges.
3. The computing device as claimed in claim 1, wherein to extract the plurality of pre-defined ROIs, the dynamic range evaluation engine is to:
identify fiducials associated with the image and, upon identification of the fiducials, extract absolute coordinates of the fiducials; and
obtain absolute coordinates of the plurality of pre-defined ROIs based on the absolute coordinates of the fiducials.
4. The computing device as claimed in claim 3, wherein to identify the fiducials, the dynamic range evaluation engine is to segment the image into a plurality of pixels.
5. The computing device as claimed in claim 3, wherein to obtain the absolute coordinates of the plurality of pre-defined ROIs, the dynamic range evaluation engine is to perform an inverse operation of a keystone correction on the absolute coordinates of the fiducials.
6. The computing device as claimed in claim 4, wherein to segment the image, the dynamic range evaluation engine is to divide the image into the first, the second, and the third pre-defined ROIs such that the first, the second, and the third pre-defined ROIs each correspond to a different optical density.
7. A computing device comprising:
an imaging device for capturing an image of a target image, the imaging device being associated with fiducials;
a dynamic range evaluation engine, to,
receive the image captured by the imaging device;
obtain absolute coordinates of the fiducials;
based on the absolute coordinates of the fiducials, extract a plurality of pre-defined regions of interest (ROIs) from the image, the plurality of pre-defined ROIs comprising a first pre-defined ROI, a second pre-defined ROI, and a third pre-defined ROI;
calculate a first signal-to-noise ratio (SNR) for the first pre-defined ROI, a second SNR for the second pre-defined ROI, and a third SNR for the third pre-defined ROI to compute a variance of the first SNR with respect to the second SNR and the third SNR, a variance of the second SNR with respect to the third SNR and the first SNR, and a variance of the third SNR with respect to the first SNR and the second SNR;
based on the variance, associate a pre-defined weight with the first SNR, the second SNR, and the third SNR to obtain weighted SNR values; and
evaluate a dynamic range of the imaging device based on the weighted SNR values.
8. The computing device as claimed in claim 7, wherein the dynamic range evaluation engine is to,
obtain absolute coordinates of the plurality of pre-defined ROIs, based on the absolute coordinates of the fiducials, to extract the plurality of pre-defined ROIs.
9. The computing device as claimed in claim 7, wherein the dynamic range evaluation engine is to determine the first SNR, the second SNR, and the third SNR based on a brightness of the first pre-defined ROI, the second pre-defined ROI, and the third pre-defined ROI respectively.
10. The computing device as claimed in claim 7, wherein the dynamic range evaluation engine is to perform a linear regression to determine the variance of the first SNR with respect to the second SNR and the third SNR, the second SNR with respect to the third SNR and the first SNR, and the third SNR with respect to the first SNR and the second SNR.
11. The computing device as claimed in claim 7, wherein the dynamic range evaluation engine is to compute a chain pair distance to determine the variance of the first SNR with respect to the second SNR and the third SNR, the second SNR with respect to the third SNR and the first SNR, and the third SNR with respect to the first SNR and the second SNR.
12. The computing device as claimed in claim 7, wherein the dynamic range evaluation engine is to segment the image into the first, the second, and the third pre-defined ROIs such that the first, the second, and the third pre-defined ROIs each correspond to a different optical density.
13. A non-transitory computer-readable medium comprising computer-readable instructions, which, when executed by a processor of a computing device, cause the processor to:
extract absolute coordinates of fiducials associated with an image captured by an imaging device, wherein the image comprises first, second, and third pre-defined regions of interest (ROIs);
obtain absolute coordinates of the first, the second, and the third pre-defined ROIs based on the absolute coordinates of the fiducials to extract the first, the second, and the third pre-defined ROIs from the image;
calculate a first signal-to-noise ratio (SNR) for the first pre-defined ROI, a second SNR for the second pre-defined ROI, and a third SNR for the third pre-defined ROI to compute a variance of the first SNR with respect to the second SNR and the third SNR, a variance of the second SNR with respect to the third SNR and the first SNR, and a variance of the third SNR with respect to the first SNR and the second SNR;
based on the variance, associate a pre-defined weight with the first SNR, the second SNR, and the third SNR to obtain weighted SNR values; and
perform a linear regression on the weighted SNR values to evaluate a dynamic range of the imaging device.
14. The non-transitory computer-readable medium as claimed in claim 13, wherein the instructions, when executed by the processor, cause the processor to obtain the absolute coordinates of the first, the second, and the third pre-defined ROIs by performing an inverse operation of a keystone correction on the absolute coordinates of the fiducials.
15. The non-transitory computer-readable medium as claimed in claim 13, wherein the instructions, when executed by the processor, cause the processor to segment the image into the first, the second, and the third pre-defined ROIs such that the first, the second, and the third pre-defined ROIs each correspond to a different optical density.
PCT/US2017/043850 2017-07-26 2017-07-26 Evaluation of dynamic ranges of imaging devices WO2019022728A1 (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
EP17918930.3A EP3659333A4 (en) 2017-07-26 2017-07-26 Evaluation of dynamic ranges of imaging devices
US16/634,106 US10958899B2 (en) 2017-07-26 2017-07-26 Evaluation of dynamic ranges of imaging devices
CN201780095351.7A CN111213372B (en) 2017-07-26 2017-07-26 Evaluation of dynamic range of imaging device
PCT/US2017/043850 WO2019022728A1 (en) 2017-07-26 2017-07-26 Evaluation of dynamic ranges of imaging devices

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/US2017/043850 WO2019022728A1 (en) 2017-07-26 2017-07-26 Evaluation of dynamic ranges of imaging devices

Publications (1)

Publication Number Publication Date
WO2019022728A1 true WO2019022728A1 (en) 2019-01-31

Family

ID=65039795

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2017/043850 WO2019022728A1 (en) 2017-07-26 2017-07-26 Evaluation of dynamic ranges of imaging devices

Country Status (4)

Country Link
US (1) US10958899B2 (en)
EP (1) EP3659333A4 (en)
CN (1) CN111213372B (en)
WO (1) WO2019022728A1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120050474A1 (en) * 2009-01-19 2012-03-01 Sharp Laboratories Of America, Inc. Stereoscopic dynamic range image sequence
FR2996034A1 (en) * 2012-09-24 2014-03-28 Jacques Joffre Method for generating high dynamic range image representing scene in e.g. digital still camera, involves generating composite images by superposition of obtained images, and generating high dynamic range image using composite images
US20140375318A1 (en) * 2012-01-20 2014-12-25 General Hospital Corporation System and method for field map estimation
US20170150028A1 (en) * 2015-11-19 2017-05-25 Google Inc. Generating High-Dynamic Range Images Using Varying Exposures

Family Cites Families (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1996010237A1 (en) * 1994-09-20 1996-04-04 Neopath, Inc. Biological specimen analysis system processing integrity checking apparatus
US5760829A (en) * 1995-06-06 1998-06-02 United Parcel Service Of America, Inc. Method and apparatus for evaluating an imaging device
US6066949A (en) * 1997-11-19 2000-05-23 The Board Of Trustees Of The Leland Stanford Junior University Gradient characterization using fourier-transform
US6579238B1 (en) * 2000-04-24 2003-06-17 Acuson Corporation Medical ultrasonic imaging system with adaptive multi-dimensional back-end mapping
CN101099317B (en) * 2004-11-16 2015-06-10 高通股份有限公司 Open-loop rate control for a tdd communication system
US7623683B2 (en) * 2006-04-13 2009-11-24 Hewlett-Packard Development Company, L.P. Combining multiple exposure images to increase dynamic range
US20090002530A1 (en) * 2007-06-27 2009-01-01 Texas Instruments Incorporated Apparatus and method for processing images
US8073234B2 (en) * 2007-08-27 2011-12-06 Acushnet Company Method and apparatus for inspecting objects using multiple images having varying optical properties
US8934034B2 (en) * 2008-03-28 2015-01-13 The Trustees Of Columbia University In The City Of New York Generalized assorted pixel camera systems and methods
CN101915907A (en) * 2010-07-07 2010-12-15 重庆大学 Pulse radar echo signal generator and signal generating method thereof
US8867782B2 (en) * 2012-06-19 2014-10-21 Eastman Kodak Company Spectral edge marking for steganography or watermarking
US9639915B2 (en) * 2012-08-08 2017-05-02 Samsung Electronics Co., Ltd. Image processing method and apparatus
EP3869797B1 (en) * 2012-08-21 2023-07-19 Adeia Imaging LLC Method for depth detection in images captured using array cameras
US8780210B1 (en) * 2013-02-01 2014-07-15 Videoq, Inc. Video quality analyzer
JP2014194706A (en) * 2013-03-29 2014-10-09 Sony Corp Image processor, image processing method and program
US9639935B1 (en) * 2016-05-25 2017-05-02 Gopro, Inc. Apparatus and methods for camera alignment model calibration


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3659333A4 *

Also Published As

Publication number Publication date
US10958899B2 (en) 2021-03-23
EP3659333A4 (en) 2021-03-10
US20200213582A1 (en) 2020-07-02
CN111213372A (en) 2020-05-29
EP3659333A1 (en) 2020-06-03
CN111213372B (en) 2022-06-14

Similar Documents

Publication Publication Date Title
Pape et al. 3-D histogram-based segmentation and leaf detection for rosette plants
CN109997351B (en) Method and apparatus for generating high dynamic range images
CN108898132B (en) Terahertz image dangerous article identification method based on shape context description
CN113469971B (en) Image matching method, detection device and storage medium
CN111695373B (en) Zebra stripes positioning method, system, medium and equipment
CN116703909B (en) Intelligent detection method for production quality of power adapter
JP2009259036A (en) Image processing device, image processing method, image processing program, recording medium, and image processing system
CN116228780A (en) Silicon wafer defect detection method and system based on computer vision
CN111080683B (en) Image processing method, device, storage medium and electronic equipment
WO2024016632A1 (en) Bright spot location method, bright spot location apparatus, electronic device and storage medium
CN109741370B (en) Target tracking method and device
US10958899B2 (en) Evaluation of dynamic ranges of imaging devices
CN116681677A (en) Lithium battery defect detection method, device and system
CN108805883B (en) Image segmentation method, image segmentation device and electronic equipment
CN110738656A (en) Method for evaluating definition of certificate photos, storage medium and processor
CN116993654A (en) Camera module defect detection method, device, equipment, storage medium and product
CN115272173A (en) Tin ball defect detection method and device, computer equipment and storage medium
CN112734721B (en) Optical axis deflection angle detection method, device, equipment and medium
GB2440951A (en) Edge detection for checking component position on a circuit board
JP2004134861A (en) Resolution evaluation method, resolution evaluation program, and optical apparatus
CN111062984A (en) Method, device and equipment for measuring area of video image region and storage medium
CN113469171A (en) Method, device and medium for identifying interest area in SFR test card image
CN111598943A (en) Book in-position detection method, device and equipment based on book auxiliary reading equipment
CN115830431B (en) Neural network image preprocessing method based on light intensity analysis
CN112712499B (en) Object detection method and device and computer readable storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17918930

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2017918930

Country of ref document: EP

Effective date: 20200226