CN114757946A - Method and system for detecting linearity of camera - Google Patents

Method and system for detecting linearity of camera

Info

Publication number
CN114757946A
Authority
CN
China
Prior art keywords
gray
gray level
value
scene
linearity
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210662730.2A
Other languages
Chinese (zh)
Inventor
杨敏
李群
吕祥
陈武
帅敏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan Jingce Electronic Group Co Ltd
Wuhan Jingli Electronic Technology Co Ltd
Original Assignee
Wuhan Jingce Electronic Group Co Ltd
Wuhan Jingli Electronic Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan Jingce Electronic Group Co Ltd and Wuhan Jingli Electronic Technology Co Ltd
Priority to CN202210662730.2A
Publication of CN114757946A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T5/90
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10024Color image

Abstract

The application relates to a method for detecting the linearity of a camera, which comprises the following steps: carrying out image equalization (averaging) processing on groups of images shot under different gray-level scenes to obtain a plurality of mean images; obtaining the gray-level mean of the mean image under each gray-level scene; fitting a gray calculation function by a least square method based on the gray-level mean of each mean image and each gray-level scene; calculating, from the gray calculation function and each gray-level scene, the calculated gray value corresponding to each scene; dividing, for each gray-level scene, the absolute value of the gray-level difference by the corresponding calculated gray value to obtain that scene's linearity, where the gray-level difference is the gray-level mean of the mean image in that scene minus the corresponding calculated gray value; and comparing each linearity with a preset value: if each linearity is smaller than the preset value, the linearity of the camera is qualified. The method makes it possible to detect the linearity of a camera.

Description

Method and system for detecting linearity of camera
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to a method and a system for detecting linearity of a camera.
Background
The imaging process of an industrial camera pixel is as follows: the sensor generates electrons under incident light (photoelectric conversion), the electrons are converted into signal charge, the signal charge is converted into an output voltage, and the output voltage maps linearly to the final pixel value. Although photoelectric conversion itself is a linear process, the other stages may be non-linear. As a result, at a fixed exposure time, the imaging pixel value (the gray value of the pixel) of an industrial camera may not be linear in the brightness of the incident light (the light source). Linearity is an important index of the static characteristics of a sensor, and the linearity of a camera strongly affects the quality of the acquired images and even the detection results built on them.
Therefore, how to detect the linearity of a camera, so as to determine whether the camera is qualified, is an urgent problem to be solved.
Disclosure of Invention
The embodiment of the application provides a method and a system for detecting the linearity of a camera, which can detect the linearity of the camera.
In a first aspect, a method for detecting linearity of a camera is provided, which includes the following steps:
carrying out image equalization (averaging) processing on groups of images shot under different gray-level scenes to obtain a plurality of mean images;
obtaining the gray-level mean of the mean image under each gray-level scene;
fitting a gray calculation function by using a least square method based on the gray-level mean of each mean image and each gray-level scene;
calculating the calculated gray value corresponding to each gray-level scene based on the gray calculation function and each gray-level scene;
dividing, for each gray-level scene, the absolute value of the gray-level difference by the corresponding calculated gray value to obtain the linearity of that scene, wherein the gray-level difference is the gray-level mean of the mean image in that scene minus the corresponding calculated gray value;
and comparing each linearity with a preset value; if each linearity is smaller than the preset value, the linearity of the camera is qualified.
In some embodiments, the detection method further comprises a step of adjusting the different gray-level scenes, the step comprising:
keeping the illumination intensity constant, and adjusting different exposure times to obtain different gray level scenes.
In some embodiments, fitting a gray level calculation function based on the gray level mean of each mean image and each gray level scene by using a least square method comprises the following steps:
Correlating the gray level average value of the average value image with the exposure time of the corresponding gray level scene to obtain a fitting point;
and fitting each fitting point by using a least square method to obtain a gray scale calculation function of the gray scale value relative to the exposure time.
In some embodiments, calculating a gray scale calculation value corresponding to each gray scale scene based on the gray scale calculation function and each gray scale scene includes the following steps:
and substituting the exposure time corresponding to each gray scene into a gray calculation function to obtain a gray calculation value corresponding to each gray scene.
In some embodiments, the detection method further comprises a step of adjusting the different gray-level scenes, the step comprising:
keeping the exposure time constant, and adjusting different illumination intensities to obtain different gray level scenes.
In some embodiments, fitting a gray level calculation function based on the gray level mean of each mean image and each gray level scene by using a least square method comprises:
correlating the gray level average value of the average value image with the illumination intensity of the corresponding gray level scene to obtain a fitting point;
and fitting each fitting point by using a least square method to obtain a gray calculation function of the gray value with respect to the illumination intensity.
In some embodiments, calculating gray scale calculation values corresponding to each gray scale scene based on the gray scale calculation function and each gray scale scene comprises:
and substituting the illumination intensity corresponding to each gray scene into the gray calculation function to obtain the gray calculation value corresponding to each gray scene.
In some embodiments, the step of respectively obtaining the grayscale mean of the mean image in each grayscale scene includes the following steps:
and dividing the part of the mean image, which is positioned in the central area, into a bright area, acquiring the gray average value of all pixel points in the bright area, and taking the gray average value as the gray average value of the mean image.
In some embodiments, the step of respectively obtaining the grayscale mean of the mean image in each grayscale scene includes the following steps:
comparing the gray value of each pixel point in the mean image with a gray threshold value, and screening out the pixel points with the gray values larger than the gray threshold value as good points;
dividing the area where all the good points are located into bright areas as a whole;
and obtaining the gray average value of all pixel points in the bright area, and taking the gray average value as the gray average value of the average image.
In a second aspect, a system for detecting linearity of a camera is provided, which includes:
a first module configured to: carry out image equalization (averaging) processing on groups of images shot under different gray-level scenes to obtain a plurality of mean images, and obtain the gray-level mean of the mean image under each gray-level scene;
a second module configured to: fit a gray calculation function by using a least square method based on the gray-level mean of each mean image and each gray-level scene;
a third module configured to: calculate the calculated gray value corresponding to each gray-level scene based on the gray calculation function and each gray-level scene;
a fourth module configured to: divide, for each gray-level scene, the absolute value of the gray-level difference by the corresponding calculated gray value to obtain the linearity of that scene, the gray-level difference being the gray-level mean of the mean image in that scene minus the corresponding calculated gray value;
a fifth module configured to: compare each linearity with a preset value, and if each linearity is less than the preset value, determine that the camera linearity is qualified.
The technical solutions provided by the present application bring the following beneficial effects:
the embodiment of the application provides a method and a system for detecting linearity of a camera, wherein a gray mean value is obtained by carrying out averaging processing on images shot under a plurality of different gray scenes, and a gray calculation function is fitted by combining the gray scenes; the gray calculation values under different gray scenes are calculated by utilizing the gray calculation function, and the linearity can be calculated by combining the gray calculation values and the gray mean value, so that the linearity of the camera can be detected to judge the qualified condition of the camera.
Moreover, the captured images serve both for fitting the gray calculation function and for calculating the linearity; no images need to be re-shot for the linearity calculation, so the method reduces operation steps and improves detection efficiency.
Drawings
In order to illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described below are only some embodiments of the present application, and those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flowchart of a method for detecting linearity of a camera according to an embodiment of the present disclosure;
fig. 2 is a block diagram of a system for detecting linearity of a camera according to an embodiment of the present disclosure.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in the present application without making creative efforts shall fall within the protection scope of the present application.
Referring to fig. 1, an embodiment of the present application provides a method for detecting linearity of a camera, which includes the following steps:
101: and respectively carrying out image equalization processing on a plurality of groups of images shot under different gray level scenes to obtain a plurality of mean value images.
It should be noted that when the camera is used to capture images, 10 images may be taken as a group, or more or fewer, with the specific number determined according to actual needs.
Cameras are either black-and-white or color, and the images they capture are correspondingly black-and-white or color images.
A black-and-white camera produces black-and-white images; a color camera produces color images.
The principle of the image equalization (averaging) processing is the same for both.
Suppose a black-and-white image has 10 x 10 pixels and a group contains 15 images.
Each pixel of a black-and-white image has a single channel, the Y channel, so the image equalization processing proceeds as follows:
add the Y-channel values of the pixel at coordinates (1, 1) across the 15 images and take the average as the Y-channel value of the pixel at (1, 1) in the mean image; do the same for the pixel at (1, 2); and so on for every coordinate, yielding the black-and-white mean image.
Suppose a color image has 10 x 10 pixels and a group contains 15 images.
Each pixel of a color image has three channels: R, G and B. The image equalization processing proceeds as follows:
add the R-channel values of the pixel at coordinates (1, 1) across the 15 images and take the average as the R-channel value of the pixel at (1, 1) in the mean image; do the same for the G channel and the B channel of that pixel; repeat for the pixel at (1, 2) and every other coordinate, yielding the color mean image.
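The per-pixel, per-channel averaging described above can be sketched as follows; the `mean_image` function name, the array shapes and the synthetic group of images are illustrative assumptions, not part of the patent:

```python
import numpy as np

def mean_image(images):
    """Average a group of images pixel-wise (and channel-wise for color).

    images: list of H x W (black-and-white) or H x W x 3 (color) arrays
    captured under a single gray-level scene.
    """
    stack = np.stack([np.asarray(img, dtype=np.float64) for img in images])
    return stack.mean(axis=0)  # per-pixel, per-channel average

# Example: a group of 15 synthetic 10 x 10 black-and-white images
# whose constant gray values run from 100 to 114.
group = [np.full((10, 10), v, dtype=np.uint8) for v in range(100, 115)]
avg = mean_image(group)
# Every pixel of the mean image equals the mean of 100..114, i.e. 107.0
```

The same call handles a group of color images, since the averaging axis is the group axis and the channel axis is left untouched.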
The following steps operate on gray values. A black-and-white mean image needs no further processing, but a color mean image must be converted to grayscale, for example by the floating-point (weighted) method, the integer method, the shift method, the average method, or by taking only the green channel.
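Three of the conversion methods just listed can be sketched as follows. The 0.299/0.587/0.114 weights in the floating-point method are the common BT.601 luma coefficients, an assumption on our part, since the patent does not fix the weights:

```python
import numpy as np

def to_gray_weighted(rgb):
    """Floating-point method: weighted sum of R, G and B channels."""
    return 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]

def to_gray_average(rgb):
    """Average method: arithmetic mean of the three channels."""
    return rgb.mean(axis=-1)

def to_gray_green_only(rgb):
    """'Only green' method: take the G channel directly."""
    return rgb[..., 1]

px = np.array([[[100.0, 150.0, 200.0]]])  # a single RGB pixel
# weighted: 0.299*100 + 0.587*150 + 0.114*200 = 140.75
# average:  (100 + 150 + 200) / 3 = 150.0
# green only: 150.0
```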
The gray scenes are distinguished by gray percentages which can be adjusted according to actual needs.
As an example, the illumination intensity may be kept constant while different exposure times are set, to obtain gray-level scenes with different gray percentages.
As an example, the exposure time may be kept constant while different illumination intensities are set, to obtain gray-level scenes with different gray percentages.
102: and respectively obtaining the gray level average value of the average value image in each gray level scene.
Step 102 can be carried out in a variety of ways.
As an example, the gray values of all pixels in the mean image may be summed and averaged to obtain the gray mean.
In practice, because of the lens, the gray values of the pixels in an image often differ considerably: the center is bright and the edges are dark, sometimes markedly so. To reduce the resulting error, only a portion of the pixels in the mean image may be selected for calculating the gray mean.
Specifically, in one mode, the central portion of the mean image is divided off as a bright area, and the gray mean of all pixels in the bright area is used as the gray mean of the mean image. Selecting the central portion directly avoids the low-brightness edge regions; the size of the selected portion can be chosen according to actual needs, for example 20 x 20 pixels, or larger or smaller.
In another mode, the gray value of every pixel in the mean image is compared with a gray threshold, and the pixels whose gray value exceeds the threshold are kept as good points; the region containing all the good points is taken as a whole as the bright area, and the gray mean of all pixels in it is used as the gray mean of the mean image. The threshold can be set according to actual needs.
In addition, the bright areas in the different mean images may be the same size or different sizes; to make the result more accurate and closer to the actual situation, the same size is preferable. Likewise, the positions of the bright areas in the mean images may or may not coincide; coinciding positions are best for accuracy.
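Both bright-area strategies above can be sketched as follows; the 20 x 20 crop size, the threshold value and the synthetic image are illustrative assumptions:

```python
import numpy as np

def gray_mean_center(mean_img, size=20):
    """Mode 1: gray mean of a size x size crop at the image center."""
    h, w = mean_img.shape
    r0, c0 = (h - size) // 2, (w - size) // 2
    return float(mean_img[r0:r0 + size, c0:c0 + size].mean())

def gray_mean_threshold(mean_img, thresh):
    """Mode 2: gray mean of all 'good points' brighter than thresh."""
    return float(mean_img[mean_img > thresh].mean())

# A synthetic mean image: dark edges (50), bright center (200),
# mimicking the center-bright, edge-dark effect of the lens.
img = np.full((100, 100), 50.0)
img[40:60, 40:60] = 200.0
# The 20 x 20 central crop falls entirely inside the bright patch,
# so both modes return 200.0 for this image.
```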
103: and fitting a gray degree calculation function by using a least square method based on the gray level average value of each average value image and each gray level scene.
The manner of fitting the gray calculation function depends on how the gray-level scenes were adjusted.
When the scenes are obtained by keeping the illumination intensity constant and adjusting the exposure time, the gray calculation function is fitted with the following steps:
201: and associating the gray level average value of the average value image with the exposure time of the corresponding gray level scene to obtain a fitting point.
For example, let the exposure time of the first gray-level scene be t1 and the gray mean of the corresponding first mean image be A1; the first fitting point is (t1, A1). Let the exposure time of the second scene be t2 and the corresponding gray mean be A2; the second fitting point is (t2, A2). Let the exposure time of the third scene be t3 and the corresponding gray mean be A3; the third fitting point is (t3, A3). And so on.
202: and fitting each fitting point by using a least square method to obtain a gray scale calculation function of the gray scale value relative to the exposure time.
Fitting these points yields the gray calculation function y = kx + b, where the independent variable x is the exposure time, the dependent variable y is the gray value, k is a coefficient and b is a constant.
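The least-squares fit of the points (t1, A1), (t2, A2), ... to the line y = kx + b can be sketched with `numpy.polyfit`; the exposure times and gray means below are made-up numbers purely for illustration:

```python
import numpy as np

# Fitting points: (exposure time, gray mean of the mean image).
t = np.array([10.0, 20.0, 30.0, 40.0])   # illustrative exposure times t1..t4
A = np.array([25.0, 45.0, 65.0, 85.0])   # illustrative gray means A1..A4

k, b = np.polyfit(t, A, deg=1)           # least-squares line y = k*x + b
# For this perfectly linear data, k = 2.0 and b = 5.0.

calculated = k * t + b                   # calculated gray values A1'..A4'
```

The illumination-intensity variant (y = k'x + b') is identical in form: the first coordinate of each fitting point is the illumination intensity instead of the exposure time.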
When the scenes are obtained by keeping the exposure time constant and adjusting the illumination intensity, the gray calculation function is fitted with the following steps:
301: and correlating the gray level average value of the average value image with the illumination intensity of the corresponding gray level scene to obtain a fitting point.
For example, let the illumination intensity of the first gray-level scene be I1 and the gray mean of the corresponding first mean image be A1; the first fitting point is (I1, A1). Let the illumination intensity of the second scene be I2 and the corresponding gray mean be A2; the second fitting point is (I2, A2). Let the illumination intensity of the third scene be I3 and the corresponding gray mean be A3; the third fitting point is (I3, A3). And so on.
302: and fitting each fitting point by using a least square method to obtain a gray calculation function of the gray value about the illumination intensity.
Fitting these points yields the gray calculation function y = k'x + b', where the independent variable x is the illumination intensity, the dependent variable y is the gray value, k' is a coefficient and b' is a constant.
104: and calculating gray scale calculation values corresponding to the gray scale scenes on the basis of the gray scale calculation function and the gray scale scenes.
When the scenes are obtained by keeping the illumination intensity constant and adjusting the exposure time, substitute the exposure time of each gray-level scene into the gray calculation function y = kx + b to obtain each scene's calculated gray value.
For example, substituting the exposure time t1 of the first gray-level scene into y = kx + b gives the calculated gray value A1' of the first scene; substituting t2 of the second scene gives A2'; substituting t3 of the third scene gives A3'; and so on.
When the scenes are obtained by keeping the exposure time constant and adjusting the illumination intensity, substitute the illumination intensity of each gray-level scene into the gray calculation function y = k'x + b' to obtain each scene's calculated gray value.
For example, substituting the illumination intensity I1 of the first gray-level scene into y = k'x + b' gives the calculated gray value A1'' of the first scene; substituting I2 of the second scene gives A2''; substituting I3 of the third scene gives A3''; and so on.
105: and dividing the absolute value of the gray level difference value of each gray level scene with the corresponding gray level calculated value to obtain the linearity corresponding to each gray level scene, wherein the gray level difference value is obtained by subtracting the corresponding gray level calculated value from the gray level mean value of the mean value image in each gray level scene.
For example, when the scenes are obtained by keeping the illumination intensity constant and adjusting the exposure time:
Linearity of the first gray-level scene = |A1 - A1'| / A1'.
Linearity of the second gray-level scene = |A2 - A2'| / A2'.
Linearity of the third gray-level scene = |A3 - A3'| / A3'.
And so on.
As another example, when the scenes are obtained by keeping the exposure time constant and adjusting the illumination intensity:
Linearity of the first gray-level scene = |A1 - A1''| / A1''.
Linearity of the second gray-level scene = |A2 - A2''| / A2''.
Linearity of the third gray-level scene = |A3 - A3''| / A3''.
And so on.
106: and comparing each linearity with a preset value, wherein if the linearity is smaller than the preset value, the linearity of the camera is qualified, otherwise, the linearity of the camera is unqualified.
The preset value can be set according to actual needs.
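Steps 105 and 106 can be sketched end to end as follows; the measured gray means, the fitted values and the 5% preset value are illustrative assumptions, since the patent leaves the preset value to be set according to actual needs:

```python
import numpy as np

def camera_linearity_ok(gray_means, gray_calcs, preset=0.05):
    """Steps 105-106: per-scene linearity |A - A'| / A' and pass/fail.

    gray_means: measured gray means of the mean images, one per scene.
    gray_calcs: calculated gray values from the fitted function.
    The camera is qualified only if every scene's linearity is
    below the preset value.
    """
    A = np.asarray(gray_means, dtype=float)
    Ac = np.asarray(gray_calcs, dtype=float)
    linearity = np.abs(A - Ac) / Ac
    return bool(np.all(linearity < preset)), linearity

ok, lin = camera_linearity_ok([25.0, 45.0, 66.0], [25.0, 45.5, 65.0])
# linearity is roughly [0.0, 0.011, 0.0154]; all below 0.05, so ok is True
```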
Based on the above detection method and referring to fig. 2, an embodiment of the present application further provides a detection system for camera linearity, which includes a first module, a second module, a third module, a fourth module and a fifth module, wherein:
the first module is configured to: carry out image equalization (averaging) processing on groups of images shot under different gray-level scenes to obtain a plurality of mean images, and obtain the gray-level mean of the mean image in each gray-level scene;
the second module is configured to: fit a gray calculation function by using a least square method based on the gray-level mean of each mean image and each gray-level scene;
the third module is configured to: calculate the calculated gray value corresponding to each gray-level scene based on the gray calculation function and each gray-level scene;
the fourth module is configured to: divide, for each gray-level scene, the absolute value of the gray-level difference by the corresponding calculated gray value to obtain the linearity of that scene, the gray-level difference being the gray-level mean of the mean image in that scene minus the corresponding calculated gray value;
the fifth module is configured to: compare each linearity with a preset value, and if each linearity is smaller than the preset value, the linearity of the camera is qualified.
In the description of the present application, it should be noted that terms such as "upper" and "lower" indicate orientations or positional relationships based on those shown in the drawings, are used only for convenience and simplicity of description, and do not indicate or imply that the device or element referred to must have a specific orientation or be constructed and operated in a specific orientation; they should therefore not be construed as limiting the present application. Unless expressly stated or limited otherwise, the terms "mounted", "connected" and "coupled" are to be understood broadly: a connection may be fixed, detachable or integral; mechanical or electrical; direct, indirect through an intermediate medium, or internal between two elements. The specific meanings of the above terms in the present application can be understood by those of ordinary skill in the art as the case may be.
It is noted that, in the present application, relational terms such as "first" and "second", and the like, are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The previous description is only an example of the present application, and is provided to enable any person skilled in the art to understand or implement the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A method for detecting linearity of a camera is characterized by comprising the following steps:
carrying out image equalization processing on a plurality of groups of images shot under different gray level scenes respectively to obtain a plurality of mean value images;
respectively obtaining the gray level average value of the average value image under each gray level scene;
fitting a gray level calculation function by using a least square method based on the gray level average value of each average value image and each gray level scene;
calculating gray scale calculation values corresponding to the gray scale scenes based on the gray scale calculation function and the gray scale scenes;
dividing, for each gray level scene, the absolute value of the gray level difference by the corresponding gray level calculated value to obtain the linearity corresponding to that gray level scene, wherein the gray level difference is obtained by subtracting the corresponding gray level calculated value from the gray level mean of the mean image in that gray level scene;
And comparing each linearity with a preset value, and if the linearity is less than the preset value, determining that the camera linearity is qualified.
2. The method for detecting the linearity of a camera according to claim 1, wherein the detection method further comprises a step of setting different gray level scenes, the step comprising:
keeping the illumination intensity constant and adjusting the exposure time to obtain different gray level scenes.
3. The method for detecting the linearity of a camera according to claim 2, wherein fitting a gray level calculation function by a least square method based on the gray level mean value of each mean image and each gray level scene comprises the following steps:
associating the gray level mean value of each mean image with the exposure time of the corresponding gray level scene to obtain a fitting point;
and fitting the fitting points by a least square method to obtain a gray level calculation function of gray value with respect to exposure time.
4. The method for detecting the linearity of a camera according to claim 3, wherein calculating the gray level calculation value corresponding to each gray level scene based on the gray level calculation function and each gray level scene comprises the following step:
substituting the exposure time corresponding to each gray level scene into the gray level calculation function to obtain the gray level calculation value corresponding to that gray level scene.
5. The method for detecting the linearity of a camera according to claim 1, wherein the detection method further comprises a step of setting different gray level scenes, the step comprising:
keeping the exposure time constant and adjusting the illumination intensity to obtain different gray level scenes.
6. The method for detecting the linearity of a camera according to claim 5, wherein fitting a gray level calculation function by a least square method based on the gray level mean value of each mean image and each gray level scene comprises:
associating the gray level mean value of each mean image with the illumination intensity of the corresponding gray level scene to obtain a fitting point;
and fitting the fitting points by a least square method to obtain a gray level calculation function of gray value with respect to illumination intensity.
7. The method for detecting the linearity of a camera according to claim 6, wherein calculating the gray level calculation value corresponding to each gray level scene based on the gray level calculation function and each gray level scene comprises:
substituting the illumination intensity corresponding to each gray level scene into the gray level calculation function to obtain the gray level calculation value corresponding to that gray level scene.
8. The method for detecting the linearity of a camera according to claim 1, wherein obtaining the gray level mean value of the mean image under each gray level scene comprises the following step:
taking the central area of the mean image as a bright area, obtaining the gray level mean value of all pixel points in the bright area, and taking this value as the gray level mean value of the mean image.
9. The method for detecting the linearity of a camera according to claim 1, wherein obtaining the gray level mean value of the mean image under each gray level scene comprises the following steps:
comparing the gray value of each pixel point in the mean image with a gray threshold, and selecting the pixel points whose gray values are greater than the gray threshold as good points;
taking the area containing all the good points, as a whole, as a bright area;
and obtaining the gray level mean value of all pixel points in the bright area, and taking this value as the gray level mean value of the mean image.
10. A system for detecting the linearity of a camera, characterized by comprising:
a first module configured to: perform image averaging processing on a plurality of groups of images captured under different gray level scenes, respectively, to obtain a plurality of mean images; and obtain the gray level mean value of the mean image under each gray level scene, respectively;
a second module configured to: fit a gray level calculation function by a least square method based on the gray level mean value of each mean image and each gray level scene;
a third module configured to: calculate the gray level calculation value corresponding to each gray level scene based on the gray level calculation function and each gray level scene;
a fourth module configured to: divide the absolute value of the gray level difference value of each gray level scene by the corresponding gray level calculation value to obtain the linearity corresponding to each gray level scene, wherein the gray level difference value is obtained by subtracting the corresponding gray level calculation value from the gray level mean value of the mean image under that gray level scene;
and a fifth module configured to: compare each linearity with a preset value, and if each linearity is less than the preset value, determine that the linearity of the camera is qualified.
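For a reader implementing the check of claims 1 to 4, the claimed procedure can be sketched as follows. This is a minimal illustration only, not the patented implementation: the function name, the exposure times, the synthetic frame data, and the 2% threshold are all made-up example values, and the claim does not mandate a linear (degree-1) fit form beyond what least-squares fitting of gray value versus exposure time implies.

```python
import numpy as np

def camera_linearity_check(exposure_times, frames_per_scene, threshold=0.02):
    """Hypothetical sketch of claims 1-4: average the frames of each gray
    level scene, fit gray value vs. exposure time by least squares, and
    compare the per-scene relative deviation (linearity) to a preset value."""
    # Steps 1-2: average the group of images for each scene to get a mean
    # image, then take the mean gray value of that mean image.
    gray_means = np.array(
        [np.mean(np.mean(frames, axis=0)) for frames in frames_per_scene]
    )
    t = np.asarray(exposure_times, dtype=float)

    # Step 3: least-squares fit of gray value as a function of exposure time.
    slope, intercept = np.polyfit(t, gray_means, deg=1)

    # Step 4: substitute each exposure time to get the calculated gray value.
    gray_calc = slope * t + intercept

    # Step 5: linearity = |measured mean - calculated| / calculated.
    linearity = np.abs(gray_means - gray_calc) / gray_calc

    # Step 6: qualified only if every per-scene linearity is below the preset value.
    return linearity, bool(np.all(linearity < threshold))
```

With synthetic frames whose gray values grow exactly linearly with exposure time, every per-scene linearity comes out near zero and the check reports the camera as qualified; real sensor data would show small nonzero deviations that the preset value bounds.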
CN202210662730.2A 2022-06-13 2022-06-13 Method and system for detecting linearity of camera Pending CN114757946A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210662730.2A CN114757946A (en) 2022-06-13 2022-06-13 Method and system for detecting linearity of camera

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210662730.2A CN114757946A (en) 2022-06-13 2022-06-13 Method and system for detecting linearity of camera

Publications (1)

Publication Number Publication Date
CN114757946A 2022-07-15

Family

ID=82336763

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210662730.2A Pending CN114757946A (en) 2022-06-13 2022-06-13 Method and system for detecting linearity of camera

Country Status (1)

Country Link
CN (1) CN114757946A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007293686A (en) * 2006-04-26 2007-11-08 Konica Minolta Photo Imaging Inc Imaging apparatus, image processing apparatus, image processing method and image processing program
CN106973240A (en) * 2017-03-23 2017-07-21 宁波诺丁汉大学 Realize the digital camera imaging method that high dynamic range images high definition is shown
CN107403177A (en) * 2017-05-27 2017-11-28 延锋伟世通汽车电子有限公司 Brightness measurement method based on industrial camera
CN113873222A (en) * 2021-08-30 2021-12-31 卡莱特云科技股份有限公司 Industrial camera linearity correction method and device


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Zhang Xianmin, Chen Zhong (eds.): "Introduction to Mechanical Engineering" (《机械工程概论》), 30 November 2018 *

Similar Documents

Publication Publication Date Title
US7535511B2 (en) Automatic exposure control method and automatic exposure compensation apparatus
US8098304B2 (en) Dynamic identification and correction of defective pixels
US7085430B2 (en) Correcting geometric distortion in a digitally captured image
US8879869B2 (en) Image defect map creation using batches of digital images
EP1528797B1 (en) Image processing apparatus, image-taking system and image processing method
US7315658B2 (en) Digital camera
US20150317774A1 (en) Auto-focus image system
CN102577355B (en) The method of the defect of predicted picture acquisition system and related system thereof
CN100515041C (en) Method for automatically controlling exposure and device for automatically compensating exposure
WO2009147821A1 (en) Resin material detection testing device and memory recording medium
WO2010032409A1 (en) Image processing device, imaging device, evaluation device, image processing method, and optical system evaluation method
WO2007095483A2 (en) Detection and removal of blemishes in digital images utilizing original images of defocused scenes
CN102724405A (en) Method and device for automatic exposure compensation of backlit scenes in video imaging system
CN114757853B (en) Method and system for acquiring flat field correction function and flat field correction method and system
US8717465B2 (en) Blemish detection sytem and method
CN111638042A (en) DLP optical characteristic test analysis method
CN114757946A (en) Method and system for detecting linearity of camera
JP2005309651A (en) Shading processor and shading processing method for imaging element and imaging device
US7636494B2 (en) Method and apparatus for correcting pixel image signal intensity
US20100254624A1 (en) Method of correcting image distortion
JP2011114760A (en) Method for inspecting camera module
JP2009053019A (en) Chart, system and method for testing resolution
JP6561479B2 (en) Imaging device capable of color shading correction
Rodricks et al. First principles' imaging performance evaluation of CCD-and CMOS-based digital camera systems
CN114283170B (en) Light spot extraction method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20220715