CN114205580A - Method and device for evaluating anti-shake effect of camera module

Info

Publication number: CN114205580A
Application number: CN202111494512.4A
Authority: CN (China)
Inventors: 陈志恒 (Chen Zhiheng), 吴江波 (Wu Jiangbo)
Assignee: Shanghai Awinic Technology Co Ltd
Legal status: Pending
Other languages: Chinese (zh)

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 17/00: Diagnosis, testing or measuring for television systems or their details
    • H04N 17/002: Diagnosis, testing or measuring for television cameras
    • H04N 23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60: Control of cameras or camera modules
    • H04N 23/68: Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations

Abstract

The application discloses a method and a device for evaluating the anti-shake effect of a camera module. First, in a darkroom environment, three images of a test target shot by the camera module are acquired: one in a static state, one while shaking along a first direction with the anti-shake function turned on, and one while shaking along the first direction with the anti-shake function turned off. The three images are then binarized and the image area of the test target in each is determined. Next, the length of the image area of the test target along the first direction is calculated for each image. Finally, based on these three lengths, the shake suppression ratio of the camera module is obtained and used to evaluate its anti-shake effect.

Description

Method and device for evaluating anti-shake effect of camera module
Technical Field
The application relates to the technical field of image processing, in particular to a method and a device for evaluating an anti-shake effect of a camera module.
Background
At present, smart phones are widely used, and the photographing function is one of their most important features: photographing quality strongly shapes users' subjective impressions and directly affects a phone's market sales. Mobile phone manufacturers and camera module manufacturers therefore continuously refine the photographing quality of camera modules, and the anti-shake function of a camera module is a necessary precondition for taking high-quality photos. Before a camera module or a smart phone leaves the factory, the anti-shake effect of the camera module generally needs to be evaluated, and the quality of the anti-shake evaluation scheme directly affects production-line quality management and quality control at the various mobile phone and camera module manufacturers.
In the prior art, the anti-shake effect of a camera module is usually judged by human visual perception. Such judgment lacks an objective evaluation basis, so the result is not accurate enough, and the process is time-consuming and labor-intensive. An objective, accurate, simple and effective method and device for evaluating the anti-shake effect of a camera module are therefore urgently needed.
Disclosure of Invention
In order to solve the technical problem, embodiments of the present application provide a method and an apparatus for evaluating an anti-shake effect of a camera module, so as to provide an objective, accurate, simple and effective method and apparatus for evaluating an anti-shake effect of a camera module.
In order to achieve the above object, the embodiments of the present application provide the following technical solutions:
a method for evaluating the anti-shake effect of a camera module comprises the following steps:
acquiring a first image, a second image and a third image of a test target shot by a camera module in a darkroom environment, wherein the first image is an image shot by the camera module in a static state, and the second image and the third image are respectively images shot by the camera module in a state of shaking along a first direction when an anti-shaking function of the camera module is turned on and turned off;
respectively carrying out binarization processing on the first image, the second image and the third image, and determining image areas of test targets in the first image, the second image and the third image;
respectively calculating the lengths of the image areas of the test target in the first image, the second image and the third image along the first direction, and sequentially setting the lengths as a first length, a second length and a third length;
and obtaining the shake suppression ratio of the camera module based on the first length, the second length and the third length, and evaluating the anti-shake effect of the camera module.
Optionally, an included angle exists between the direction of the test target image in the first image, the second image, and the third image and the direction of the actual test target, and before performing binarization processing on the first image, the second image, and the third image, the method further includes:
and respectively rotating the first image, the second image and the third image to enable the direction of the test target image in the first image, the second image and the third image to be consistent with the direction of an actual test target.
Optionally, respectively rotating the first image, the second image, and the third image so that the direction of the test target image in the first image, the second image, and the third image is consistent with the direction of the actual test target includes:
respectively setting a pixel point in the first image, the second image and the third image as a rotating shaft, and taking the pixel point as a reference point to obtain position vectors of other pixel points except the reference point in the first image, the second image and the third image;
setting a rotation matrix based on an included angle between the direction of the test target image in the first image, the second image and the third image and the direction of an actual test target;
multiplying the position vectors of other pixel points except the reference point in the first image, the second image and the third image by the rotation matrix respectively to obtain the rotated position vectors of other pixel points except the reference point in the first image, the second image and the third image;
obtaining the positions of the pixel points of the first image, the second image and the third image after rotation based on the rotated position vectors of the pixel points except the reference point in the first image, the second image and the third image and the positions of the reference point;
assigning the pixel values corresponding to the pixel point positions of the first image, the second image and the third image before rotation to the pixel values corresponding to the pixel point positions of the first image, the second image and the third image after rotation, and completing the rotation of the first image, the second image and the third image so that the direction of the test target image in the first image, the second image and the third image is consistent with the direction of an actual test target.
Optionally, performing binarization processing on the first image, the second image, and the third image, respectively, and determining image areas of test targets in the first image, the second image, and the third image includes:
performing graying processing on the first image, the second image and the third image respectively to obtain a first grayscale image, a second grayscale image and a third grayscale image in sequence;
respectively carrying out binarization processing on the first gray level image, the second gray level image and the third gray level image so as to divide each pixel point in the first gray level image, the second gray level image and the third gray level image into a first type pixel point and a second type pixel point, wherein the pixel value of the first type pixel point is set to be 255, and the pixel value of the second type pixel point is set to be 0;
and based on the positions of all pixel points in the first image, the second image and the third image, selecting the maximum connected region of the first type pixel points in the first image, the second image and the third image, and determining the selected maximum connected region as the image region of the test target in the first image, the second image and the third image.
Optionally, performing binarization processing on the first grayscale image, the second grayscale image and the third grayscale image respectively, so as to divide each pixel point in the first grayscale image, the second grayscale image and the third grayscale image into a first type of pixel point and a second type of pixel point, includes:
selecting at least two clustering centers for clustering by taking the pixel values of all the pixel points in the first gray level image, the second gray level image and the third gray level image as clustering data, and dividing the pixel values of all the pixel points in the first gray level image, the second gray level image and the third gray level image into a plurality of clustering clusters;
and integrating all cluster clusters containing pixel values larger than a preset threshold value into a first class cluster, and integrating the rest cluster clusters into a second class cluster, wherein pixel points corresponding to the pixel values in the first class cluster belong to first class pixel points, and pixel points corresponding to the pixel values in the second class cluster belong to second class pixel points.
Optionally, calculating lengths of the image areas of the test target in the first image, the second image, and the third image along the first direction, and sequentially setting the lengths as the first length, the second length, and the third length includes:
and respectively calculating the number of pixel points spanned in the first direction by the image areas of the test target in the first image, the second image and the third image, taking these numbers as the lengths of the image areas of the test target along the first direction, and setting them in sequence as the first length, the second length and the third length.
Optionally, the respectively calculating the number of pixel points of the image areas of the test target in the first direction in the first image, the second image, and the third image includes:
and respectively calculating the number of pixel points in the longest distance of the image areas of the test target in the first image, the second image and the third image along the first direction, and taking the number as the number of the pixel points in the first direction of the image areas of the test target in the first image, the second image and the third image.
Optionally, the respectively calculating the number of pixel points of the image areas of the test target in the first direction in the first image, the second image, and the third image includes:
and respectively calculating the number of pixel points included in the image areas of the test target in the first image, the second image and the third image in a unit distance in a second direction, wherein the number of the pixel points is used as the number of the pixel points of the image areas of the test target in the first image, the second image and the third image along the first direction, and the second direction is perpendicular to the first direction.
Optionally, obtaining the shake suppression ratio of the camera module based on the first length, the second length and the third length includes:
obtaining the ratio of the difference between the third length and the first length to the difference between the second length and the first length based on the first length, the second length and the third length;
and obtaining the shake suppression ratio of the camera module based on the ratio of the difference between the third length and the first length to the difference between the second length and the first length.
An evaluation device for camera module anti-shake effect, comprising:
an image acquisition unit, configured to acquire a first image, a second image and a third image of a test target shot by a camera module in a darkroom environment, wherein the first image is shot by the camera module in a static state, and the second image and the third image are respectively shot by the camera module, in a state of shaking along a first direction, with the anti-shake function of the camera module turned on and turned off;
an image processing unit, which respectively performs binarization processing on the first image, the second image and the third image, and determines image areas of test targets in the first image, the second image and the third image;
a calculating unit, which calculates lengths of image areas of the test target in the first image, the second image and the third image along the first direction, and sets the lengths as a first length, a second length and a third length in sequence;
and a processing unit, configured to obtain the shake suppression ratio of the camera module based on the first length, the second length and the third length and evaluate the anti-shake effect of the camera module.
Optionally, if an included angle exists between the direction of the test target image in the first image, the second image and the third image and the direction of the actual test target, the apparatus further includes, between the image acquisition unit and the image processing unit:
and the image correction unit is used for respectively rotating the first image, the second image and the third image so that the direction of a test target image in the first image, the second image and the third image is consistent with the direction of an actual test target.
Compared with the prior art, the technical scheme has the following advantages:
the evaluation method for the anti-shake effect of the camera module provided by the embodiment of the application comprises the following steps: acquiring a first image, a second image and a third image of a test target shot by a camera module in a darkroom environment, wherein the first image is an image shot by the camera module in a static state, and the second image and the third image are respectively images shot by the camera module in a state of shaking along a first direction when an anti-shaking function of the camera module is turned on and turned off; respectively carrying out binarization processing on the first image, the second image and the third image, and determining image areas of test targets in the first image, the second image and the third image; respectively calculating the lengths of the image areas of the test target in the first image, the second image and the third image along the first direction, and sequentially setting the lengths as a first length, a second length and a third length; and obtaining the shake suppression ratio of the camera module based on the first length, the second length and the third length, and evaluating the anti-shake effect of the camera module. Therefore, the method obtains the shake suppression ratio of the camera module by performing binarization processing on the first image, the second image and the third image and comparing the lengths of the image areas of the test target in the first image, the second image and the third image along the shake direction of the camera module, so as to evaluate the anti-shake effect of the camera module.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the present application, and that those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic flow chart illustrating a method for evaluating an anti-shake effect of a camera module according to an embodiment of the present application;
FIG. 2 is an image of a central cross test target taken by the camera module;
fig. 3 is a schematic flowchart of a method for evaluating an anti-shake effect of a camera module according to another embodiment of the present application;
fig. 4(a) is an image of a central cross test target photographed by the camera module at an arrangement position of 45 degrees;
FIGS. 4(b) and 4(c) are images obtained by rotating the image of FIG. 4(a) counterclockwise by 45 degrees;
fig. 5(a) is an image of a circular test target photographed by a camera module in a still state;
fig. 5(b) is an image of the circular test target photographed by the camera module in a state of shaking in the first direction and with its anti-shake function turned off;
fig. 6(a) is a schematic diagram of an image area where a rectangular frame area surrounds a central cross test target in an image shot by a camera module for the central cross test target;
fig. 6(b) is a schematic diagram of the rectangular frame region shown in fig. 6(a) being cut into 3 sub-rectangular frame regions on average in the second direction;
fig. 7 is a schematic structural diagram of an apparatus for evaluating an anti-shake effect of a camera module according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of an apparatus for evaluating an anti-shake effect of a camera module according to another embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present application, but the present application may be practiced in other ways than those described herein, and it will be apparent to those of ordinary skill in the art that the present application is not limited to the specific embodiments disclosed below.
Next, the present application will be described in detail with reference to the drawings. For convenience of illustration, the cross-sectional views illustrating the structure of the device are not partially enlarged to a common scale, and the drawings are only examples, which should not limit the scope of protection of the present application. In addition, the three-dimensional dimensions of length, width and depth should be included in actual fabrication.
As described in the background section, there is a need for an objective, accurate, simple and effective method and apparatus for evaluating the anti-shake effect of a camera module.
In view of this, an embodiment of the present application provides a method for evaluating an anti-shake effect of a camera module, as shown in fig. 1, the method includes:
S100: acquiring a first image, a second image and a third image of a test target shot by a camera module in a darkroom environment, wherein the first image is an image of the test target shot by the camera module in a static state, and the second image and the third image are images of the test target shot by the camera module, in a state of shaking along a first direction, with the anti-shake function of the camera module turned on and turned off, respectively.
Specifically, in an embodiment of the present application, acquiring a first image, a second image, and a third image of a test target shot by a camera module in a darkroom environment includes:
s101: fixing the camera module on a vibration table;
S102: shooting the test target with the camera module in a static state to obtain the first image;
S103: with the camera module driven by the vibration table to shake along the first direction at a fixed amplitude and vibration frequency, turning on the anti-shake function of the camera module and shooting the test target to obtain the second image;
S104: with the camera module driven by the vibration table to shake along the first direction at a fixed amplitude and vibration frequency, turning off the anti-shake function of the camera module and shooting the test target to obtain the third image.
It should be noted that the order in which the camera module shoots the first image, the second image and the third image is not limited in the present application, as long as all three images are obtained.
It should be further noted that the first image is an image of the test target taken with the camera module static, so the image of the test target in the first image is substantially the same as the actual test target; the first image therefore serves as the reference standard image. The third image is taken while the camera module shakes along the first direction with its anti-shake function turned off, so the image of the test target in the third image is elongated in the first direction compared with the actual test target and shows larger blur. The second image is taken while the camera module shakes along the first direction with its anti-shake function turned on, so the image of the test target in the second image is also elongated in the first direction, i.e., some blur remains; but because the anti-shake function is on, it is elongated less than the image of the test target in the third image. The better the anti-shake effect of the camera module, the closer the image of the test target in the second image is to the image of the test target in the first image, i.e., to the actual test target.
It should be further noted that, to facilitate the subsequent image processing of the first image, the second image and the third image and the measurement of the lengths of the image areas of the test target along the first direction, all three images are shot by the camera module in a darkroom environment and are presented as grayscale images. As shown in fig. 2, in the first image, the second image and the third image, the image portion of the test target is the bright part of the image, and the remaining background portion is the dark part.
Optionally, the camera module may be a mobile phone camera module or another type of camera module.
Optionally, the anti-shake function of the camera module is Optical Image Stabilization (OIS).
It can be understood that, when the camera module shoots the test target, if the camera module is placed at 90 degrees, that is, facing the test target squarely, the direction of the test target image in the shot image is consistent with the direction of the actual test target. If the placement angle of the camera module is not 90 degrees, an included angle exists between the direction of the test target image in the shot image and the direction of the actual test target, and correction is needed. Therefore, on the basis of the above embodiments, in an embodiment of the present application, as shown in fig. 3, when an included angle exists between the direction of the test target image in the first image, the second image and the third image and the direction of the actual test target, before the binarization processing of the first image, the second image and the third image, the method further includes:
s110: and respectively rotating the first image, the second image and the third image to enable the direction of the test target image in the first image, the second image and the third image to be consistent with the direction of an actual test target.
Specifically, in an embodiment of the present application, rotating the first image, the second image, and the third image respectively so that the direction of the test target image in the first image, the second image, and the third image is consistent with the direction of the actual test target includes:
s111: and respectively setting one pixel point in the first image, the second image and the third image as a rotating shaft, and taking the pixel point as a reference point to obtain the position vectors of other pixel points except the reference point in the first image, the second image and the third image.
It should be noted that, because the first image, the second image and the third image are shot by the camera module in a darkroom environment, they are grayscale images stored in 3 channels (i.e., the three color channels red R, green G and blue B); only one of the 3 channels carries pixel values, and the pixel values of the other two channels are all zero. In other words, the first image, the second image and the third image effectively occupy only a single channel of stored pixel values. In a single channel an image is stored as a two-dimensional matrix, i.e., only the position of each pixel point in the image is known, while the essence of image rotation is vector rotation. Therefore, when the first image, the second image and the third image are rotated, one pixel point is set in each image as the rotation axis; the position of this reference point is subtracted from the position of every other pixel point to obtain that pixel point's position vector, so that the position vector of each other pixel point in the image can be regarded as a vector starting from the reference point.
Specifically, take the test target being a central cross and the camera module being placed at 45 degrees as an example, as shown in fig. 4(a), which corresponds to the image before rotation. Set pixel point C in the image as the rotation axis, with position c. Taking C as the reference point, the position vector of every pixel point other than C can be obtained: for example, for pixel point P with position p, the position vector is v = p - c = [x; y].
S112: and setting a rotation matrix based on the included angle between the direction of the test target image in the first image, the second image and the third image and the direction of the actual test target.
Specifically, if the included angle between the direction of the test target image in the first image, the second image and the third image and the direction of the actual test target is a, the first image, the second image and the third image need to be rotated counterclockwise by the angle a so that the direction of the test target image in them is consistent with the direction of the actual test target. The rotation matrix R can then be expressed as:

    R = | cos a  -sin a |
        | sin a   cos a |    (1)
it should be noted that, the sequence of executing steps S111 and S112 is not limited in this application, and is determined as the case may be.
S113: multiplying the position vectors of other pixel points except the reference point in the first image, the second image and the third image by the rotation matrix respectively to obtain the rotated position vectors of other pixel points except the reference point in the first image, the second image and the third image;
s114: and obtaining the positions of the pixel points of the first image, the second image and the third image after rotation based on the rotated position vectors of the pixel points except the reference point in the first image, the second image and the third image and the positions of the reference point.
Specifically, the positions of a pixel point before and after image rotation are described taking the P pixel point shown in fig. 4(a) and 4(b) as an example, where fig. 4(a) corresponds to the image before rotation and fig. 4(b) corresponds to the image after rotation. As above, before rotation the position of pixel point P is p, the reference point is pixel point C with position c, so the position vector of P is v = p - c = [x; y]. After rotation, the position vector v' of pixel point P is:
v'=R*v (2)
after rotation, the position pp of the P pixel point is:
pp=c+R*(p-c) (3)
s115: assigning the pixel values corresponding to the pixel point positions of the first image, the second image and the third image before rotation to the pixel values corresponding to the pixel point positions of the first image, the second image and the third image after rotation, and completing the rotation of the first image, the second image and the third image so that the direction of the test target image in the first image, the second image and the third image is consistent with the direction of an actual test target.
For example, as shown in fig. 4(a) and 4(b), the pixel value at position p before rotation (the P pixel point in fig. 4(a)) in each of the first image, the second image and the third image is assigned to position pp after rotation (the P pixel point in fig. 4(b)). Proceeding in the same way for every pixel point, the pixel values corresponding to the pixel point positions before rotation are assigned to the pixel values corresponding to the pixel point positions after rotation, which completes the rotation of the first image, the second image and the third image and makes the direction of the test target image in them consistent with the direction of the actual test target.
It should be noted that the pixel value of a pixel point indicates the color depth of that pixel point; for a grayscale image, each pixel value lies between 0 and 255, where 0 is black, 255 is white, and intermediate values are different levels of gray.
It should be noted that rotating the first image, the second image and the third image means rotating each image as a whole. Before rotation, each image is an upright rectangle containing an image portion of the test target (the bright part) and a background portion (the dark part), as shown in fig. 4(a); after rotation the image is generally no longer upright, so the rotated image needs to be padded back into an upright rectangle, as shown in fig. 4(b). The pixel value of every pixel point in the padded region is assigned 0, giving the image shown in fig. 4(c); the padded region is thus also a dark part of the image and does not affect the subsequent image processing.
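A minimal NumPy sketch of the forward-mapping rotation of steps S111 to S115 follows (an illustration, not the application's implementation; it assumes a single-channel image array and takes the angle a in radians, counterclockwise, with the padded output preset to 0 as described above). Forward mapping can leave unassigned holes in the output; a production implementation would typically use inverse mapping with interpolation, e.g. cv2.warpAffine:

    import numpy as np

    def rotate_about_pixel(img, c_row, c_col, a):
        # rotation matrix R of equation (1)
        R = np.array([[np.cos(a), -np.sin(a)],
                      [np.sin(a),  np.cos(a)]])
        h, w = img.shape
        out = np.zeros_like(img)                   # padded region preset to pixel value 0
        rows, cols = np.indices((h, w))
        # position vectors v = p - c of all pixel points relative to reference point C
        v = np.stack([(rows - c_row).ravel(), (cols - c_col).ravel()])
        vr = R @ v                                 # v' = R * v, equation (2)
        pp_r = np.rint(vr[0] + c_row).astype(int)  # pp = c + R*(p - c), equation (3)
        pp_c = np.rint(vr[1] + c_col).astype(int)
        ok = (pp_r >= 0) & (pp_r < h) & (pp_c >= 0) & (pp_c < w)
        # assign pre-rotation pixel values to the post-rotation positions
        out[pp_r[ok], pp_c[ok]] = img.ravel()[ok]
        return out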
S200: and respectively carrying out binarization processing on the first image, the second image and the third image, and determining image areas of test targets in the first image, the second image and the third image.
It should be noted that an image shot by the camera module in a shake state with the anti-shake function turned off contains larger blur, and this blur tends to vary continuously across the gray levels of the image. In order to effectively segment the boundary between the bright part and the dark part of the image, it is therefore necessary to binarize the first image, the second image and the third image and determine the image areas of the test target in them, and then calculate the lengths of those image areas along the shake direction of the camera module so as to obtain the shake suppression ratio of the camera module and evaluate its anti-shake effect.
Specifically, in an embodiment of the present application, the performing binarization processing on the first image, the second image, and the third image, respectively, and determining the image areas of the test targets in the first image, the second image, and the third image includes:
s210: and carrying out graying processing on the first image, the second image and the third image respectively to obtain a first grayscale image, a second grayscale image and a third grayscale image in sequence.
It should be noted that, although the first image, the second image and the third image are shot by the camera module in a darkroom environment and are presented as grayscale images, they are still stored in 3 channels. To facilitate image processing, graying is performed on the three images so that the 3-channel pixel point information is converted into single-channel pixel point information, where pixel point information comprises the positions and pixel values of the pixel points.
It should be further noted that the pixel value of each pixel point in the first grayscale image, the second grayscale image and the third grayscale image is also called its gray value; it represents the color depth of the pixel point in the grayscale image and generally ranges from 0 to 255, where white is 255, black is 0, and intermediate values are different levels of gray.
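For example (an illustrative assumption, not prescribed by the application; the file name is a placeholder), since only one of the three stored channels carries nonzero pixel values, a channel-wise maximum recovers the single-channel grayscale image exactly:

    import cv2

    img = cv2.imread("first.png")   # 3-channel storage
    gray1 = img.max(axis=2)         # the two all-zero channels drop out, leaving the gray values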
S220: and respectively carrying out binarization processing on the first gray level image, the second gray level image and the third gray level image so as to divide each pixel point in the first gray level image, the second gray level image and the third gray level image into a first type pixel point and a second type pixel point, wherein the pixel value of the first type pixel point is set to be 255, and the pixel value of the second type pixel point is set to be 0.
It should be noted that the pixel values of the pixel points in the first grayscale image, the second grayscale image and the third grayscale image are values between 0 and 255. Binarizing the three grayscale images means selecting a proper threshold among the 256 brightness levels, setting every pixel point whose pixel value is larger than the threshold to 255 and every pixel point whose pixel value is smaller than the threshold to 0, so as to obtain binarized images that still reflect the overall and local features of the originals. It can be seen that choosing a proper threshold is the key to reasonable binarization of the three grayscale images, and the binarized images are in turn the key to the subsequent calculation of the shake suppression ratio of the camera module.
Specifically, in an embodiment of the present application, the performing binarization processing on the first gray scale image, the second gray scale image, and the third gray scale image respectively to divide each pixel point in the first gray scale image, the second gray scale image, and the third gray scale image into a first type of pixel point and a second type of pixel point includes:
s221: selecting at least two clustering centers for clustering by taking the pixel values of all the pixel points in the first gray level image, the second gray level image and the third gray level image as clustering data, and dividing the pixel values of all the pixel points in the first gray level image, the second gray level image and the third gray level image into a plurality of clustering clusters;
s222: and integrating all cluster clusters containing pixel values larger than a preset threshold value into a first class cluster, and integrating the rest cluster clusters into a second class cluster, wherein pixel points corresponding to the pixel values in the first class cluster belong to first class pixel points, and pixel points corresponding to the pixel values in the second class cluster belong to second class pixel points.
Therefore, in this embodiment, step S221 divides the pixel values of the pixel points in the first grayscale image, the second grayscale image and the third grayscale image into a plurality of cluster clusters, i.e., the pixel values are first clustered so that similar pixel values are assigned to the same cluster. Then, in step S222, given a preset threshold, the clusters containing pixel values larger than the threshold are merged into a first class cluster and the remaining clusters into a second class cluster, so that the pixel points corresponding to the pixel values in the first class cluster are first-type pixel points and those corresponding to the pixel values in the second class cluster are second-type pixel points. The pixel value of each first-type pixel point is set to 255, corresponding to the bright part of the image, i.e., the image area of the test target, and the pixel value of each second-type pixel point is set to 0, corresponding to the dark part of the image.
Optionally, a K-means clustering algorithm may be used to cluster the pixel values of the pixel points in the first grayscale image, the second grayscale image and the third grayscale image.
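A sketch of the clustering-based binarization of steps S221 and S222 is given below (assuming scikit-learn is available; the number of clusters k and the preset threshold are illustrative values, not fixed by the application):

    import numpy as np
    from sklearn.cluster import KMeans

    def binarize_by_clustering(gray, k=3, threshold=128):
        vals = gray.reshape(-1, 1).astype(np.float64)       # pixel values as clustering data
        labels = KMeans(n_clusters=k, n_init=10).fit_predict(vals)
        # a cluster joins the first class cluster if it contains any pixel
        # value larger than the preset threshold (step S222)
        first_class = [c for c in range(k) if vals[labels == c].max() > threshold]
        mask = np.isin(labels, first_class).reshape(gray.shape)
        return np.where(mask, 255, 0).astype(np.uint8)      # first type -> 255, second type -> 0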
S230: and based on the positions of all pixel points in the first image, the second image and the third image, selecting the maximum connected region of the first type pixel points in the first image, the second image and the third image, and determining the selected maximum connected region as the image region of the test target in the first image, the second image and the third image.
It should be noted that the first image, the second image and the third image may contain noise points whose pixel values are larger than the preset threshold but which do not belong to the image area of the test target. These noise points also fall into the first type of pixel points, but they are usually discrete points located at the boundary of the image area of the test target. Therefore, in step S230, based on the positions of the pixel points in the three images, the maximum connected region of first-type pixel points in each image is selected and determined to be the image area of the test target in that image, which excludes the noise points that have pixel values larger than the preset threshold but do not form part of the image area of the test target.
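Step S230 can be sketched as follows (assuming OpenCV; keeping only the largest connected region of first-type pixel points removes the discrete noise points discussed above):

    import cv2
    import numpy as np

    def largest_region(binary):
        n, labels, stats, _ = cv2.connectedComponentsWithStats(binary, connectivity=8)
        largest = 1 + int(np.argmax(stats[1:, cv2.CC_STAT_AREA]))  # label 0 is the background
        return np.where(labels == largest, 255, 0).astype(np.uint8)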
S300: and respectively calculating the lengths of the image areas of the test target in the first image, the second image and the third image along the first direction, and sequentially setting the lengths as a first length, a second length and a third length.
Specifically, in an embodiment of the present application, calculating lengths of the image areas of the test target in the first direction in the first image, the second image, and the third image, respectively, and sequentially setting the lengths as a first length, a second length, and a third length includes:
and respectively calculating the number of pixel points spanned in the first direction by the image areas of the test target in the first image, the second image and the third image, taking these numbers as the lengths of the image areas of the test target along the first direction, and setting them in sequence as the first length, the second length and the third length.
On the basis of the foregoing embodiment, optionally, in an embodiment of the present application, calculating the number of pixels in the first direction in the image area of the test target in the first image, the second image, and the third image respectively includes:
and respectively calculating the number of pixel points in the longest distance of the image areas of the test target in the first image, the second image and the third image along the first direction, and taking the number as the number of the pixel points in the first direction of the image areas of the test target in the first image, the second image and the third image.
Specifically, take the test target being a circle as an example. In the first image, the image area of the test target is a circle, as shown in fig. 5(a). In the third image, because the camera module shakes along the first direction with its anti-shake function turned off, the image area of the test target is elongated along the first direction and is approximately an ellipse, as shown in fig. 5(b). A circumscribed circle of the image area of the test target can then be obtained, and the diameter of this circumscribed circle is the longest distance Lmax of the image area of the test target along the first direction in the third image. The number of pixel points within this longest distance can then be calculated and taken as the number of pixel points of the image area of the test target in the first direction in the third image. Similarly, the circumscribed circle of the image area of the test target in the second image can be obtained, and the number of pixel points within the diameter of that circumscribed circle is calculated as the number of pixel points within the longest distance along the first direction of the image area of the test target in the second image.
If the test target is in the shape of a central cross or a rectangle, as shown in fig. 4(c), the number of pixel points within the longest distance along the first direction of the image areas of the test target in the first image, the second image and the third image may be calculated directly.
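Both measurements just described can be sketched as follows (assuming OpenCV/NumPy; the target region is the binarized image area of the test target, and the first direction is taken as horizontal for illustration):

    import cv2
    import numpy as np

    def longest_extent(region, horizontal=True):
        # pixel count over the longest run of the target region along the first direction
        ys, xs = np.nonzero(region)
        return int(xs.max() - xs.min() + 1) if horizontal else int(ys.max() - ys.min() + 1)

    def circumscribed_diameter(region):
        # for circular/elliptical targets: diameter of the circumscribed circle ~ Lmax
        pts = cv2.findNonZero(region)
        _, radius = cv2.minEnclosingCircle(pts)
        return 2.0 * radius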
Optionally, in another embodiment of the present application, calculating the number of pixels in the first direction in the image area of the test target in the first image, the second image, and the third image respectively includes:
and respectively calculating the number of pixel points included in the image areas of the test target in the first image, the second image and the third image in a unit distance in a second direction, wherein the number of the pixel points is used as the number of the pixel points of the image areas of the test target in the first image, the second image and the third image along the first direction, and the second direction is perpendicular to the first direction.
It should be noted that, when the test target is a shape such as a central cross or a rectangle, the portions of the image area of the test target at different positions along the second direction are elongated to the same extent by the shake of the camera module in the second image and the third image. Therefore, either the number of pixel points within the longest distance along the first direction, or the number of pixel points contained per unit distance in the second direction, may be calculated and taken as the number of pixel points of the image areas of the test target along the first direction in the first image, the second image and the third image.
When the test target is a shape such as a circle or an ellipse, however, the portions of the image area of the test target at different positions along the second direction are elongated to different extents by the shake of the camera module in the second image and the third image. In that case the number of pixel points contained per unit distance in the second direction is not suitable as the number of pixel points of the image areas of the test target along the first direction; instead, the number of pixel points within the longest distance along the first direction should be calculated and used as the number of pixel points of the image areas of the test target along the first direction in the first image, the second image and the third image.
On the basis of the foregoing embodiment, optionally, in an embodiment of the present application, calculating the number of pixels included in the image area of the test target in the first image, the second image, and the third image in the unit distance in the second direction respectively includes:
s310: selecting a rectangular frame area with a preset size from the first image, the second image and the third image, wherein the rectangular frame area surrounds the image area of the test target, and the width of the rectangular frame area along the second direction is equal to the maximum width of the image area of the test target along the second direction.
Specifically, taking the test target as a center cross as an example, as shown in fig. 6(a), a rectangular frame area with a preset size is selected from the images obtained by binarizing the first image, the second image, and the third image, and it can be seen that the rectangular frame area surrounds the image area of the test target, and the width of the rectangular frame area along the second direction is equal to the maximum width of the image area of the test target along the second direction.
S320: and averagely cutting the rectangular frame areas selected from the first image, the second image and the third image into a plurality of sub-rectangular frame areas in the second direction.
For example, as shown in fig. 6(b), the rectangular frame region selected from the first image, the second image, and the third image is equally divided into 3 sub-rectangular frame regions (1), (2), and (3) in the second direction, and each sub-rectangular frame region has a length L of 2501 and a width W of 301.
S330: and respectively calculating the number of pixel points with the pixel value of 255 in each section of sub-rectangular frame region in the first image, the second image and the third image.
Specifically, pixel points are sequentially traversed in each segment of sub-rectangular frame region in the first image, the second image and the third image, and the number of the pixel points with the pixel value of 255 in each segment of sub-rectangular frame region in the first image, the second image and the third image is counted, that is, the area of the bright region in each segment of sub-rectangular frame region in the first image, the second image and the third image, that is, the image region part corresponding to the test target is counted. For example, as shown in fig. 6(b), the sub-rectangular frame regions (1), (2), and (3) are sequentially traversed by the pixels, and the number of pixels having a pixel value of 255 in the sub-rectangular frame regions (1), (2), and (3) is counted as S1, S2, and S3.
S340: obtaining an average value of the number of pixels with pixel values of 255 contained in each segment of sub-rectangular frame region in the first image, the second image and the third image based on the number of pixels with pixel values of 255 contained in each segment of sub-rectangular frame region in the first image, the second image and the third image;
s350: dividing the average value of the number of pixels with the pixel value of 255 contained in each segment of sub-rectangular frame region in the first image, the second image and the third image by the width of the sub-rectangular frame region in the second direction to obtain the number of pixels contained in the image region of the test target in the first image, the second image and the third image in the unit distance in the second direction.
Specifically, continuing with the example of dividing the rectangular frame region selected from the first image, the second image and the third image into 3 sub-rectangular frame regions (1), (2) and (3) on average in the second direction, where each sub-rectangular frame region has a length L of 2501 and a width W of 301, and the numbers of pixel points with a pixel value of 255 in sub-rectangular frame regions (1), (2) and (3) are denoted S1, S2 and S3 in turn, the number n of pixel points contained per unit distance in the second direction in the image areas of the test target in the first image, the second image and the third image can be expressed as:

    n = ((S1 + S2 + S3) / 3) / W    (4)

As can be seen from formula (4), the number of pixel points contained per unit distance in the second direction in the image areas of the test target is the average number of pixel points along the first direction within a unit distance in the second direction, i.e., the average length of the image areas of the test target along the first direction per unit distance in the second direction.
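Steps S310 to S350 can be sketched as follows (NumPy; taking the vertical axis of the array as the second direction is an assumption for illustration):

    import numpy as np

    def pixels_per_unit_distance(box, n=3):
        # box: the selected rectangular frame region of a binarized image
        subs = np.array_split(box, n, axis=0)             # n sub-rectangular frame regions
        counts = [int((s == 255).sum()) for s in subs]    # S1, S2, ..., Sn
        W = box.shape[0] / n                              # width of one sub-region in the second direction
        return (sum(counts) / n) / W                      # formula (4)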
Of course, optionally, in another embodiment of the present application, the calculating the number of pixels included in the image area of the test target in the first image, the second image, and the third image in the unit distance in the second direction respectively includes:
s360: counting the number of total pixel points contained in the image areas of the test target in the first image, the second image and the third image respectively;
s370: dividing the total number of pixel points included in the image areas of the test targets in the first image, the second image and the third image by the total width of the image areas of the test targets in the first image, the second image and the third image in the second direction to obtain the number of pixel points included in the image areas of the test targets in the first image, the second image and the third image in the unit distance in the second direction.
Specifically, for any one of the first image, the second image and the third image, in step S360 the total number of pixel points contained in the image area of the test target in that image is counted and denoted S0. Then, in step S370, S0 is divided by the total width W0 of the image area of the test target in the second direction, as shown in fig. 6(a), to obtain the number n of pixel points contained per unit distance in the second direction in the image area of the test target:

    n = S0 / W0    (5)
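Likewise, the alternative of steps S360 and S370 reduces to a single division (same illustrative assumptions as above):

    import numpy as np

    def pixels_per_unit_distance_total(region):
        ys, xs = np.nonzero(region == 255)
        S0 = xs.size                         # total pixel points in the target image area
        W0 = int(ys.max() - ys.min() + 1)    # total width along the second direction
        return S0 / W0                       # formula (5)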
It should be noted that the above embodiments all take the first direction as the horizontal direction as an example. If the first direction is the vertical direction, that is, the first direction and the second direction are interchanged, the image areas of the test target in the second image and the third image are elongated along the second direction. In this case, the number of pixel points within the longest distance along the second direction of the image areas of the test target in the first image, the second image and the third image should be calculated, or the number of pixel points contained per unit distance along the first direction should be calculated, as the number of pixel points of the image areas of the test target along the second direction, and further as the lengths of the image areas of the test target in the first image, the second image and the third image along the second direction.
S400: and obtaining the shake suppression ratio of the camera module based on the first length, the second length and the third length, and evaluating the anti-shake effect of the camera module.
Specifically, in an embodiment of the present application, obtaining the jitter suppression ratio of the camera module based on the first length, the second length, and the third length includes:
S410: obtaining a ratio of a difference between the third length and the first length to a difference between the second length and the first length based on the first length, the second length, and the third length;
S420: and obtaining the jitter suppression ratio of the camera module based on the ratio of the difference between the third length and the first length to the difference between the second length and the first length.
Specifically, in this embodiment, the first length is denoted $x_{standard}$ and characterizes the length along the first direction of the test target image captured with the camera module in the static state; the second length is denoted $x_{ois\_on}$ and characterizes the length along the first direction of the test target image captured with the camera module in the shake state and the optical anti-shake function of the camera module turned on; and the third length is denoted $x_{ois\_off}$ and characterizes the length along the first direction of the test target image captured with the camera module in the shake state and the optical anti-shake function of the camera module turned off. Accordingly, the difference $(x_{ois\_off} - x_{standard})$ characterizes how much the image of the test target is elongated when the camera module is in the shake state with the anti-shake function turned off, and the difference $(x_{ois\_on} - x_{standard})$ characterizes how much the image of the test target is elongated when the camera module is in the shake state with the anti-shake function turned on. The shake suppression ratio CR of the camera module can then be expressed as:

$$CR = \frac{x_{ois\_off} - x_{standard}}{x_{ois\_on} - x_{standard}} \tag{6}$$

As can be seen from equation (6), the larger the difference $(x_{ois\_off} - x_{standard})$, that is, the more the image of the test target is elongated when the camera module shakes with the anti-shake function turned off, and the smaller the difference $(x_{ois\_on} - x_{standard})$, that is, the less the image of the test target is elongated when the camera module shakes with the anti-shake function turned on, the larger CR becomes; the closer the image shot in the shake state is to the image shot in the static state, the better the anti-shake effect of the camera module.
From the above analysis, the shake suppression ratio CR can serve as an objective criterion for evaluating the anti-shake effect of the camera module: a larger CR indicates a better anti-shake effect, and a smaller CR indicates a poorer one.
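As an illustration, a minimal Python sketch of this computation, assuming the plain-ratio form of equation (6); the function name and the example values are hypothetical:

```python
def shake_suppression_ratio(x_standard: float, x_ois_on: float,
                            x_ois_off: float) -> float:
    """Shake suppression ratio CR per equation (6): elongation of the
    target image with OIS off over elongation with OIS on."""
    elongation_off = x_ois_off - x_standard  # anti-shake function off
    elongation_on = x_ois_on - x_standard    # anti-shake function on
    if elongation_on <= 0:
        raise ValueError("OIS-on image must be longer than the static image")
    return elongation_off / elongation_on

# e.g. static 100 px, OIS on 110 px, OIS off 160 px -> CR = 6.0
```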
An embodiment of the present application further provides an evaluation apparatus for the anti-shake effect of a camera module. As shown in fig. 7, the apparatus includes:
an image acquiring unit 10, configured to acquire a first image, a second image and a third image of a test target shot by a camera module in a darkroom environment, wherein the first image is shot with the camera module in a static state, and the second image and the third image are shot with the camera module shaking along a first direction and the anti-shake function of the camera module turned on and turned off, respectively;
an image processing unit 20, configured to perform binarization processing on the first image, the second image and the third image, respectively, and to determine the image areas of the test target in the first image, the second image and the third image;
a calculating unit 30, configured to calculate the lengths along the first direction of the image areas of the test target in the first image, the second image and the third image, set in sequence as a first length, a second length and a third length;
and a processing unit 40, configured to obtain the shake suppression ratio of the camera module based on the first length, the second length and the third length, for evaluating the anti-shake effect of the camera module.
It can be understood that, when the camera module shoots the test target, if the camera module is placed at 90 degrees, that is, the camera module faces the test target squarely, the direction of the test target image in the shot image is consistent with the direction of the actual test target; if the camera module is not placed at 90 degrees, however, an included angle exists between the direction of the test target image in the shot image and the direction of the actual test target, and correction is needed. Therefore, on the basis of the above embodiment, in an embodiment of the present application, as shown in fig. 8, if there is an angle between the direction of the test target image in the first image, the second image and the third image and the direction of the actual test target, the apparatus further includes, between the image acquiring unit 10 and the image processing unit 20:
an image rectification unit 50, configured to rotate the first image, the second image, and the third image, respectively, so that directions of test target images in the first image, the second image, and the third image are consistent with a direction of an actual test target.
Since each process of evaluating the anti-shake effect of the camera module is described in detail in any of the foregoing embodiments, details are not described here.
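For orientation, the following is a minimal end-to-end sketch in Python (using OpenCV and NumPy) of the flow these units implement; Otsu's threshold is used here as a stand-in for the clustering-based binarization described above, and all function names are illustrative assumptions rather than the apparatus itself:

```python
import cv2
import numpy as np

def target_length_px(image_bgr: np.ndarray, axis: int = 1) -> int:
    """Length along the first direction (axis=1: horizontal) of the
    largest bright connected region, taken as the test-target image area."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    # Otsu's threshold as a stand-in for the two-cluster binarization
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    n, labels, stats, _ = cv2.connectedComponentsWithStats(binary)
    # Skip label 0 (background); keep the largest connected region
    largest = 1 + int(np.argmax(stats[1:, cv2.CC_STAT_AREA]))
    if axis == 1:
        return int(stats[largest, cv2.CC_STAT_WIDTH])
    return int(stats[largest, cv2.CC_STAT_HEIGHT])

def evaluate(static_img, ois_on_img, ois_off_img) -> float:
    """Shake suppression ratio CR from the three captured images."""
    x_standard = target_length_px(static_img)
    x_ois_on = target_length_px(ois_on_img)
    x_ois_off = target_length_px(ois_off_img)
    return (x_ois_off - x_standard) / (x_ois_on - x_standard)
```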
To sum up, the method and the device for evaluating the anti-shake effect of the camera module, provided by the embodiment of the present application, first obtain images of a test target, which are shot by the camera module in a static state, in a shake state along a first direction and when the anti-shake function is turned on, and in a shake state along the first direction and when the anti-shake function is turned off, respectively, in a darkroom environment; then, carrying out binarization processing on the three images respectively, and determining image areas of test targets in the three images; secondly, respectively calculating the lengths of the image areas of the test target in the three images along the first direction; finally, based on the length of the image area of the test target in the three images along the first direction, the shake suppression ratio of the camera module is obtained and is used for evaluating the anti-shake effect of the camera module.
In addition, the method and the device for evaluating the anti-shake effect of the camera module provided by the embodiments of the present application integrate data measurement and effect evaluation into one comprehensive platform; the experimental environment is easy to set up, the evaluation result is objective and reliable, and the OIS anti-shake effect of mobile phone cameras of various models running various anti-shake algorithms can be evaluated effectively.
The parts of this specification are described in a parallel and progressive manner; each part focuses on its differences from the other parts, and for the same or similar parts among them, reference may be made to one another.
In the above description of the disclosed embodiments, features described in various embodiments in this specification can be substituted for or combined with each other to enable those skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (11)

1. A method for evaluating the anti-shake effect of a camera module is characterized by comprising the following steps:
acquiring a first image, a second image and a third image of a test target shot by a camera module in a darkroom environment, wherein the first image is an image shot by the camera module in a static state, and the second image and the third image are respectively images shot by the camera module in a state of shaking along a first direction when an anti-shaking function of the camera module is turned on and turned off;
respectively carrying out binarization processing on the first image, the second image and the third image, and determining image areas of test targets in the first image, the second image and the third image;
respectively calculating the lengths of the image areas of the test target in the first image, the second image and the third image along the first direction, and sequentially setting the lengths as a first length, a second length and a third length;
and obtaining the shake suppression ratio of the camera module based on the first length, the second length and the third length, and evaluating the anti-shake effect of the camera module.
2. The method according to claim 1, wherein the direction of the test target image in the first image, the second image and the third image has an angle with the direction of the actual test target, and before performing binarization processing on the first image, the second image and the third image, the method further comprises:
and respectively rotating the first image, the second image and the third image to enable the direction of the test target image in the first image, the second image and the third image to be consistent with the direction of an actual test target.
3. The method of claim 2, wherein rotating the first image, the second image, and the third image, respectively, such that the orientation of the test target imagery in the first image, the second image, and the third image coincides with the orientation of the actual test target comprises:
respectively setting a pixel point in the first image, the second image and the third image as a rotation center, and taking the pixel point as a reference point to obtain position vectors of the pixel points other than the reference point in the first image, the second image and the third image;
setting a rotation matrix based on an included angle between the direction of the test target image in the first image, the second image and the third image and the direction of an actual test target;
multiplying the position vectors of other pixel points except the reference point in the first image, the second image and the third image by the rotation matrix respectively to obtain the rotated position vectors of other pixel points except the reference point in the first image, the second image and the third image;
obtaining the positions of the pixel points of the first image, the second image and the third image after rotation based on the rotated position vectors of the pixel points except the reference point in the first image, the second image and the third image and the positions of the reference point;
assigning the pixel values corresponding to the pixel point positions of the first image, the second image and the third image before rotation to the pixel values corresponding to the pixel point positions of the first image, the second image and the third image after rotation, and completing the rotation of the first image, the second image and the third image so that the direction of the test target image in the first image, the second image and the third image is consistent with the direction of an actual test target.
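As an illustrative aside (not part of the claims), a minimal NumPy sketch of the rotation that claim 3 describes, assuming a single rotation center and rounding rotated coordinates to the nearest pixel; all names are hypothetical:

```python
import numpy as np

def rotate_image(image: np.ndarray, angle_rad: float,
                 center: tuple[int, int]) -> np.ndarray:
    """Rotate `image` by `angle_rad` about `center` (row, col), assigning
    each source pixel value to its rotated position."""
    h, w = image.shape[:2]
    out = np.zeros_like(image)
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    rows, cols = np.indices((h, w))
    # Position vectors of all pixel points relative to the reference point
    dy, dx = rows - center[0], cols - center[1]
    # Multiply by the rotation matrix [[cos, -sin], [sin, cos]]
    ry = np.rint(c * dy - s * dx).astype(int) + center[0]
    rx = np.rint(s * dy + c * dx).astype(int) + center[1]
    # Keep only rotated positions that fall inside the image
    ok = (ry >= 0) & (ry < h) & (rx >= 0) & (rx < w)
    out[ry[ok], rx[ok]] = image[rows[ok], cols[ok]]
    return out
```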
4. The method according to claim 1, wherein the binarizing processing is performed on the first image, the second image and the third image, respectively, and the determining the image areas of the test target in the first image, the second image and the third image comprises:
performing graying processing on the first image, the second image and the third image respectively to obtain a first grayscale image, a second grayscale image and a third grayscale image in sequence;
respectively carrying out binarization processing on the first gray level image, the second gray level image and the third gray level image so as to divide each pixel point in the first gray level image, the second gray level image and the third gray level image into a first type pixel point and a second type pixel point, wherein the pixel value of the first type pixel point is set to be 255, and the pixel value of the second type pixel point is set to be 0;
and based on the positions of all pixel points in the first image, the second image and the third image, selecting the maximum connected region of the first type pixel points in the first image, the second image and the third image, and determining the selected maximum connected region as the image region of the test target in the first image, the second image and the third image.
5. The method according to claim 4, wherein performing binarization processing on the first gray scale image, the second gray scale image and the third gray scale image respectively to divide each pixel point in the first gray scale image, the second gray scale image and the third gray scale image into a first type pixel point and a second type pixel point comprises:
selecting at least two clustering centers for clustering by taking the pixel values of all the pixel points in the first gray level image, the second gray level image and the third gray level image as clustering data, and dividing the pixel values of all the pixel points in the first gray level image, the second gray level image and the third gray level image into a plurality of clustering clusters;
and integrating all cluster clusters containing pixel values larger than a preset threshold value into a first class cluster, and integrating the rest cluster clusters into a second class cluster, wherein pixel points corresponding to the pixel values in the first class cluster belong to first class pixel points, and pixel points corresponding to the pixel values in the second class cluster belong to second class pixel points.
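As an illustrative aside (not part of the claims), a minimal NumPy sketch of the clustering-based binarization of claims 4 and 5, assuming a simple k-means on pixel values with two centers and using the mean of the cluster centers as the preset threshold; both assumptions are illustrative only:

```python
import numpy as np

def binarize_by_clustering(gray: np.ndarray, n_clusters: int = 2,
                           iters: int = 20) -> np.ndarray:
    """Cluster pixel values with a plain k-means, then mark clusters whose
    center exceeds the threshold as first-class (255) and the rest as 0."""
    values = gray.astype(np.float64).ravel()
    # Initialize cluster centers spread across the value range
    centers = np.linspace(values.min(), values.max(), n_clusters)
    for _ in range(iters):
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        for k in range(n_clusters):
            if np.any(labels == k):
                centers[k] = values[labels == k].mean()
    threshold = centers.mean()          # stand-in for the preset threshold
    first_class = centers[labels] > threshold
    return np.where(first_class.reshape(gray.shape), 255, 0).astype(np.uint8)
```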
6. The method of claim 1, wherein calculating lengths of the image areas of the test object in the first direction in the first image, the second image and the third image respectively, and setting the lengths as the first length, the second length and the third length in sequence comprises:
and respectively calculating the number of pixel points along the first direction of the image areas of the test target in the first image, the second image and the third image, taking the number of pixel points as the lengths along the first direction of the image areas of the test target in the first image, the second image and the third image, and setting them in sequence as the first length, the second length and the third length.
7. The method of claim 6, wherein calculating the number of pixels in the first direction in the image area of the test object in the first image, the second image, and the third image respectively comprises:
and respectively calculating the number of pixel points in the longest distance of the image areas of the test target in the first image, the second image and the third image along the first direction, and taking the number as the number of the pixel points in the first direction of the image areas of the test target in the first image, the second image and the third image.
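As an illustrative aside (not part of the claims), a one-function sketch of the longest-extent count in claim 7, assuming the first direction is the column axis and taking the maximum per-row foreground count as the longest distance:

```python
import numpy as np

def longest_extent_px(binary_region: np.ndarray) -> int:
    """Number of pixel points within the longest distance along the
    first direction (assumed: the column axis), i.e. the maximum
    per-row count of foreground (255) pixels in the target region."""
    foreground = binary_region == 255
    return int(foreground.sum(axis=1).max())
```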
8. The method of claim 6, wherein calculating the number of pixels in the first direction in the image area of the test object in the first image, the second image, and the third image respectively comprises:
and respectively calculating the number of pixel points included in the image areas of the test target in the first image, the second image and the third image in a unit distance in a second direction, wherein the number of the pixel points is used as the number of the pixel points of the image areas of the test target in the first image, the second image and the third image along the first direction, and the second direction is perpendicular to the first direction.
9. The method of claim 1, wherein obtaining the jitter suppression ratio of the camera module based on the first length, the second length, and the third length comprises:
obtaining a ratio of a difference between the third length and the first length to a difference between the second length and the first length based on the first length, the second length, and the third length;
and obtaining the jitter suppression ratio of the camera module based on the ratio of the difference between the third length and the first length to the difference between the second length and the first length.
10. An evaluation device for camera module anti-shake effect, comprising:
the camera module comprises an image acquisition unit, a processing unit and a control unit, wherein the image acquisition unit is used for acquiring a first image, a second image and a third image which are shot by the camera module on a test target in a darkroom environment, the first image is shot by the camera module on the test target in a static state, and the second image and the third image are respectively shot by the camera module on and off of an anti-shake function of the camera module in a shake state along a first direction;
an image processing unit, which respectively performs binarization processing on the first image, the second image and the third image, and determines image areas of test targets in the first image, the second image and the third image;
a calculating unit, which calculates lengths of image areas of the test target in the first image, the second image and the third image along the first direction, and sets the lengths as a first length, a second length and a third length in sequence;
and the processing unit is used for obtaining the jitter suppression ratio of the camera module based on the first length, the second length and the third length and evaluating the anti-jitter effect of the camera module.
11. The apparatus according to claim 10, wherein the direction of the image of the test object in the first image, the second image and the third image forms an angle with the direction of the actual test object, and the apparatus is between the image acquiring unit and the image processing unit, further comprising:
and the image correction unit is used for respectively rotating the first image, the second image and the third image so that the direction of a test target image in the first image, the second image and the third image is consistent with the direction of an actual test target.
CN202111494512.4A 2021-12-08 2021-12-08 Method and device for evaluating anti-shake effect of camera module Pending CN114205580A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111494512.4A CN114205580A (en) 2021-12-08 2021-12-08 Method and device for evaluating anti-shake effect of camera module

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111494512.4A CN114205580A (en) 2021-12-08 2021-12-08 Method and device for evaluating anti-shake effect of camera module

Publications (1)

Publication Number Publication Date
CN114205580A true CN114205580A (en) 2022-03-18

Family

ID=80651397

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111494512.4A Pending CN114205580A (en) 2021-12-08 2021-12-08 Method and device for evaluating anti-shake effect of camera module

Country Status (1)

Country Link
CN (1) CN114205580A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115174807A (en) * 2022-06-28 2022-10-11 上海艾为电子技术股份有限公司 Anti-shake detection method and device, terminal equipment and readable storage medium
CN115314634A (en) * 2022-06-28 2022-11-08 上海艾为电子技术股份有限公司 Anti-shake detection method and device, terminal equipment and readable storage medium
CN115314634B (en) * 2022-06-28 2024-05-31 上海艾为电子技术股份有限公司 Anti-shake detection method, device, terminal equipment and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination