CN117611488A - Image data processing method, device, electronic equipment and readable storage medium - Google Patents

Image data processing method, device, electronic equipment and readable storage medium

Info

Publication number
CN117611488A
Authority
CN
China
Prior art keywords
image
picture
calibration
gray
gray value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311581285.8A
Other languages
Chinese (zh)
Inventor
袁晓龙
王海燕
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN202311581285.8A priority Critical patent/CN117611488A/en
Publication of CN117611488A publication Critical patent/CN117611488A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/90 Determination of colour characteristics
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30168 Image quality inspection

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The application discloses an image data processing method, an image data processing apparatus, an electronic device, and a readable storage medium. The method comprises: photographing a preset calibration picture to obtain a calibration image reflecting the content of the preset calibration picture, where the preset calibration picture comprises a plurality of rectangular calibration areas, pixel points with different gray values are distributed in each calibration area, and the pixel points in each calibration area have corresponding first picture gray values; segmenting the calibration image to obtain an image area for each calibration area, where pixel points with different gray values are distributed in each image area, the pixel points in each image area have corresponding second image gray values, and the image areas correspond to the calibration areas one to one; and obtaining a target image according to the second image gray values and the first picture gray values.

Description

Image data processing method, device, electronic equipment and readable storage medium
Technical Field
The application belongs to the field of data processing, and particularly relates to an image data processing method, an image data processing device, electronic equipment and a readable storage medium.
Background
When a camera is used to capture images, calibration data is needed to process the images acquired by the camera so as to obtain high-quality processed images.
In the related art, a light source can be placed behind a baffle provided with a plurality of through holes. Light passing through the through holes forms a point-light-source array; the array is photographed by the camera to obtain a point-light-source-array image, calibration data are obtained from this image, and the images acquired by the camera are processed based on the calibration data to obtain processed images.
However, diffraction easily occurs at the through holes, so the photographed point-light-source-array image cannot accurately reflect the content of the point-light-source array. After an image is processed according to calibration data acquired from such an image, the resulting processed image differs considerably from the actual photographed object; that is, the image quality of the processed image obtained by the related-art method is low.
Disclosure of Invention
An object of the embodiments of the present application is to provide an image data processing method, an apparatus, an electronic device, and a readable storage medium, which can solve the related-art problem of the low image quality of processed images.
In a first aspect, an embodiment of the present application provides an image data processing method, including:
photographing a preset calibration picture to obtain a calibration image reflecting the content of the preset calibration picture, where the preset calibration picture comprises a plurality of rectangular calibration areas, pixel points with different gray values are distributed in each calibration area, and the pixel points in each calibration area have corresponding first picture gray values;
segmenting the calibration image to obtain an image area for each calibration area, where pixel points with different gray values are distributed in each image area, the pixel points in each image area have corresponding second image gray values, and the image areas correspond to the calibration areas one to one;
and obtaining the processed target image according to the second image gray values and the first picture gray values.
In a second aspect, an embodiment of the present application provides an image data processing apparatus, including:
a first acquisition module, configured to photograph a preset calibration picture to obtain a calibration image reflecting the content of the preset calibration picture, where the preset calibration picture comprises a plurality of rectangular calibration areas, pixel points with different gray values are distributed in each calibration area, and the pixel points in each calibration area have corresponding first picture gray values;
a second acquisition module, configured to segment the calibration image to obtain an image area for each calibration area, where pixel points with different gray values are distributed in each image area, the pixel points in each image area have corresponding second image gray values, and the image areas correspond to the calibration areas one to one;
and a third acquisition module, configured to obtain the processed target image according to the second image gray values and the first picture gray values.
In a third aspect, embodiments of the present application provide an electronic device comprising a processor and a memory storing a program or instructions executable on the processor, which when executed by the processor, implement the steps of the method as described in the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium having stored thereon a program or instructions which when executed by a processor implement the steps of the method according to the first aspect.
In a fifth aspect, embodiments of the present application provide a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and where the processor is configured to execute a program or instructions to implement a method according to the first aspect.
In a sixth aspect, embodiments of the present application provide a computer program product stored in a storage medium, the program product being executable by at least one processor to implement the method according to the first aspect.
In the embodiments of the present application, the calibration image is obtained by photographing a preset calibration picture. Because no pinhole diffraction occurs when photographing the preset calibration picture, the obtained calibration image can accurately reflect its content. In addition, the calibration areas are rectangular, and rectangles have the characteristic that their contour lines are easy to determine, so the image area corresponding to each calibration area can be determined accurately and quickly when the calibration image is segmented. Pixel points with different gray values are distributed in each calibration area and have corresponding first picture gray values, and pixel points with different gray values are distributed in each image area and have corresponding second image gray values. The correspondence between the image areas obtained by segmentation and the calibration areas is accurate, and each image area of the calibration image can reflect the content of its corresponding calibration area, so a high-quality target image can be obtained according to the second image gray values of the calibration image and the first picture gray values of the calibration areas, thereby solving the related-art problem of the low image quality of acquired images.
Drawings
Fig. 1 is a step flowchart of an image data processing method provided in an embodiment of the present application;
FIG. 2 is a flowchart illustrating steps of another image data processing method according to an embodiment of the present application;
fig. 3 is a schematic diagram of calibration images and angular point distribution provided in an embodiment of the present application;
FIG. 4 is a schematic illustration of an image area provided in an embodiment of the present application;
FIG. 5 is a schematic view of another image area provided in an embodiment of the present application;
fig. 6 is a schematic diagram of a preset calibration picture according to an embodiment of the present application;
FIG. 7 is a schematic diagram of another preset calibration picture according to an embodiment of the present application;
fig. 8 is a schematic diagram of a gray-scale card picture according to an embodiment of the present application;
fig. 9 is a schematic diagram of a fitted curve of a gray value of a picture and a gray value of a corrected picture according to an embodiment of the present application;
FIG. 10 is a flowchart illustrating steps of yet another image data processing method according to an embodiment of the present application;
FIG. 11 is a schematic diagram of a calibration image capture system according to an embodiment of the present application;
fig. 12 is a schematic structural view of an image data processing apparatus according to an embodiment of the present application;
fig. 13 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
fig. 14 is a schematic hardware structure of an electronic device according to an embodiment of the present application.
Detailed Description
Technical solutions in the embodiments of the present application will be clearly described below with reference to the drawings in the embodiments of the present application. It is apparent that the described embodiments are some, but not all, of the embodiments of the present application. Based on the embodiments in this application, all other embodiments obtained by a person of ordinary skill in the art fall within the scope of protection of the present application.
The terms "first", "second", and the like in the description and claims are used to distinguish between similar objects and do not necessarily describe a particular sequence or chronological order. It is to be understood that data so used may be interchanged where appropriate, so that the embodiments of the present application can be implemented in sequences other than those illustrated or described herein. The objects identified by "first", "second", etc. are generally of one type, and the number of objects is not limited; for example, the first object may be one or more than one. Furthermore, in the description and claims, "and/or" denotes at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects.
The image data processing method provided by the embodiment of the application is described in detail below by means of specific embodiments and application scenes thereof with reference to the accompanying drawings.
Fig. 1 is a flowchart of the steps of an image data processing method provided in an embodiment of the present application. Referring to fig. 1, the method may include the following steps:
and 101, shooting a preset calibration picture to obtain a calibration image reflecting the content of the preset calibration picture.
The preset calibration picture comprises a plurality of rectangular calibration areas, pixel points with different gray values are distributed in each calibration area, and the pixel points in the calibration areas have corresponding first picture gray values.
Illustratively, the calibration area includes two kinds of pixel points with different gray values, and the calibration area includes a background area and a pattern. Specifically, the first pixel points of the background area have a corresponding first gray value, the pattern has a corresponding second gray value, the colors of the background area and the pattern are different, and the first gray value differs from the second gray value. For example, if the background area is black, the first gray value of the first pixel points in the background area is 0, the pattern is white, and the second gray value of the second pixel points in the pattern is 2^N − 1. Conversely, if the background area is white, the first gray value of the first pixel points in the background area is 2^N − 1, the pattern is black, and the second gray value of the second pixel points in the pattern is 0.
Here N is the image bit depth. For example, for an 8-bit image the maximum gray value is 255; for a 16-bit image it is 65535. The bit depth may also be 24 bits, 32 bits, or another value.
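The bit-depth relation above can be computed directly; a trivial sketch (the function name is hypothetical, not from the patent):

```python
def max_gray_value(bit_depth):
    """Maximum gray value representable with the given image bit depth N: 2^N - 1."""
    return (1 << bit_depth) - 1

# For example, an 8-bit image has a maximum gray value of 255,
# and a 16-bit image has a maximum gray value of 65535.
```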
Specifically, the preset calibration picture is a printed picture obtained by printing. The shape of the calibration area in the preset calibration picture can be rectangular and/or square.
Specifically, the gray value of the first picture is the gray value of the pixel point in the calibration area. Illustratively, the calibration area includes a plurality of pixels, and the first picture gray value includes a gray value of each of the plurality of pixels.
For example, the first picture gray values of the pixel points in the calibration area are the gray values of the corresponding pixel points in the electronic file from which the preset calibration picture is printed. Specifically, an electronic picture for generating the preset calibration picture is obtained, and the preset calibration picture is obtained by printing the electronic picture. The electronic picture comprises a plurality of pixel points, each with a corresponding gray value. Further, the electronic picture comprises the pixel points corresponding to each calibration area, and the gray values of the pixel points of a calibration area in the electronic picture are the first picture gray values in this step.
Illustratively, the calibration area includes a background area and a pattern distributed in the background area. Further, the gray value of the pixel point in the background area is different from the gray value of the pixel point in the pattern, and the colors of the background area and the pattern distributed in the background area are different. For example, the background area may be black in color and the pattern distributed therein may be white in color. For another example, the background area may be white in color and the pattern distributed therein may be black in color. For example, the background area may be white or black in color, and the pattern distributed therein may be: a pattern consisting of gray areas and noise points distributed over the gray areas.
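A calibration region of the kind just described, a uniform background with noise dots of a contrasting gray value, could be generated along the following lines. This is only an illustrative sketch, not the patent's actual target-generation procedure, and all names are hypothetical:

```python
import numpy as np

def make_calibration_region(height, width, bg_gray=255, dot_gray=0,
                            n_dots=50, seed=0):
    """Sketch of one rectangular calibration area: a uniform background
    of gray value bg_gray with randomly placed single-pixel dots of a
    different gray value dot_gray."""
    rng = np.random.default_rng(seed)
    region = np.full((height, width), bg_gray, dtype=np.uint8)
    ys = rng.integers(0, height, n_dots)  # random dot rows
    xs = rng.integers(0, width, n_dots)   # random dot columns
    region[ys, xs] = dot_gray
    return region
```

The electronic picture holding such regions would then be printed to obtain the preset calibration picture, and its per-pixel gray values serve as the first picture gray values.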
In this step, the preset calibration picture is photographed by the photographing device to be calibrated to obtain the calibration image. The photographing device may be a camera, a mobile phone with a photographing function, or another device with a photographing function. For example, the photographing device may be a monochrome single-channel device, a red-green-blue (RGB) three-channel device, or a red-yellow-yellow-blue (RYYB) four-channel device; the photographing performance of the device is not limited here.
The calibration image may be, for example, an image in digital format reflecting the content of the preset calibration picture. Further, the calibration image may be an image presented on a display screen of the photographing apparatus.
For example, when the preset calibration picture is photographed, continuous shooting can be performed under the same exposure conditions to obtain multiple frames of initial calibration images, and average denoising is performed on these frames to obtain a denoised calibration image. In this way, the influence of noise can be reduced, improving the accuracy of the photographed calibration image.
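The multi-frame averaging step can be sketched as follows (a minimal illustration assuming aligned, equally exposed frames; the function name is hypothetical). Averaging K frames of zero-mean noise reduces the noise standard deviation by a factor of sqrt(K):

```python
import numpy as np

def average_denoise(frames):
    """Average a sequence of equally exposed, aligned frames to suppress
    zero-mean sensor noise; returns a float image."""
    stack = np.stack([np.asarray(f, dtype=np.float64) for f in frames])
    return stack.mean(axis=0)
```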
And 102, segmenting the calibration image to obtain image areas respectively aiming at each calibration area.
Specifically, there are pixel point distributions of different gray values in each image area, and the pixel points in the image area have corresponding second image gray values; the image areas and the calibration areas are in one-to-one correspondence.
Specifically, the image area may reflect the content of the corresponding calibration area. The outline shape of the image area is the same as that of the corresponding calibration area, and the shape of the background and the pattern in the image area is the same as that in the corresponding calibration area.
Illustratively, the image area includes a plurality of pixel points, each pixel point of the image area having a gray value. The second image gray value includes gray values of each of a plurality of pixels corresponding to the image region.
Illustratively, the image region includes a background region and a pattern distributed in the background region. Further, the gray value of the pixel point in the background area of the image area is different from the gray value of the pixel point in the pattern of the image area, and the colors of the background area of the image area and the pattern distributed in the background area are different.
For example, the image area may be a rectangle, the image area of the rectangle having corner points. By way of example, angular points of all image areas in the calibration image are obtained, contour lines corresponding to the calibration image are determined according to the angular points, the calibration image is segmented according to the contour lines, and a plurality of image areas can be obtained, wherein the image areas correspond to the calibration areas one by one.
Illustratively, the calibration area is a rectangle with corner points. Based on the corner points of the calibration area and the corner points of the image area, the electronic picture used to generate the preset calibration picture is affine-transformed into the image space of the calibration image to obtain the image area corresponding to the calibration area.
And step 103, obtaining the processed target image according to the second image gray value and the first image gray value.
The second image gray value includes gray values of a plurality of pixels in the image region, and the first image gray value includes gray values of a plurality of pixels in the calibration region corresponding to the image region.
Illustratively, the image to be processed is processed according to the second image gray value and the first image gray value, and a processed target image is obtained.
For example, the second image gray values and the first picture gray values are acquired for each image area. When the photographing device, or a photographing device with the same performance, is subsequently used for a shooting operation, the area corresponding to each image area is determined in the image to be processed obtained by that shooting operation, and the second image gray values and the first picture gray values of the image area are used to process the corresponding area of the image to be processed, so as to obtain the processed target image.
Illustratively, a point spread function for the image area is obtained according to the second image gray value and the first image gray value, and the image to be processed is processed according to the point spread function, so that a processed target image is obtained. Further, a point spread function is used for deblurring an area corresponding to an image area in the image to be processed, and a processed target image is obtained.
By way of example, the device photographs the preset calibration picture to obtain a calibration image comprising a plurality of image areas, and the point spread function of each image area is obtained. After the device subsequently captures an image to be processed, the image to be processed can be divided, according to the coordinates of the pixel points in each image area, into a plurality of to-be-processed regions in one-to-one correspondence with the image areas. For each to-be-processed region, the point spread function of the corresponding image area is determined, and the region is deblurred using that point spread function. Deblurring all to-be-processed regions based on the point spread functions of their image areas yields the processed image, which is the target image in this step.
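Dividing the image to be processed into regions matching the image areas can be sketched as follows, assuming for simplicity a regular grid of equal-sized regions (the function name is hypothetical; the patent divides by the actual pixel coordinates of each image area):

```python
import numpy as np

def split_into_regions(img, n_rows, n_cols):
    """Split an image into an n_rows x n_cols grid of equal-sized regions,
    returned row by row; each region can then be processed with the point
    spread function of its corresponding calibration image area."""
    h, w = img.shape[:2]
    rh, rw = h // n_rows, w // n_cols
    return [img[r * rh:(r + 1) * rh, c * rw:(c + 1) * rw]
            for r in range(n_rows) for c in range(n_cols)]
```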
In summary, in this embodiment, a calibration image is obtained by photographing a preset calibration picture. Because no pinhole diffraction occurs, the obtained calibration image can accurately reflect the content of the preset calibration picture. The calibration areas are rectangular, and rectangles have the characteristic that their contour lines are easy to determine, so the image area corresponding to each calibration area can be determined accurately and quickly when the calibration image is segmented. Pixel points with different gray values are distributed in each calibration area and have corresponding first picture gray values, and pixel points with different gray values are distributed in each image area and have corresponding second image gray values. The correspondence between the segmented image areas and the calibration areas is accurate, each image area can reflect the content of its corresponding calibration area, and a high-quality target image can be obtained according to the second image gray values of the calibration image and the first picture gray values of the calibration areas, thereby solving the related-art problem of the low image quality of acquired images.
In addition, when photographing a point-light-source array in the related art, the plane of the lens elements in the camera lens must be parallel to the plane of the point-light-source array, which increases the difficulty of capturing the point-light-source-array image; this embodiment does not require the lens plane to be parallel to the plane of the preset calibration picture, so the method is simple. The related art can also obtain a projection image of a projection device through a high-resolution charge-coupled device (CCD) camera, obtain calibration data from the projection image, and then process captured images; however, that method can only use the central field of view of the projection device, and the projection device and the high-resolution CCD are costly, which increases the cost of obtaining processed images.
Fig. 2 is a flowchart of steps of another image data processing method according to an embodiment of the present application, and referring to fig. 2, the method may include the following steps:
step 201, shooting a preset calibration picture to obtain a calibration image reflecting the content of the preset calibration picture, wherein the preset calibration picture comprises a plurality of rectangular calibration areas.
The pixel points in the calibration areas have corresponding first picture gray values.
The method of this step is described in the foregoing step 101, and will not be described here again.
Step 202, obtaining image corner points of an image area.
Specifically, the image area is rectangular, and an image corner point is, for example, the intersection of two adjacent contour lines of the image area. For example, the image corner points of the image areas may be obtained by identifying the contour lines of each image area in the calibration image, acquiring the intersection points between intersecting contour lines, and determining each intersection point as a corner point of the image area where it is located.
For example, the image areas are rectangular, and the colors of the background areas of the adjacent image areas are different, for example, the colors of the background areas of the adjacent two image areas may be white and black, respectively, or may be other different colors.
For example, the image areas with different colors form a checkerboard structure, and the image corners of each image area in the checkerboard structure are determined according to a corner detection algorithm.
In one embodiment, the calibration image is shown in FIG. 3, wherein two adjacent image regions have a common contour line, and a plurality of image regions form a checkerboard structure. Referring to fig. 3, the calibration image includes a plurality of rectangular image areas with black and white background areas, respectively.
The corner points of the image areas are described below with reference to fig. 3: the common vertex of the first and second image areas of the first row and the first and second image areas of the second row forms corner point A in fig. 3. With continued reference to fig. 3, the common vertex of the first and second image areas of the second row and the first and second image areas of the third row forms corner point B in fig. 3.
For example, corner detection algorithms in the cross-platform Open Source Computer Vision Library (OpenCV) may be used to determine the corner points of the image areas. For example, a checkerboard corner detection algorithm may be used to determine the corner points of each image area in the calibration image, where a corner point of an image area is an intersection point of adjacent contour lines of the image area. Illustratively, the corner points of each image area in the checkerboard structure may be determined by the Harris corner detection algorithm, the Shi-Tomasi corner detection algorithm, the Features from Accelerated Segment Test (FAST) corner detection algorithm, or another corner detection algorithm.
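As an illustration of a Harris-type corner response mentioned above (a minimal NumPy/SciPy sketch with hypothetical names, not the patent's or OpenCV's implementation), corners are points where the local structure tensor has two large eigenvalues:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def harris_response(img, k=0.04, window=5):
    """Harris corner response map: large positive values at corner points,
    negative values along edges, near zero in flat regions."""
    img = np.asarray(img, dtype=np.float64)
    iy, ix = np.gradient(img)                 # image gradients
    sxx = uniform_filter(ix * ix, window)     # structure tensor, smoothed
    syy = uniform_filter(iy * iy, window)
    sxy = uniform_filter(ix * iy, window)
    # det(M) - k * trace(M)^2
    return sxx * syy - sxy ** 2 - k * (sxx + syy) ** 2
```

In practice one would threshold this map and apply non-maximum suppression to obtain discrete corner coordinates.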
And 203, dividing the calibration image according to the image corner points to obtain image areas corresponding to each calibration area respectively.
For example, the image area is rectangular, the contour lines forming the image area can be determined according to the corner points of the image area, and the corresponding image area can be determined according to the contour lines.
For example, the corner points of the calibration areas in the preset calibration picture are determined, and the electronic picture used to generate the preset calibration picture is affine-transformed into the image space of the calibration image according to the corner points of the calibration areas and the corner points of the image areas, to obtain the image area corresponding to each calibration area.
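The affine transform into the image space can be determined from three corner correspondences; a minimal sketch assuming exact, non-degenerate correspondences (all names are hypothetical):

```python
import numpy as np

def affine_from_corners(src_pts, dst_pts):
    """Solve the 2x3 affine matrix A mapping three source corner points
    to three destination corner points: A @ [x, y, 1]^T = [x', y']^T."""
    src = np.asarray(src_pts, dtype=np.float64)   # shape (3, 2)
    dst = np.asarray(dst_pts, dtype=np.float64)   # shape (3, 2)
    m = np.hstack([src, np.ones((3, 1))])         # homogeneous coords, (3, 3)
    return np.linalg.solve(m, dst).T              # (2, 3) affine matrix
```

With more than three corner pairs, a least-squares fit over all corners would typically be used instead.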
Referring to fig. 3, the calibration image shown in fig. 3 is segmented to obtain a plurality of image areas. In the embodiment shown in fig. 3, the image areas include an image area whose background is black and whose pattern consists of white random dots, shown in fig. 4, and an image area whose background is white and whose pattern consists of black random dots, shown in fig. 5.
Step 204, performing fourier transform on the second image gray value to obtain a first fourier transform value of the second image gray value.
In this step, the second image gray value is the gray value of the pixel point in the image area. The coordinates of the pixel points in the calibration image may be used to identify the pixel points, and the second image gray value of the image area is a spatial distribution of gray values of a plurality of pixel points of the image area.
Illustratively, the spatial distribution of gray values of the pixel points is fourier transformed to obtain a first fourier transformed value.
Step 205, performing fourier transform on the first image gray value to obtain a second fourier transform value of the first image gray value.
In this step, the first image gray value is the gray value of the pixel point of the calibration area. The coordinates of the pixel points in the calibration area may be used to identify the pixel points, and the first picture gray value of the calibration area is a spatial distribution of gray values of a plurality of pixel points of the calibration area.
Illustratively, the spatial distribution of the first picture gray values is transformed from the spatial domain to the frequency domain; that is, it is Fourier transformed to obtain the second Fourier transform value.
Step 206, obtaining a fourier expression of the point spread function according to the first fourier transform value and the second fourier transform value.
Illustratively, the fourier expression H(u, v) of the point spread function may take the following form:

H(u, v) = G(u, v) / F(u, v)

wherein G(u, v) is the first fourier transform value of the second image gray value, F(u, v) is the second fourier transform value of the first picture gray value, and u and v are frequency domain variables.
Step 207, performing inverse fourier transform on the fourier expression of the point spread function to obtain the point spread function.
Illustratively, the fourier expression H(u, v) of the point spread function is processed as follows to obtain the point spread function h(x, y):

h(x, y) = ifft(H(u, v))

where ifft(·) represents the inverse fast fourier transform, and x and y are spatial domain variables.
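Steps 204 to 207 can be sketched in numpy, assuming the image area and the calibration area have been resampled to the same size; the function name, the eps regularizer, and the final normalization are illustrative assumptions rather than part of the embodiment:

```python
import numpy as np

def estimate_psf(image_gray, picture_gray, eps=1e-8):
    """Estimate a point spread function from an image area and its
    corresponding calibration area (two 2-D float arrays of equal shape)
    via Fourier-domain division and an inverse transform."""
    G = np.fft.fft2(image_gray)      # first Fourier transform value (image area)
    F = np.fft.fft2(picture_gray)    # second Fourier transform value (calibration area)
    H = G / (F + eps)                # Fourier expression of the PSF; eps guards division by zero
    psf = np.real(np.fft.ifft2(H))   # back to the spatial domain
    psf = np.fft.fftshift(psf)       # center the kernel for inspection
    psf /= psf.sum()                 # normalize so the kernel preserves brightness
    return psf
```

With identical inputs (an unblurred "capture"), the estimate collapses to a centered delta kernel, which is a quick sanity check on the sign and centering conventions.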
And step 208, deblurring the image to be processed based on the point spread function to obtain a processed target image.
Specifically, the image to be processed is an unprocessed image obtained by direct shooting by the shooting device.
Illustratively, a conversion relation among the image gray value, the picture gray value, and the point spread function is determined. For each calibration area, a corresponding point spread function is obtained according to the conversion relation, the corresponding second image gray value, and the corresponding first picture gray value. Illustratively, the conversion relation may be: the fourier transform of the image gray value is the product of the fourier transform of the picture gray value and the fourier expression of the point spread function.
By way of example, the gray value of a pixel point in the image to be processed can be modeled as the gray value of the corresponding pixel point in the target image convolved with the point spread function, plus image noise; the gray values of the pixel points in the image to be processed are processed based on this relation to obtain the corresponding target image.
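The deblurring step can be sketched with a Wiener-style deconvolution, one common way to invert the convolution-plus-noise relation described above; the function name and the hand-chosen noise-to-signal ratio k are assumptions, not details from the embodiment:

```python
import numpy as np

def wiener_deblur(blurred, psf, k=0.01):
    """Deblur an image given a PSF, assuming blurred = sharp (*) psf + noise.
    k is a noise-to-signal ratio chosen by hand."""
    psf_pad = np.zeros_like(blurred, dtype=float)
    h, w = psf.shape
    psf_pad[:h, :w] = psf
    # shift the kernel center to the origin so the output is not translated
    psf_pad = np.roll(psf_pad, (-(h // 2), -(w // 2)), axis=(0, 1))
    H = np.fft.fft2(psf_pad)
    B = np.fft.fft2(blurred)
    W = np.conj(H) / (np.abs(H) ** 2 + k)   # Wiener filter
    return np.real(np.fft.ifft2(W * B))
```

For a synthetically blurred image, the output should be closer to the sharp original than the blurred input is, which is the practical test of the PSF's quality.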
In this embodiment, the calibration image is obtained by photographing a preset calibration picture. The preset calibration picture does not exhibit pinhole diffraction, so the obtained calibration image can accurately reflect the content of the preset calibration picture. The calibration areas are rectangular, the image areas corresponding to the calibration areas are also rectangular, and the plurality of rectangular calibration areas can form a checkerboard structure, based on which the image corner points of the image areas can be accurately determined. The calibration image is segmented according to the image corner points, so that the image areas corresponding to the calibration areas can be accurately obtained. The gray values of the pixel points in each image area are fourier transformed to obtain the first fourier transform value of the second image gray value, and the gray values of the pixel points in each calibration area are fourier transformed to obtain the second fourier transform value of the first picture gray value. Since the correspondence between the image areas and the calibration areas is accurate, an accurate point spread function can be obtained based on the first fourier transform value and the second fourier transform value.

In the related art, a point light source image is obtained by photographing a point light source array, and the point spread function is obtained from that image. Because pinhole diffraction occurs at the through holes of the point light source array, the accuracy of the point spread function obtained in this way is low; when an image is processed based on such a point spread function, the difference between the processed image and the photographed object is large, and the low-quality image cannot meet user requirements.
The point spread function obtained by the method of this embodiment has high accuracy, and deblurring the image to be processed with this point spread function yields a high-quality target image, thereby solving the related-art problems of low point spread function accuracy and low quality of the processed image.
In one embodiment, the calibrating area of the preset calibrating picture includes: a background area, and a pattern distributed in the background area. In the preset calibration picture, the colors of the background areas of the adjacent calibration areas are different.
Correspondingly, before step 201, further includes:
step 209, determining a first pixel point for calibrating a background area in the area, where the first pixel point has a corresponding first gray value.
In this step, the first pixel point of the background area in the calibration area is a pixel point corresponding to the background area of the calibration area in the electronic picture used to generate the preset calibration picture. The first gray value is the gray value of that pixel point in the electronic picture.
In the preset calibration picture, the colors of the background areas of adjacent calibration areas are different; correspondingly, the first gray values of the background areas of adjacent calibration areas are different.
Step 210, determining a second pixel point of the pattern in the calibration area, where the second pixel point has a corresponding second gray value.
In this step, the second pixel point of the pattern in the calibration area is a pixel point corresponding to the pattern of the calibration area in the electronic picture used to generate the preset calibration picture. The second gray value is the gray value of that pixel point in the electronic picture.
The second gray value is different from the first gray value, and correspondingly, the background area and the pattern in the same calibration area are different in color.
Specifically, the two-dimensional fourier transform power spectrum of the second gray values of the second pixel points satisfies the isotropy requirement.
The pattern in the calibration area is illustratively a randomly generated pattern. For example, the pattern may be a randomly generated pattern of a plurality of small balls; further, the small balls are randomly distributed and may overlap one another, and the two-dimensional fourier transform power spectrum of the pixel points of the pattern formed by the small balls satisfies the isotropy requirement. For another example, the pattern may include a gray area and random noise points distributed over the gray area; further, the noise points may overlap, and the two-dimensional fourier transform power spectrum of the pixel points of the noise pattern satisfies the isotropy requirement.
Step 211, generating an electronic picture of the preset calibration picture according to the first pixel point and the second pixel point, and printing the electronic picture to obtain the preset calibration picture.
For example, an electronic picture of the background area is generated from the first pixel points, and an electronic picture of the pattern is generated from the second pixel points. The layer of the pattern's electronic picture is placed over the layer of the background area's electronic picture, so that the pattern covers a part of the background area, thereby obtaining the electronic picture of the preset calibration picture; printing this electronic picture yields the preset calibration picture.
The first pixel point and the second pixel point together form a pixel point of the electronic picture, and the electronic picture formed by the first pixel point and the second pixel point is printed to obtain a preset calibration picture.
In one embodiment, in the preset calibration picture, the colors of the background areas of the adjacent calibration areas are different. For example, in the preset calibration picture, the gray values of the first pixel points of the background areas of the adjacent calibration areas are different. For example, in the case of representing the gray value in 8-bit binary, the gray value of the first pixel point of the background area of the adjacent calibration area may be 0 and 255, respectively, wherein the color of the background area corresponding to the gray value 0 is black, and the color of the background area corresponding to the gray value 255 is white.
In this embodiment, the gray value may also be expressed in binary of another number of bits; for example, the gray value may be expressed in 16-bit binary. Specifically, in the case of a 16-bit binary representation, the color of the background region corresponding to the gray value 0 is black, and the color of the background region corresponding to the gray value 65535 is white.
In the preset calibration picture, the background areas of adjacent calibration areas are black and white respectively, and the plurality of alternately black and white calibration areas form a checkerboard structure. Patterns are distributed in the background area of each calibration area, and the contour lines of the patterns are smooth and free of sharp corners; if a sharp corner exists in a contour line of a pattern, it is blunted so as to obtain a pattern with smooth contour lines.
Further, for each calibration area, a distance between the pattern in the calibration area and the corner point of the calibration area is greater than or equal to a preset distance. Therefore, the problem that the distance between the pattern and the corner points of the calibration area is too short, so that the corner points of the calibration area are inaccurately determined can be avoided. The preset distance may be set according to a user requirement, for example, the preset distance may be set to 10 pixel points.
In one embodiment, the preset calibration picture is as shown in fig. 6. Referring to fig. 6, the background areas of adjacent calibration areas are black and white, the patterns distributed on the black background areas are randomly distributed white small balls, and the patterns distributed on the white background areas are randomly distributed black small balls. The two-dimensional fourier transform power spectrum of the pixel points of the small-ball patterns satisfies the isotropy requirement. In this embodiment, using a pattern whose two-dimensional fourier transform power spectrum satisfies the isotropy requirement reduces the difficulty of obtaining the preset calibration picture by printing the electronic picture.
In another embodiment, the random constraint on the pattern is white noise; specifically, the pattern is noise points distributed in a gray area. White noise is noise whose power spectral density is equal in frequency bands of equal bandwidth over a wide frequency range. The white noise may be gaussian white noise, bernoulli white noise, or other white noise. Correspondingly, the preset calibration picture is as shown in fig. 7: the calibration areas are rectangular with corresponding corner points, the colors of the background areas of adjacent calibration areas are black and white respectively, and a gray-scale pattern, which is a gray area within the calibration area, is arranged in each background area. Noise points are distributed over the gray-scale pattern, and the two-dimensional fourier transform power spectrum of the noise points satisfies the isotropy requirement. By way of example, using a white-noise distribution pattern as the pattern of the calibration area ensures that this isotropy requirement is met.
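A minimal sketch of generating a checkerboard-shaped electronic picture with random-ball patterns, in the spirit of figs. 6 and 7; the cell size, ball count, ball radius, and the margin that keeps balls away from the corner points are all illustrative assumptions:

```python
import numpy as np

def make_calibration_picture(rows=6, cols=8, cell=120, n_balls=40, radius=6, seed=0):
    """Generate a checkerboard of black/white background cells, each carrying
    randomly placed small balls of the opposite color."""
    rng = np.random.default_rng(seed)
    pic = np.zeros((rows * cell, cols * cell), dtype=np.uint8)
    yy, xx = np.mgrid[0:cell, 0:cell]
    for r in range(rows):
        for c in range(cols):
            bg = 255 if (r + c) % 2 else 0      # alternate background colors
            fg = 255 - bg                        # pattern uses the opposite color
            tile = np.full((cell, cell), bg, dtype=np.uint8)
            for _ in range(n_balls):
                # keep ball centers away from the cell borders, so the
                # corner points of each calibration area stay clean
                cy, cx = rng.integers(radius + 10, cell - radius - 10, size=2)
                tile[(yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2] = fg
            pic[r * cell:(r + 1) * cell, c * cell:(c + 1) * cell] = tile
    return pic
```

The 6×8 grid matches the embodiment of fig. 6; other counts can be substituted where higher corner density is wanted.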
For example, calibration areas of a plurality of rectangles in a preset calibration picture form a checkerboard shape, and corner points of the calibration areas form corner points of the preset calibration picture of the checkerboard shape. Specifically, in two adjacent rectangular calibration areas, the lengths of two connected edges are equal.
The checkerboard-shaped preset calibration picture comprises a plurality of rows and a plurality of columns of calibration areas. A corner point of the calibration areas is a vertex shared by a plurality of adjacent calibration areas in the checkerboard structure; correspondingly, the checkerboard-shaped preset calibration picture comprises a plurality of rows of corner points and a plurality of columns of corner points.
Further, in the preset calibration picture with the checkerboard shape, the number of calibration areas in each row of calibration areas is multiple, and the number of calibration areas in each column of calibration areas is multiple. The number of the corner points of each row is multiple, and the number of the corner points of each column is multiple.
Further, the number of calibration areas in each row and in each column can be set according to user requirements. For example, where the accuracy requirement on the point spread function is relatively high, the number of calibration areas per row and per column may be set larger; where the accuracy requirement is relatively low, it may be set smaller.
For example, in the embodiment shown in fig. 6 and 7, in the preset calibration picture of the checkerboard shape, the number of calibration areas in each row is 8, and the number of calibration areas in each column is 6. Correspondingly, the number of corner points in each row is 7, and the number of corner points in each column is 5.
For another example, in the checkerboard structure, the number of corner points in each row and each column is greater than or equal to the corresponding preset number, and through an affine transformation algorithm, the electronic picture of the preset calibration picture can be accurately affine transformed into the image space of the calibration image so as to determine the calibration area corresponding to each image area. In the checkerboard structure, the number of corner points of each row and each column is larger than or equal to the corresponding preset number, so that affine transformation accuracy is improved, and accuracy of a determined calibration area corresponding to the image area is improved.
For example, in the preset calibration picture of the checkerboard shape, the number of calibration areas in each row may be greater than 8, and the number of calibration areas in each column may be greater than 6. Correspondingly, the number of corner points in each row may be greater than 7, and the number of corner points in each column may be greater than 5.
For another example, when the preset calibration picture is photographed at near focus, the obtained calibration image is prone to large distortion. In this case, in order to ensure affine transformation accuracy and thus the accuracy of determining the calibration area corresponding to each image area, the number of calibration areas in each row and each column may be increased so as to increase the number of corner points in each row and each column. For example, the number of calibration areas per row and per column may be increased such that the number of corner points in each row is greater than or equal to 9 and the number of corner points in each column is greater than or equal to 7.
In this embodiment, the calibration area includes a background area and a pattern distributed in the background area. The background area includes first pixel points with corresponding first gray values, and the pattern includes second pixel points with corresponding second gray values; the first and second gray values are different, and the two-dimensional fourier transform power spectrum of the second pixel points satisfies the isotropy requirement. Accordingly, in the preset calibration picture obtained by printing the electronic picture that includes the first and second pixel points, each calibration area also includes a background area and a pattern with different gray values, and in the calibration image obtained by photographing the preset calibration picture, the background area and the pattern of each image area also have different gray values. The two-dimensional fourier transform power spectra of the pixel points of the pattern in the calibration area and of the pattern in the image area likewise satisfy the isotropy requirement, which improves the speed and robustness of the subsequent fourier analysis of the gray values of the pixel points in the calibration area and the image area, thereby improving processing efficiency.
In addition, the two-dimensional Fourier transform power spectrums of the pattern in the calibration area and the pixel points of the pattern in the image area also meet isotropy requirements, and compared with a checkerboard structure picture which does not meet the requirements, the interference of a strong periodic structure in the corresponding direction in the two-dimensional Fourier transform power spectrums can be avoided, and the accuracy of the subsequent acquisition of the point spread function is improved.
In one embodiment, the image area is rectangular, and prior to step 202, further comprises:
step 212, obtaining picture corner points of a calibration area in a preset calibration picture and image corner points of an image area in a calibration image.
The calibration areas are rectangular, and the plurality of calibration areas in the preset calibration picture form a checkerboard structure. The image areas are rectangular, and the plurality of image areas in the calibration image also form a checkerboard structure. The picture corner points of the calibration areas in the preset calibration picture are determined by a checkerboard corner detection algorithm. Illustratively, the picture corner points of each calibration area in the checkerboard-structured preset calibration picture and the image corner points of each image area in the checkerboard-structured calibration image can be determined by a Harris corner detection algorithm, a Shi-Tomasi corner detection algorithm, a FAST corner detection algorithm, or another corner detection algorithm.
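As an illustration of the corner detection mentioned above, a bare-bones Harris corner response can be computed as follows; real pipelines would use a library detector with sub-pixel refinement, so this minimal numpy version is only a sketch, and the function name and the 3×3 smoothing window are assumptions:

```python
import numpy as np

def harris_response(img, k=0.05):
    """Harris corner response for a 2-D gray image: high at corners,
    near zero in flat regions, negative along pure edges."""
    img = img.astype(float)
    Iy, Ix = np.gradient(img)          # image gradients (central differences)

    def box(a):
        # 3x3 box-filter smoothing of the structure-tensor entries
        p = np.pad(a, 1, mode='edge')
        return sum(p[i:i + a.shape[0], j:j + a.shape[1]]
                   for i in range(3) for j in range(3)) / 9.0

    Sxx, Syy, Sxy = box(Ix * Ix), box(Iy * Iy), box(Ix * Iy)
    det = Sxx * Syy - Sxy ** 2
    trace = Sxx + Syy
    return det - k * trace ** 2
```

On a synthetic 2×2 checkerboard, the response peaks at the shared vertex of the four tiles, i.e. exactly the kind of corner point the embodiment relies on.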
And step 213, determining a calibration area corresponding to the image area according to the picture corner and the image corner.
By way of example, with the matching of the picture corner points of each calibration area in the preset calibration picture to the image corner points of each image area in the calibration image as a constraint, the electronic picture used to generate the preset calibration picture is affine transformed into the image space of the calibration image, thereby matching the calibration areas in the preset calibration picture to the image areas in the calibration image and obtaining the image area corresponding to each calibration area.
Specifically, the calibration image is an image obtained by photographing the preset calibration picture and reflects its content, so the picture corner points of the calibration areas in the preset calibration picture correspond to the image corner points of the image areas in the calibration image. According to this one-to-one correspondence between picture corner points and image corner points, the electronic picture of the preset calibration picture can be projected into the image space of the calibration image through an affine transformation algorithm, so as to obtain the calibration area corresponding to each image area.
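The corner-constrained affine transformation can be sketched as a least-squares fit of a 2-D affine matrix from matched picture corner points to image corner points; the function names are illustrative, and a production pipeline would typically use a library routine with outlier rejection:

```python
import numpy as np

def fit_affine(src_pts, dst_pts):
    """Least-squares affine transform mapping picture corner points (src)
    to image corner points (dst); both are (N, 2) arrays with N >= 3."""
    src = np.asarray(src_pts, float)
    dst = np.asarray(dst_pts, float)
    A = np.hstack([src, np.ones((len(src), 1))])   # homogeneous coordinates
    # solve A @ M ~= dst for the 3x2 affine matrix M
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return M

def apply_affine(M, pts):
    """Map (N, 2) points through the fitted 3x2 affine matrix."""
    pts = np.asarray(pts, float)
    return np.hstack([pts, np.ones((len(pts), 1))]) @ M
```

With more corner points than the three-point minimum, the least-squares solution averages out per-corner detection noise, which is why the embodiment benefits from denser corner grids.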
When the image corner points of the image areas and the picture corner points of the calibration areas are determined through a corner detection algorithm, corner detection is performed at sub-pixel precision, finer than the resolution of the photographing device, so that the corner points are determined accurately, the corner points in the preset calibration picture are accurately matched with those in the calibration image, and the calibration area corresponding to each image area is accurately determined. Because the corner points are determined at sub-pixel precision, the image corner points of the image areas in the calibration image can be accurately determined even when the calibration image is greatly distorted by near-focus shooting. This further improves the accuracy of the obtained correspondence between image areas and calibration areas, and thus the accuracy of the subsequently obtained point spread function.
In this embodiment, the picture corner of the calibration area and the image corner of the image area in the calibration image are obtained, and the calibration area corresponding to the image area can be accurately and rapidly determined according to the picture corner and the image corner.
In one embodiment, prior to step 204, further comprising:
step 214, determining an initial picture gray value of the pixel point of the calibration area.
Specifically, the initial image gray value is a gray value of a pixel point in the electronic image of the calibration area.
By way of example, the preset calibration picture is obtained by printing an electronic picture of the preset calibration picture, and the data of the electronic picture includes a plurality of pixel points and the gray value corresponding to each pixel point. The electronic picture of the preset calibration picture comprises electronic pictures of a plurality of calibration areas, each of which includes pixel points and their gray values. Correspondingly, for each calibration area, the gray values of the pixel points in the electronic picture of the calibration area are the initial picture gray values of the pixel points of the calibration area in this step.
Step 215, substituting the initial picture gray value into the conversion relation between the picture gray value and the corrected picture gray value to obtain the target corrected picture gray value corresponding to the initial picture gray value.
For example, the picture gray value in the conversion relation is the gray value of the electronic picture used to generate the preset calibration picture, and the corrected picture gray value is the gray value of the image obtained by photographing the preset picture.
For example, a gray value of an electronic picture for generating a preset picture (for example, a gray card picture) is obtained, a gray value of an image obtained by shooting the preset picture is obtained, and the gray value of the electronic picture and the gray value of the image are fitted to obtain a conversion relation between the gray value of the picture and the gray value of the corrected picture.
Step 216, taking the target corrected picture gray value as the first picture gray value for the calibration area.
In this embodiment, the initial picture gray value of the pixel points of the calibration area is obtained and substituted into the conversion relation between the picture gray value and the corrected picture gray value to obtain the corresponding target corrected picture gray value, which is taken as the first picture gray value. This is equivalent to correcting the initial picture gray value, thereby improving the accuracy of the obtained first picture gray value.
In one embodiment, before determining the target corrected picture gray value corresponding to the initial picture gray value according to the initial picture gray value and the conversion relation between the picture gray value and the corrected picture gray value in step 215, the method further includes:
and step 217, shooting a gray scale card picture to obtain a gray scale card image of the gray scale card picture.
The gray scale card picture comprises a plurality of gray scale card picture areas with different gray scale values, the gray scale card image comprises a plurality of gray scale card image areas with different gray scale values, and the gray scale card image areas and the gray scale card picture areas are in one-to-one correspondence.
Specifically, the gray value of the gray card picture is the gray value of the pixel point in the gray card picture area, and the gray value of the gray card image is the gray value of the pixel point in the gray card image area.
The number of gray-scale card picture areas in the gray-scale card picture and the gray value of each area can be set according to user requirements. The gray-scale card picture in this step is used to determine the conversion relation between the picture gray value and the corrected picture gray value in the foregoing embodiment. For example, where the accuracy requirement on the conversion relation is relatively high, the number of gray-scale card picture areas may be set larger and the difference between adjacent gray values smaller; where the accuracy requirement is lower, the number of areas may be set smaller and the difference between adjacent gray values larger.
Referring to fig. 8, in one embodiment, the gray-scale card picture includes 18 gray-scale card picture areas, one gray-scale patch for each area. When the bit depth is 8, the smallest gray value among the 18 areas is 0, the largest gray value is 255, and the interval between adjacent gray values is 15.
For example, when the gray-scale card picture is photographed, it is imaged at the central field of view of the photographing device, and the gray values of the gray-scale card image are read; in this way, the gray-scale card image and its gray values can be accurately obtained.
Step 218, taking the gray-scale card picture gray value as the picture gray value and the gray-scale card image gray value as the corrected picture gray value, fitting to obtain the conversion relation between the picture gray value and the corrected picture gray value.
For example, the gray-scale card picture gray value is used as the picture gray value, the gray-scale card image gray value is used as the corrected picture gray value, and polynomial fitting is performed on the gray-scale card picture gray values and the gray-scale card image gray values to obtain the conversion relation between the picture gray value and the corrected picture gray value.
In one embodiment, the conversion relation obtained by fitting is a linear relation, and a fitting curve corresponding to the conversion relation is shown in fig. 9, and referring to fig. 9, the fitting curve is a straight line. Correspondingly, the conversion relation is a linear description of conversion of the picture gray value and the corrected picture gray value, and the conversion relation is a linear equation taking the picture gray value and the corrected picture gray value as variables.
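The fitting of the conversion relation can be sketched with a polynomial fit; with degree 1 this yields the linear case of the embodiment. The function name and the synthetic camera response below are illustrative assumptions:

```python
import numpy as np

def fit_gray_conversion(picture_gray, image_gray, degree=1):
    """Fit the conversion relation between picture gray values (electronic
    picture) and corrected picture gray values (captured gray-card image).
    Returns a callable polynomial; degree=1 gives the linear relation."""
    coeffs = np.polyfit(np.asarray(picture_gray, float),
                        np.asarray(image_gray, float), degree)
    return np.poly1d(coeffs)
```

For 18 patches spanning 0 to 255 (interval 15, as in fig. 8) and a linear synthetic response, the fitted relation reproduces the response exactly.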
In this embodiment, a gray-scale card image is obtained by photographing the gray-scale card picture; the gray-scale card picture is the photographed object and the gray-scale card image is the captured image, so the gray-scale card picture gray value corresponds to the real gray value of the photographed object and the gray-scale card image gray value corresponds to the captured image gray value. Fitting the gray-scale card picture gray values against the gray-scale card image gray values yields the conversion relation between the picture gray value and the corrected picture gray value, which can accurately reflect the correspondence between real gray values and captured gray values.
In the process of printing the electronic picture to obtain the preset calibration picture, the real gray value of the preset calibration picture deviates from the gray value of the electronic picture due to factors such as device precision. The first picture gray value determined through the conversion relation and the initial picture gray value approximates the real gray value of the preset calibration picture; determining the point spread function based on this first picture gray value therefore improves the accuracy of the point spread function.
In one embodiment, prior to step 204, further comprising:
step 219, obtaining an initial image gray value of a pixel point in the image area.
In this step, the initial image gradation value is a gradation value of a pixel point extracted from the image region.
And 220, shooting under a preset scene to obtain a first image.
Wherein the first image has a corresponding third image gray value.
The preset scene is, for example, a scene in which no light enters a lens of the photographing apparatus. Correspondingly, the color of the first image is approximately black.
For example, the preset scene may be a dark environment without a light source, and shooting is performed in the dark environment to obtain the first image. For another example, the preset scene may be a scene in which a lens of the photographing apparatus is masked by a black object.
Step 221, photographing the first picture to obtain a second image of the first picture.
Specifically, the pixel point in the second image has a corresponding fourth image gray value.
In this step, the third image gray scale value and the fourth image gray scale value are different, and the corresponding first image and second image are different in color.
The color of the first picture is, for example, white. Further, the illumination condition when photographing the first picture to obtain the second image is the same as the illumination condition when photographing the preset calibration picture to obtain the calibration image, and the exposure conditions for the two shots are the same.
Step 222, obtaining a second image gray value of the pixel point in the image area according to the initial image gray value, the third image gray value and the fourth image gray value.
In one embodiment, a first difference between the initial image gray value and the third image gray value is obtained, a second difference between the fourth image gray value and the third image gray value is obtained, a ratio of the first difference to the second difference is obtained, and a second image gray value of the pixel point in the image area is obtained according to the ratio.
In this embodiment, an initial image gray value of a pixel in an image area is obtained, and a second image gray value corresponding to the initial image gray value is obtained according to a third image gray value of the first image and a fourth image gray value of the second image, where the second image gray value is a result of correcting the initial image gray value, and based on the method of this embodiment, accuracy of the second image gray value of the pixel in the image area is improved.
In one embodiment, the preset scene is a scene without a light source, and the first picture is a white picture.
Correspondingly, in step 222, according to the initial image gray value, the third image gray value, and the fourth image gray value, a second image gray value of the pixel point in the image area is obtained, which may include the following sub-steps:
Sub-step 2221 obtains a first difference between the initial image gray value and the third image gray value and a second difference between the fourth image gray value and the third image gray value.
Specifically, performing difference operation on the initial image gray value and the third image gray value to obtain a first difference value; and carrying out difference operation on the fourth image gray value and the third image gray value to obtain a second difference value.
Sub-step 2222 obtains the ratio between the first difference and the second difference.
Specifically, a ratio operation is performed on the first difference value and the second difference value, so as to obtain a ratio of the first difference value to the second difference value.
Sub-step 2223 determines a second image gray value for the pixel point in the image area based on the ratio.
In one embodiment, the number of image bits used by the capture device capturing the pre-set calibration picture in processing the image data is determined, and a second image gray value is obtained based on the ratio obtained in sub-step 2222, and the number of image bits described above.
For example, the second image gray value i(x, y) of a pixel point in the image area may be obtained according to the following formula:

i(x, y) = (2^N - 1) × (image(x, y) - black(x, y)) / (white(x, y) - black(x, y))

wherein image(x, y) is the initial image gray value of the pixel point in the image area, black(x, y) is the third image gray value of the first image, white(x, y) is the fourth image gray value of the second image, N is the number of image bits, and x and y represent the spatial location of the pixel point.
In another embodiment, the ratio obtained in the foregoing sub-step 2222 is used as the second image gray value of the pixel point in the image area, thereby obtaining a normalized second image gray value. Specifically, the normalized second image gray value i(x, y) is obtained according to the following formula:

i(x, y) = (image(x, y) - black(x, y)) / (white(x, y) - black(x, y))
In this embodiment, the preset scene is a scene without a light source, the first image acquired in this scene is approximately black, and the third image gray value of the first image is close to 0. If the first picture is white, the second image obtained by shooting the first picture is approximately white; if the number of image bits is 8, the fourth image gray value of the second image is close to 255, and the first picture gray value of the first picture is 255. That is, the third image gray value and the fourth image gray value are close to the minimum and maximum values, respectively, of the gray value representation. The first difference between the initial image gray value and the third image gray value corresponds to the difference between the initial image gray value and the minimum gray value obtained by shooting, and the second difference between the fourth image gray value and the third image gray value corresponds to the difference between the maximum and minimum gray values obtained by shooting. By obtaining the ratio of the first difference to the second difference and deriving the second image gray value from this ratio, the initial image gray value is corrected based on the maximum and minimum gray values obtained through shooting, and the accuracy of the second image gray value is improved.
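The correction in this embodiment can be illustrated with a short sketch. The following Python code is illustrative only; the function name, the NumPy arrays, and the guard against a zero denominator are assumptions, not part of the embodiment:

```python
import numpy as np

def flat_field_correct(image, black, white, n_bits=8):
    """Correct initial image gray values using a dark reference (black,
    shot with no light) and a bright reference (white, shot of the white
    picture). With n_bits set, the result is scaled to the full bit
    range; with n_bits=None, the normalized ratio in [0, 1] is returned."""
    image = np.asarray(image, dtype=np.float64)
    black = np.asarray(black, dtype=np.float64)
    white = np.asarray(white, dtype=np.float64)
    denom = np.maximum(white - black, 1e-12)  # avoid division by zero
    ratio = (image - black) / denom
    if n_bits is None:
        return ratio
    return (2 ** n_bits - 1) * ratio

# A reading exactly halfway between the black and white levels maps to
# the middle of the 8-bit range.
img = np.array([[130.0]])
blk = np.array([[5.0]])
wht = np.array([[255.0]])
print(flat_field_correct(img, blk, wht))  # [[127.5]]
```
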
Fig. 10 is a schematic diagram of another image data processing method according to an embodiment of the present application, and referring to fig. 10, the method may include the following steps:
step S1, a high-resolution electronic picture for generating a preset calibration picture is obtained, and pixel points in the electronic picture for generating the preset calibration picture have initial picture gray values.
Illustratively, a high-resolution high-definition electronic picture is generated by a computer. The high-definition electronic picture is provided with a plurality of pixel points, and each pixel point is provided with a corresponding gray value.
Step S2, printing the electronic picture to obtain a high-precision preset calibration picture.
By way of example, the electronic picture is printed by a high-precision printer, and a high-precision preset calibration picture is obtained.
The preset calibration picture comprises a plurality of calibration areas, wherein the calibration areas comprise background areas and patterns distributed in the background areas.
Each calibration area is rectangular, the background areas of adjacent calibration areas are black and white respectively, and adjacent calibration areas share common contour lines, so that the calibration areas together form a checkerboard.
The pattern in the calibration area is a randomly generated pattern, wherein the two-dimensional Fourier transform power spectrum of the pattern pixel points has the characteristic of isotropy. The pattern may be, for example, random globules as shown in fig. 6, or white noise distribution patterns as shown in fig. 7.
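A random-dot pattern of the kind described can be generated programmatically. The sketch below is illustrative only; the tile size, dot count, and dot radius are arbitrary assumptions, and no claim is made that this is the generation method of the embodiment:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_dot_tile(size=256, n_dots=400, radius=3, background=0):
    """One calibration tile: randomly placed filled discs on a uniform
    background. Random positions give a roughly isotropic
    two-dimensional Fourier power spectrum."""
    tile = np.full((size, size), background, dtype=np.uint8)
    foreground = 255 - background
    yy, xx = np.mgrid[0:size, 0:size]
    for cx, cy in rng.integers(radius, size - radius, size=(n_dots, 2)):
        tile[(xx - cx) ** 2 + (yy - cy) ** 2 <= radius ** 2] = foreground
    return tile

tile = random_dot_tile()
print(tile.shape)  # (256, 256)
```

Inverting `background` to 255 produces the complementary tile for the adjacent (white-background) calibration area.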
Step S3, a gray card picture is obtained, the pixel points in the gray card picture having corresponding gray card picture gray values. The gray card picture is shot to obtain a gray card image, the pixel points in which have gray card image gray values.
In this step, referring to fig. 8, the gray card picture includes 18 gray color blocks, each of which corresponds to one gray card picture area of the gray card picture. Among the 18 gray card picture areas, the minimum gray value is 0, the maximum gray value is 255, and the interval between adjacent gray values is 15.
Step S4, fitting the gray card picture gray values and the gray card image gray values to obtain a conversion relation between the picture gray value and the corrected picture gray value.
Specifically, the gray card picture gray value is used as the picture gray value, and the gray card image gray value is used as the corrected picture gray value. The gray card picture gray values and the gray card image gray values are linearly fitted to obtain the conversion relation between the picture gray value and the corrected picture gray value.
In one embodiment, the fitting curve obtained by fitting the gray scale card image gray scale values and the gray scale card image gray scale values is shown in fig. 9.
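The linear fit of step S4 can be sketched with a least-squares polynomial fit. The gray card image values below are simulated with a hypothetical gain and offset purely for illustration:

```python
import numpy as np

# Designed gray card picture values: 18 levels from 0 to 255 in steps of 15.
picture_gray = np.arange(0, 256, 15, dtype=np.float64)
# Hypothetical captured gray card image values (gain 0.92, offset 6.0),
# simulated here for illustration only.
card_image_gray = 0.92 * picture_gray + 6.0

# Linear fit: picture gray value -> corrected picture gray value.
slope, intercept = np.polyfit(picture_gray, card_image_gray, 1)

def convert(initial_picture_gray):
    """Apply the fitted conversion relation to an initial picture gray
    value to obtain the first picture gray value."""
    return slope * initial_picture_gray + intercept

print(len(picture_gray))         # 18
print(round(convert(100.0), 3))  # 98.0
```
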
Step S5, obtaining a first picture gray value of the preset calibration picture according to the conversion relation between the picture gray value and the corrected picture gray value and the initial picture gray value of the preset calibration picture.
In this step, the variables in the conversion relation include a picture gray value and a corrected picture gray value, and the initial picture gray value is substituted into the conversion relation as a parameter value corresponding to the picture gray value to obtain a first picture gray value corresponding to the initial picture gray value.
Step S6, shooting the preset calibration picture to obtain a calibration image.
The system for capturing a preset calibration picture is shown in fig. 11, and referring to fig. 11, the system includes a light source 1101a, a light source 1101b, a capturing device 1103, and a preset calibration picture 1102.
The light source 1101a and the light source 1101b generate planar light sources with uniform light beams, the light source 1101a and the light source 1101b are located at two sides of the shooting device 1103, the light emitting surfaces of the light source 1101a and the light source 1101b face the preset calibration picture 1102, and an included angle between the light emitting surface and the preset calibration picture 1102 meets a preset included angle range. The light beams projected to the preset calibration picture 1102 by the light source 1101a and the light source 1101b can uniformly illuminate the preset calibration picture. The preset included angle is in the range of 30 degrees to 60 degrees, for example.
The photographing device may be a mobile phone, a camera, or other devices with photographing means. The distance between the lens plane of the shooting device and the plane where the preset calibration picture is located is equal to the preset object distance to be calibrated. In the present embodiment, the object distance refers to a distance between the photographing apparatus and the photographing object.
When shooting is performed by using a shooting device, the distance between the shooting device and the shooting object is different, and the point spread function used for deblurring the shot image is also different. In the present application, a plurality of preset object distances may be set, and for different preset object distances, corresponding point spread functions are determined based on the methods in the foregoing embodiments, respectively.
When shooting the preset calibration picture, a tripod, a pan-tilt head, or another device can be used to fix the photographing equipment. After automatic or manual focusing, the exposure conditions are kept unchanged and shooting is performed continuously a plurality of times to obtain multi-frame calibration images reflecting the content of the preset calibration picture; average denoising processing is then performed on the multi-frame calibration images to obtain the processed calibration image.
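The burst capture and averaging described above can be sketched as follows; the frame count and noise level are illustrative assumptions:

```python
import numpy as np

def average_denoise(frames):
    """Average a burst of identically exposed frames; zero-mean sensor
    noise shrinks roughly as 1/sqrt(number of frames)."""
    return np.stack([np.asarray(f, dtype=np.float64) for f in frames]).mean(axis=0)

rng = np.random.default_rng(42)
truth = np.full((4, 4), 100.0)                                  # noise-free scene
frames = [truth + rng.normal(0.0, 5.0, truth.shape) for _ in range(25)]
denoised = average_denoise(frames)
# The averaged frame is much closer to the noise-free value than any
# single frame.
print(np.abs(denoised - truth).mean() < np.abs(frames[0] - truth).mean())
```
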
Step S7, turn off the light source and continuously shoot to obtain a plurality of first images; then, under the same illumination and exposure conditions as when shooting the preset calibration picture, continuously shoot the white picture to obtain a plurality of second images.
The white picture and the preset calibration picture are made of the same material.
In one embodiment, the number of frames of the first image captured is greater than 5 frames and the number of frames of the second image captured is also greater than 5 frames.
Step S8, obtaining an initial image gray value of the calibration image, a third image gray value of the first image and a fourth image gray value of the second image.
Wherein the third image gray value is close to 0, and the fourth image gray value is close to 255 when the number of image bits is 8.
Illustratively, a first image is taken in a non-light source scene and a white picture is taken to obtain a second image.
Step S9, correcting the initial image gray value of the calibration image according to the third image gray value of the first image and the fourth image gray value of the second image to obtain the second image gray value of the calibration image.
Illustratively, the second image gray value i(x, y) is derived according to the following formula:

i(x, y) = (2^N - 1) × (image(x, y) - black(x, y)) / (white(x, y) - black(x, y))

wherein image(x, y) is the initial image gray value of a pixel point in the image area, black(x, y) is the third image gray value of the first image, white(x, y) is the fourth image gray value of the second image, N is the number of image bits, and x and y represent the spatial location.
Illustratively, if the number of image bits N is equal to 8, the second image gray value i(x, y) is:

i(x, y) = 255 × (image(x, y) - black(x, y)) / (white(x, y) - black(x, y))

where image(x, y) is the initial image gray value of a pixel point in the image area, black(x, y) is the third image gray value of the first image, and white(x, y) is the fourth image gray value of the second image.
Step S10, determining picture corner points of calibration areas in a preset calibration picture and image corner points of image areas in a calibration image.
The calibration image is divided based on the image corner points to obtain a plurality of image areas, and the preset calibration picture is divided based on the picture corner points to obtain a plurality of calibration areas.
Taking the acquisition of the image areas as an example, the method of dividing the calibration image based on the image corner points to obtain a plurality of image areas is as follows: the plurality of image areas in the calibration image form a checkerboard structure, the contour lines of the image areas form the horizontal and vertical lines dividing the checkerboard structure, and the image corner points of the image areas correspond to the corner points of the checkerboard structure. When dividing the calibration image according to the image corner points, it must be ensured that the horizontal and vertical dividing lines of the checkerboard are not included in the image areas; for example, when segmenting an image area with a black background, the white background portions of adjacent image areas must not be included in the black image area, so as not to affect the isotropy of the two-dimensional Fourier transform power spectrum of the pattern in the image area. For example, the pattern in the image area is a random sphere pattern or a white noise pattern, and this arrangement preserves the isotropy of the two-dimensional Fourier transform power spectrum of the random sphere or white noise pattern.
Step S11, affine transformation of the electronic picture corresponding to the preset calibration picture to the image space of the calibration image is carried out according to the picture corner point and the image corner point, so as to obtain calibration areas corresponding to each image area respectively.
Specifically, the picture corner points in the preset calibration picture correspond one-to-one to the image corner points in the calibration image. Based on this correspondence, an affine transformation algorithm is used to affine transform the electronic picture corresponding to the preset calibration picture into the image space of the calibration image, thereby obtaining the calibration area corresponding to each image area.
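The affine transformation can be estimated from the matched picture and image corner points by least squares. The sketch below uses hypothetical corner coordinates and a generic estimator, not necessarily the specific algorithm of the embodiment:

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2D affine transform mapping src points to dst points.
    src, dst: (N, 2) arrays of matched picture / image corner coordinates."""
    src = np.asarray(src, dtype=np.float64)
    A = np.hstack([src, np.ones((len(src), 1))])  # rows [x, y, 1]
    M, *_ = np.linalg.lstsq(A, np.asarray(dst, dtype=np.float64), rcond=None)
    return M  # 3x2 matrix: [x', y'] = [x, y, 1] @ M

def apply_affine(M, pts):
    pts = np.asarray(pts, dtype=np.float64)
    return np.hstack([pts, np.ones((len(pts), 1))]) @ M

# Hypothetical correspondences: image corners related to picture corners
# by a scale of 2 and a translation of (10, 20).
picture_corners = np.array([[0, 0], [100, 0], [0, 100], [100, 100]])
image_corners = picture_corners * 2.0 + np.array([10.0, 20.0])
M = fit_affine(picture_corners, image_corners)
print(apply_affine(M, [[50, 50]]))  # ~ [[110. 120.]]
```
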
Step S12, obtaining a point spread function for each image area according to the second image gray value of the image area and the first image gray value of the corresponding calibration area.
Specifically, after the imaging system of the photographing device shoots the preset calibration picture to obtain the calibration image, for each image area in the calibration image, the relation between the first picture gray value o(x, y), the second image gray value i(x, y), and the point spread function p(x, y) is as follows:

i(x, y) = o(x, y) ∗ p(x, y) + n(x, y)

where ∗ denotes two-dimensional convolution and n(x, y) is the calibration image noise.
In this embodiment, for the imaging system of the photographing device, the spatially varying point spread function is convolved with the first picture gray value, and the spatially distributed image noise is then added to form the second image gray value. The first picture gray value is the gray value of a pixel point in the electronic picture used for generating the preset calibration picture, and the second image gray value is the gray value of a pixel point in the calibration image obtained by shooting the preset calibration picture; the preset calibration picture is equivalent to the original clear image, and the calibration image obtained by shooting is equivalent to the blurred image.
Step S13, performing deblurring processing on the image to be processed by using the point spread function to obtain a target image.
According to the relation shown in step S12, in the imaging system of the photographing device, the pixel gray values of the original clear image are convolved with the spatially varying point spread function, and the blurred image is obtained after adding the spatially distributed image noise. In this embodiment, the expression describes the entire imaging system of the photographing device, not just the lens; the entire imaging system may include the lens, an image sensor for collecting the picture light signal, an image processor for processing the light signal, and the like.
After the point spread function is determined, it is used for deblurring the image to be processed that is shot by the photographing device, so that a clear image corresponding to the image to be processed can be obtained. In the related art, an image may be shot through the lens of a photographing device, an image light signal acquired using an image sensor, and the response curve of the image sensor energy-integrated to obtain a point spread function; however, this method has difficulty acquiring the point spread function for a large field of view, can only acquire the point spread function of the lens, and cannot reflect the influence of the image sensor on the point spread function. In this embodiment, the point spread function is obtained for the whole imaging system of the photographing device, and deblurring the image to be processed based on this point spread function yields a high-quality target image.
Correspondingly, according to the convolution theorem, the relation between the first picture gray value, the second image gray value, and the point spread function in the frequency domain is as follows:
I(u,v)=O(u,v)P(u,v)+N(u,v)
wherein I(u, v), O(u, v), P(u, v), and N(u, v) are the frequency domain characteristics of the second image gray value, the first picture gray value, the point spread function, and the image noise, respectively.
After the average denoising processing of the multi-frame calibration images in step S6, the calibration image noise n(x, y) of the processed calibration image is small, and the frequency domain feature N(u, v) of the image noise is correspondingly small, so N(u, v) can be ignored. The second Fourier transform value of the first picture gray value is denoted O(u, v), and the first Fourier transform value of the second image gray value is denoted I(u, v). The Fourier expression of the point spread function in the frequency domain is then:

P(u, v) = I(u, v) / O(u, v)
Performing inverse Fourier transform on the Fourier expression of the point spread function yields the point spread function p(x, y):

p(x, y) = F^-1{ I(u, v) / O(u, v) }
Illustratively, the calibration area includes a background area and a pattern distributed on the background area, and the two-dimensional Fourier transform power spectrum of the second pixel points of the pattern meets the isotropy requirement, both in the calibration area and in the corresponding image area. Therefore, when Fourier transform is performed on the first picture gray value of the calibration area and the second image gray value of the image area, and inverse Fourier transform is performed on the transform result, the solving speed and solving accuracy are improved; that is, the isotropy of the pattern's power spectrum improves the data processing speed and the accuracy of the obtained point spread function.
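Under the assumption stated above that the noise term is negligible after frame averaging, the frequency-domain recovery of the point spread function can be sketched as follows; the regularization constant eps and the synthetic test data are assumptions for illustration:

```python
import numpy as np

def estimate_psf(clear, blurred, eps=1e-6):
    """Recover p(x, y) from i = o (convolved with) p by frequency-domain
    division P(u, v) = I(u, v) / O(u, v); a small eps guards near-zero
    frequencies of the clear pattern."""
    O = np.fft.fft2(clear)
    I = np.fft.fft2(blurred)
    return np.real(np.fft.ifft2(I / (O + eps)))

# Synthetic check: blur an isotropic white-noise "calibration region"
# with a known two-tap PSF, then recover it.
rng = np.random.default_rng(1)
clear = rng.random((32, 32))
psf = np.zeros((32, 32))
psf[0, 0], psf[0, 1] = 0.6, 0.4
blurred = np.real(np.fft.ifft2(np.fft.fft2(clear) * np.fft.fft2(psf)))
est = estimate_psf(clear, blurred)
print(np.allclose(est, psf, atol=1e-3))  # True
```

The isotropic white-noise pattern keeps |O(u, v)| well away from zero at all frequencies, which is why the division is numerically stable here.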
The embodiment of the application also provides a system for realizing the image data processing method, and the system comprises at least one light source, a preset calibration picture and electronic equipment.
The light-emitting surface of the light source faces the preset calibration picture, and the included angle between the light-emitting surface and the preset calibration picture meets the preset angle requirement so as to generate a light beam which is projected to the preset calibration picture according to the preset angle;
The shooting face of the electronic device faces the preset calibration picture. The electronic device is used for: shooting the preset calibration picture to obtain a calibration image reflecting the content of the preset calibration picture, the preset calibration picture comprising a plurality of rectangular calibration areas, pixel points with different gray values being distributed in each calibration area, and the pixel points in the calibration areas having corresponding first picture gray values; dividing the calibration image to obtain an image area for each calibration area, pixel points with different gray values being distributed in each image area, the pixel points in the image areas having corresponding second image gray values, and the image areas being rectangular and corresponding to the calibration areas one by one; and obtaining a point spread function for each image area according to the second image gray value and the first picture gray value, the point spread function being used for deblurring an image to be processed to obtain a processed target image.
In this embodiment, a calibration image of the preset calibration picture is obtained through the light source, the preset calibration picture, and the photographing device, and the point spread function is determined based on the preset calibration picture and the calibration image.
Referring to fig. 12, the image data processing apparatus 1200 may include:
the first obtaining module 1201 is configured to take a preset calibration picture, obtain a calibration image reflecting the content of the preset calibration picture, and the preset calibration picture includes a plurality of rectangular calibration areas, wherein each calibration area has pixel point distribution with different gray values, and the pixel points in the calibration areas have corresponding first picture gray values;
a second obtaining module 1202, configured to segment the calibration image to obtain image areas for each calibration area, where pixel point distribution of different gray values exists in each image area, and pixel points in the image areas have corresponding second image gray values; the image areas correspond to the calibration areas one by one;
the third obtaining module 1203 is configured to obtain a processed target image according to the second image gray value and the first image gray value.
Optionally, the calibrating area of the preset calibrating picture includes: a background area, and a pattern distributed in the background area; in a preset calibration picture, the colors of the background areas of adjacent calibration areas are different; the image data processing apparatus 1200 may further include:
the first determining module is used for determining a first pixel point of a background area in the calibration area, wherein the first pixel point has a corresponding first gray value;
the second determining module is used for determining second pixel points of the patterns in the calibration area, the second pixel points have corresponding second gray values, the two-dimensional Fourier transform power spectrum of the second pixel points meets isotropy requirements, and the second gray values are different from the first gray values;
the first generation module is used for generating an electronic picture of a preset calibration picture according to the first pixel point and the second pixel point, and printing the electronic picture to obtain the preset calibration picture.
Optionally, the image area is rectangular, and the image data processing apparatus 1200 may further include:
the fourth acquisition module is used for acquiring picture corner points of a calibration area in a preset calibration picture and image corner points of an image area in a calibration image;
and the fifth acquisition module is used for determining a calibration area corresponding to the image area according to the picture corner and the image corner.
Optionally, the image data processing apparatus 1200 may further include:
a fourth determining module, configured to determine an initial picture gray value of a pixel point in the calibration area;
a fifth determining module, configured to substitute the initial picture gray value into the conversion relation between the picture gray value and the corrected picture gray value, to obtain a target corrected picture gray value corresponding to the initial picture gray value;
and the sixth determining module is used for taking the target corrected picture gray value as a first picture gray value aiming at the calibration area.
Optionally, the image data processing apparatus 1200 may further include:
the sixth acquisition module is used for shooting a gray card picture to obtain a gray card image of the gray card picture, the gray card picture comprises a plurality of gray card picture areas with different gray values of the gray card picture, the gray card image comprises a plurality of gray card image areas with different gray values of the gray card image, and the gray card image areas are in one-to-one correspondence with the gray card picture areas;
the second generation module is used for taking the gray level card image gray level value as the image gray level value, taking the gray level card image gray level value as the corrected image gray level value, and fitting to obtain a conversion relation between the image gray level value and the corrected image gray level value.
Optionally, the third obtaining module may include:
the first acquisition submodule is used for carrying out Fourier transform on the gray value of the second image to obtain a first Fourier transform value of the gray value of the second image;
the second acquisition submodule is used for carrying out Fourier transform on the gray value of the first picture to obtain a second Fourier transform value of the gray value of the first picture;
the third acquisition submodule is used for obtaining a Fourier expression of the point spread function according to the first Fourier transform value and the second Fourier transform value;
a fourth obtaining submodule, configured to perform inverse fourier transform on a fourier expression of the point spread function to obtain the point spread function;
and the processing module is used for performing deblurring processing on the image to be processed based on the point spread function to obtain a processed target image.
Optionally, the image data processing apparatus 1200 may further include:
a seventh acquisition module, configured to acquire an initial image gray value of a pixel point in the image area;
an eighth obtaining module, configured to obtain a first image by shooting in a preset scene, where the first image has a corresponding third image gray value;
a ninth obtaining module, configured to take a first picture, obtain a second image of the first picture, where a pixel point in the second image has a corresponding fourth image gray value; the third image gray value and the fourth image gray value are different;
And the tenth acquisition module is used for obtaining a second image gray value of the pixel point in the image area according to the initial image gray value, the third image gray value and the fourth image gray value.
Optionally, the preset scene is a scene without a light source, and the first picture is a white picture; the tenth acquisition module includes:
a fifth obtaining sub-module, configured to obtain a first difference between the initial image gray value and the third image gray value, and a second difference between the fourth image gray value and the third image gray value;
a sixth obtaining submodule, configured to obtain a ratio between the first difference value and the second difference value;
the first determining submodule is used for determining a second image gray value of the pixel point in the image area according to the ratio.
Optionally, the image area is rectangular, and the second acquisition module includes:
a seventh obtaining sub-module, configured to obtain an image corner of the image area;
and the eighth determination submodule is used for dividing the calibration image according to the image corner points to obtain image areas corresponding to each calibration area respectively.
In this embodiment, the calibration image is obtained by shooting a preset calibration picture. The preset calibration picture does not exhibit pinhole diffraction, so the obtained calibration image can accurately reflect the content of the preset calibration picture. The calibration areas are rectangular, and rectangles have easily determined contour lines, so the image area corresponding to each calibration area can be determined accurately and rapidly when the calibration image is segmented. Pixel points with different gray values are distributed in each calibration area, the pixel points in the calibration areas having corresponding first picture gray values, and pixel points with different gray values are distributed in each image area, the pixel points in the image areas having corresponding second image gray values. The correspondence between the segmented image areas and the calibration areas is accurate, each image area in the calibration image reflects the content of its corresponding calibration area, and a high-quality target image can be obtained according to the second image gray values of the image areas and the first picture gray values of the calibration areas, solving the problem of low image quality of acquired images in the related art.
The image data processing device in the embodiment of the present application may be an electronic device, or may be a component in an electronic device, for example, an integrated circuit or a chip. The electronic device may be a terminal, or may be a device other than a terminal. By way of example, the electronic device may be a mobile phone, tablet computer, notebook computer, palm computer, vehicle-mounted electronic device, mobile internet device (Mobile Internet Device, MID), augmented reality (augmented reality, AR)/virtual reality (Virtual Reality, VR) device, robot, wearable device, ultra-mobile personal computer (UMPC), netbook or personal digital assistant (personal digital assistant, PDA), etc., and may also be a server, network attached storage (Network Attached Storage, NAS), personal computer (personal computer, PC), television (TV), teller machine or self-service machine, etc.; the embodiments of the present application are not particularly limited in this respect.
The image data processing apparatus in the embodiment of the present application may be an apparatus having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, which is not specifically limited in the embodiments of the present application.
The image data processing device provided in the embodiment of the present application can implement each process implemented by the embodiments of the methods of fig. 1 to 10, and in order to avoid repetition, a detailed description is omitted here.
Optionally, as shown in fig. 13, the embodiment of the present application further provides an electronic device 1300, including a processor 1301 and a memory 1302, where the memory 1302 stores a program or an instruction that can be executed on the processor 1301. The program or the instruction, when executed by the processor 1301, implements the steps of the image data processing method embodiment and can achieve the same technical effects; to avoid repetition, details are not repeated here.
The electronic device in the embodiment of the application includes mobile electronic devices and non-mobile electronic devices.
Fig. 14 is a schematic diagram of the hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 1400 includes, but is not limited to: radio frequency unit 1401, network module 1402, audio output unit 1403, input unit 1404, sensor 1405, display unit 1406, user input unit 1407, interface unit 1408, memory 1409, and processor 1410.
Those skilled in the art will appreciate that the electronic device 1400 may also include a power source (e.g., a battery) for powering the various components; the power source may be logically connected to the processor 1410 through a power management system, so that functions such as managing charging, discharging, and power consumption are performed through the power management system. The electronic device structure shown in fig. 14 does not constitute a limitation of the electronic device, and the electronic device may include more or fewer components than shown, combine certain components, or arrange the components differently, which is not described in detail herein.
The processor 1410 is configured to shoot a preset calibration picture to obtain a calibration image reflecting the content of the preset calibration picture, where the preset calibration picture includes a plurality of rectangular calibration areas, pixel points with different gray values are distributed in each calibration area, and the pixel points in the calibration areas have corresponding first picture gray values;
dividing the calibration image to obtain image areas for each calibration area respectively, wherein pixel point distribution with different gray values exists in each image area, and the pixel points in the image areas have corresponding second image gray values; the image areas correspond to the calibration areas one by one;
and obtaining the processed target image according to the second image gray value and the first image gray value.
In summary, in the embodiment of the application, a calibration image is obtained by shooting a preset calibration picture; because the preset calibration picture is not affected by pinhole diffraction, the obtained calibration image can accurately reflect the content of the preset calibration picture. The calibration areas and the image areas are rectangular, and the contour lines of a rectangle are easy to determine, so the image areas in the calibration image, and the calibration area corresponding to each image area, can be determined accurately and rapidly when the calibration image is segmented. Based on the second image gray value of the image area and the first picture gray value of the calibration area, an accurate point spread function can be obtained, which solves the problem of low accuracy of the point spread function obtained in the related technology. In addition, compared with determining a point spread function for the whole calibration image, this embodiment obtains the image areas by segmentation and obtains a point spread function for each image area, thereby improving the accuracy of the point spread function.
Optionally, the calibration area of the preset calibration picture includes: a background area, and a pattern distributed in the background area; in the preset calibration picture, the colors of the background areas of adjacent calibration areas are different. The processor 1410 is further configured to determine a first pixel point of the background area in the calibration area, where the first pixel point has a corresponding first gray value;
determining a second pixel point of the pattern in the calibration area, wherein the second pixel point has a corresponding second gray value, and the two-dimensional Fourier transform power spectrum of the second pixel point meets isotropy requirements; the second gray value is different from the first gray value;
and generating an electronic picture of a preset calibration picture according to the first pixel point and the second pixel point, and printing the electronic picture to obtain the preset calibration picture.
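As an illustrative sketch (not the patent's exact layout), the electronic picture described above can be generated by tiling rectangular calibration areas whose adjacent background gray values differ and overlaying each with a sparse random-dot pattern; a random pattern is a common choice because its two-dimensional Fourier power spectrum is approximately flat and therefore isotropic. All sizes, gray values, and the dot density below are assumptions for illustration:

```python
import numpy as np

def make_calibration_picture(rows=3, cols=3, cell=128, seed=0):
    """Illustrative electronic calibration picture: a grid of rectangular
    calibration areas whose adjacent background gray values differ, each
    overlaid with a sparse random-dot pattern (a random pattern has an
    approximately flat, isotropic 2D Fourier power spectrum)."""
    rng = np.random.default_rng(seed)
    pic = np.zeros((rows * cell, cols * cell), dtype=np.uint8)
    for r in range(rows):
        for c in range(cols):
            bg = 200 if (r + c) % 2 == 0 else 60   # adjacent backgrounds differ
            fg = 0 if bg == 200 else 255           # pattern gray differs from background
            block = np.full((cell, cell), bg, dtype=np.uint8)
            dots = rng.random((cell, cell)) < 0.05 # sparse random pattern pixels
            block[dots] = fg
            pic[r * cell:(r + 1) * cell, c * cell:(c + 1) * cell] = block
    return pic

picture = make_calibration_picture()
```

The resulting array would then be printed to obtain the physical preset calibration picture.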
Optionally, the image area is rectangular, and the processor 1410 is further configured to obtain, before dividing the calibration image to obtain rectangular image areas for each calibration area, a picture corner of the calibration area in the preset calibration picture, and an image corner of the image area in the calibration image; and determining a calibration area corresponding to the image area according to the picture corner and the image corner.
Optionally, the processor 1410 is further configured to determine an initial picture gray value of a pixel point of the calibration area before obtaining a point spread function for the image area according to the second image gray value and the first picture gray value;
Substituting the initial picture gray value into a conversion relation between the picture gray value and the corrected picture gray value to obtain a target corrected picture gray value corresponding to the initial picture gray value;
and taking the target corrected picture gray value as a first picture gray value aiming at the calibration area.
Optionally, the processor 1410 is further configured to, before substituting the initial picture gray value into the conversion relation between the picture gray value and the corrected picture gray value to obtain the target corrected picture gray value corresponding to the initial picture gray value, shoot a gray card picture to obtain a gray card image of the gray card picture, where the gray card picture includes a plurality of gray card picture areas with different gray card picture gray values, the gray card image includes a plurality of gray card image areas with different gray card image gray values, and the gray card image areas are in one-to-one correspondence with the gray card picture areas;
and taking the gray card picture gray value as the picture gray value and the gray card image gray value as the corrected picture gray value, and fitting to obtain the conversion relation between the picture gray value and the corrected picture gray value.
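This fitting can be sketched as follows, assuming hypothetical paired gray-card measurements and a low-order polynomial as the conversion relation (the patent does not fix a particular model):

```python
import numpy as np

# Hypothetical paired measurements: the gray value each gray-card patch has
# in the printed picture, and the gray value the camera records for it.
card_picture_gray = np.array([0, 32, 64, 96, 128, 160, 192, 224, 255])
card_image_gray = np.array([12, 40, 68, 97, 126, 155, 183, 210, 238])

# Fit a low-order polynomial as the conversion relation between the
# picture gray value and the corrected picture gray value.
coeffs = np.polyfit(card_picture_gray, card_image_gray, deg=2)
convert = np.poly1d(coeffs)

# Substituting an initial picture gray value into the conversion relation
# yields the target corrected picture gray value.
corrected = convert(128.0)
```

The corrected value accounts for the camera's tone response before the picture gray values are compared with the captured image.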
Optionally, the processor 1410 is further configured to perform fourier transform on the second image gray value to obtain a first fourier transform value of the second image gray value;
Performing Fourier transform on the first picture gray value to obtain a second Fourier transform value of the first picture gray value;
obtaining a Fourier expression of the point spread function according to the first Fourier transform value and the second Fourier transform value;
performing inverse Fourier transform on the Fourier expression of the point spread function to obtain the point spread function;
and performing deblurring treatment on the image to be treated based on the point spread function to obtain a treated target image.
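A minimal sketch of these Fourier steps, using a regularized spectral division for the point spread function and a simple Wiener filter for the deblurring step (the regularization term and the Wiener choice are assumptions, not specified here):

```python
import numpy as np

def estimate_psf(image_gray, picture_gray, eps=1e-3):
    """Estimate the point spread function of one region: the Fourier
    expression of the PSF is the ratio of the Fourier transform of the
    captured gray values to that of the known picture gray values
    (regularized by eps to avoid dividing by near-zero terms)."""
    F_image = np.fft.fft2(image_gray)       # first Fourier transform value
    F_picture = np.fft.fft2(picture_gray)   # second Fourier transform value
    H = F_image * np.conj(F_picture) / (np.abs(F_picture) ** 2 + eps)
    psf = np.real(np.fft.ifft2(H))          # inverse transform -> PSF
    return np.fft.fftshift(psf)             # center the PSF peak

def wiener_deblur(blurred, psf, k=1e-2):
    """Deblur an image with the estimated PSF via a simple Wiener filter
    (one common deconvolution choice among several)."""
    H = np.fft.fft2(np.fft.ifftshift(psf), s=blurred.shape)
    G = np.conj(H) / (np.abs(H) ** 2 + k)
    return np.real(np.fft.ifft2(np.fft.fft2(blurred) * G))
```

In use, a PSF would be estimated per image area and applied to the corresponding region of the image to be processed.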
Optionally, the processor 1410 is further configured to obtain an initial image gray value of a pixel point in the image area before obtaining a point spread function for the image area according to the second image gray value and the first picture gray value;
shooting under a preset scene to obtain a first image, wherein the first image has a corresponding third image gray value;
shooting a first picture to obtain a second image of the first picture, wherein pixel points in the second image have corresponding fourth image gray values;
the third image gray value and the fourth image gray value are different;
and obtaining a second image gray value of the pixel point in the image area according to the initial image gray value, the third image gray value and the fourth image gray value.
Optionally, the preset scene is a scene without a light source, and the first picture is a white picture; the processor 1410 is further configured to obtain a first difference between the initial image gray value and the third image gray value, and a second difference between the fourth image gray value and the third image gray value;
acquiring a ratio between the first difference value and the second difference value;
and determining a second image gray value of the pixel point in the image area according to the ratio.
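The ratio computation above amounts to a standard flat-field correction; a minimal sketch, assuming the gray values are given as arrays over one image area:

```python
import numpy as np

def normalize_region(initial, dark, white):
    """Radiometric normalization of one image area: subtract the no-light
    (dark) gray value and divide by the white-picture response, so the
    second image gray value is independent of sensor offset and uneven
    illumination."""
    first_diff = initial.astype(float) - dark   # initial minus third image gray value
    second_diff = white.astype(float) - dark    # fourth minus third image gray value
    return first_diff / second_diff             # ratio -> second image gray value
```

The result is 0 where the area is as dark as the no-light frame and 1 where it matches the white-picture frame.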
Optionally, the image area is rectangular, and the processor 1410 is further configured to obtain an image corner of the image area;
dividing the calibration image according to the image corner points to obtain image areas corresponding to each calibration area respectively.
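Segmentation by image corner points might be sketched as below, assuming each image area is described by its top-left and bottom-right pixel corners (a real pipeline would obtain these from a corner detector such as OpenCV's goodFeaturesToTrack or a checkerboard finder):

```python
import numpy as np

def segment_by_corners(calibration_image, corner_boxes):
    """Split the calibration image into rectangular image areas using
    detected image corner points. Each entry of corner_boxes is the
    (top, left, bottom, right) pixel corners of one area -- an assumed
    representation for this sketch."""
    return [calibration_image[t:b, l:r] for (t, l, b, r) in corner_boxes]
```

Each returned crop is one image area, in one-to-one correspondence with a calibration area of the preset calibration picture.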
In this embodiment, the calibration image is obtained by shooting a preset calibration picture; because the preset calibration picture is not affected by pinhole diffraction, the obtained calibration image can accurately reflect the content of the preset calibration picture. The calibration areas are rectangular, and the contour lines of a rectangle are easy to determine, so the image area corresponding to each calibration area can be determined accurately and rapidly when the calibration image is segmented. Pixel points with different gray values are distributed in each calibration area, and the pixel points in the calibration area have corresponding first picture gray values; pixel points with different gray values are likewise distributed in each image area, and the pixel points in the image area have corresponding second image gray values. The correspondence between the image areas obtained by segmentation and the calibration areas is accurate, and each image area in the calibration image can reflect the content of its corresponding calibration area, so a high-quality target image can be obtained according to the second image gray values of the image areas and the first picture gray values of the calibration areas, which solves the problem of low image quality of acquired images in the related technology.
It should be appreciated that in embodiments of the present application, the input unit 1404 may include a graphics processor (Graphics Processing Unit, GPU) 14041 and a microphone 14042, with the graphics processor 14041 processing image data of still pictures or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The display unit 1406 may include a display panel 14061, and the display panel 14061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The user input unit 1407 includes at least one of a touch panel 14071 and other input devices 14072. The touch panel 14071 is also referred to as a touch screen. The touch panel 14071 may include two parts, a touch detection device and a touch controller. Other input devices 14072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and so forth, which are not described in detail herein.
The memory 1409 may be used to store software programs as well as various data. The memory 1409 may mainly include a first memory area storing programs or instructions and a second memory area storing data, where the first memory area may store an operating system, and application programs or instructions (such as a sound playing function and an image playing function) required for at least one function. Further, the memory 1409 may include volatile memory or nonvolatile memory, or the memory 1409 may include both volatile and nonvolatile memory. The nonvolatile memory may be a read-only memory (Read-Only Memory, ROM), a programmable ROM (Programmable ROM, PROM), an erasable PROM (Erasable PROM, EPROM), an electrically erasable PROM (Electrically EPROM, EEPROM), or a flash memory. The volatile memory may be a random access memory (Random Access Memory, RAM), a static RAM (Static RAM, SRAM), a dynamic RAM (Dynamic RAM, DRAM), a synchronous DRAM (Synchronous DRAM, SDRAM), a double data rate SDRAM (Double Data Rate SDRAM, DDR SDRAM), an enhanced SDRAM (Enhanced SDRAM, ESDRAM), a synchlink DRAM (Synchlink DRAM, SLDRAM), or a direct rambus RAM (Direct Rambus RAM, DRRAM). The memory 1409 in the embodiments of the present application includes, but is not limited to, these and any other suitable types of memory.
Processor 1410 may include one or more processing units; optionally, the processor 1410 integrates an application processor that primarily processes operations involving an operating system, user interface, application programs, etc., and a modem processor that primarily processes wireless communication signals, such as a baseband processor. It will be appreciated that the modem processor described above may not be integrated into the processor 1410.
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored, where the program or the instruction realizes each process of the embodiment of the image data processing method when executed by a processor, and the same technical effects can be achieved, so that repetition is avoided, and no redundant description is given here.
The processor is the processor in the electronic device in the above embodiment. The readable storage medium includes a computer readable storage medium, such as a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The embodiment of the application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled with the processor, and the processor is configured to run a program or instructions to implement each process of the image data processing method embodiment and achieve the same technical effects; to avoid repetition, details are not repeated here.
It should be understood that the chip referred to in the embodiments of the present application may also be referred to as a system-on-chip, a system chip, a chip system, or a system-on-a-chip.
The embodiments of the present application provide a computer program product stored in a storage medium, where the program product is executed by at least one processor to implement the respective processes of the embodiments of the image data processing method, and achieve the same technical effects, and are not described herein in detail for avoiding repetition.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but also other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element. Furthermore, it should be noted that the scope of the methods and apparatus in the embodiments of the present application is not limited to performing the functions in the order shown or discussed; depending on the functions involved, the functions may also be performed in a substantially simultaneous manner or in the reverse order. For example, the described methods may be performed in an order different from that described, and various steps may also be added, omitted, or combined. Additionally, features described with reference to certain examples may be combined in other examples.
From the above description of the embodiments, it will be clear to those skilled in the art that the methods of the above embodiments may be implemented by means of software plus a necessary general hardware platform, or of course by hardware alone, although in many cases the former is the preferred implementation. Based on such understanding, the technical solutions of the present application, in essence or in the part contributing to the prior art, may be embodied in the form of a computer software product stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disk), including several instructions for causing a terminal (which may be a mobile phone, a computer, a server, a network device, or the like) to perform the methods of the embodiments of the present application.
The embodiments of the present application have been described above with reference to the accompanying drawings, but the present application is not limited to the above-described embodiments, which are merely illustrative and not restrictive; many forms may be made by those of ordinary skill in the art without departing from the spirit of the present application and the scope of the claims, and these forms all fall within the protection of the present application.

Claims (20)

1. An image data processing method, comprising:
shooting a preset calibration picture to obtain a calibration image reflecting the content of the preset calibration picture, wherein the preset calibration picture comprises a plurality of rectangular calibration areas, pixel points with different gray values exist in each calibration area, and the pixel points in the calibration areas have corresponding first picture gray values;
dividing the calibration image to obtain image areas respectively aiming at each calibration area, wherein pixel point distribution with different gray values exists in each image area, and the pixel points in the image areas have corresponding second image gray values; the image areas and the calibration areas are in one-to-one correspondence;
and obtaining the processed target image according to the second image gray value and the first image gray value.
2. The method of claim 1, wherein the calibration area of the preset calibration picture comprises: a background region, and a pattern distributed in the background region; in the preset calibration picture, the colors of the background areas of the adjacent calibration areas are different;
before shooting a preset calibration picture to obtain a calibration image reflecting the content of the preset calibration picture, the method further comprises the following steps:
Determining a first pixel point of a background area in the calibration area, wherein the first pixel point has a corresponding first gray value;
determining a second pixel point of the pattern in the calibration area, wherein the second pixel point has a corresponding second gray value, the two-dimensional Fourier transform power spectrum of the second pixel point meets isotropy requirement, and the second gray value is different from the first gray value;
and generating an electronic picture of the preset calibration picture according to the first pixel point and the second pixel point, and printing the electronic picture to obtain the preset calibration picture.
3. The method according to claim 1, wherein the image areas are rectangular, comprising, before segmenting the calibration image to obtain image areas for each calibration area, respectively:
acquiring picture corner points of a calibration area in the preset calibration picture and image corner points of an image area in the calibration image;
and determining a calibration area corresponding to the image area according to the picture corner and the image corner.
4. The method of claim 1, further comprising, prior to deriving a processed target image from the second image gray value and the first picture gray value:
Determining an initial picture gray value of a pixel point of the calibration area;
substituting the initial picture gray value into a conversion relation between the picture gray value and the corrected picture gray value to obtain a target corrected picture gray value corresponding to the initial picture gray value;
and taking the target corrected picture gray value as a first picture gray value aiming at the calibration area.
5. The method of claim 4, wherein prior to substituting the initial picture gray value into a conversion relation between a picture gray value and a corrected picture gray value to obtain a target corrected picture gray value corresponding to the initial picture gray value, further comprising:
taking a gray card picture to obtain a gray card image of the gray card picture, wherein the gray card picture comprises a plurality of gray card picture areas with different gray values of the gray card picture, the gray card image comprises a plurality of gray card image areas with different gray values of the gray card image, and the gray card image areas and the gray card picture areas are in one-to-one correspondence;
and taking the gray card picture gray value as the picture gray value and the gray card image gray value as the corrected picture gray value, and fitting to obtain a conversion relation between the picture gray value and the corrected picture gray value.
6. The method according to claim 1, wherein the obtaining the processed target image according to the second image gray value and the first picture gray value includes:
performing Fourier transform on the second image gray value to obtain a first Fourier transform value of the second image gray value;
performing Fourier transform on the first picture gray value to obtain a second Fourier transform value of the first picture gray value;
obtaining a Fourier expression of a point spread function according to the first Fourier transform value and the second Fourier transform value;
performing inverse Fourier transform on the Fourier expression of the point spread function to obtain the point spread function;
and performing deblurring treatment on the image to be treated based on the point spread function to obtain a treated target image.
7. The method of claim 1, comprising, prior to deriving a processed target image from the second image gray value and the first picture gray value:
acquiring an initial image gray value of a pixel point in the image area;
shooting under a preset scene to obtain a first image, wherein the first image has a corresponding third image gray value;
Shooting a first picture to obtain a second image of the first picture, wherein pixel points in the second image have corresponding fourth image gray values; the third image gray value and the fourth image gray value are different;
and obtaining a second image gray value of the pixel point in the image area according to the initial image gray value, the third image gray value and the fourth image gray value.
8. The method of claim 7, wherein the preset scene is a scene without a light source and the first picture is a white picture;
the obtaining a second image gray value of the pixel point in the image area according to the initial image gray value, the third image gray value and the fourth image gray value includes:
acquiring a first difference value between the initial image gray value and the third image gray value and a second difference value between the fourth image gray value and the third image gray value;
acquiring a ratio between the first difference value and the second difference value;
and determining a second image gray value of the pixel point in the image area according to the ratio.
9. The method according to claim 1, wherein the image area is rectangular, and the segmenting the calibration image to obtain the image area for each calibration area respectively comprises:
Acquiring image corner points of the image area;
dividing the calibration image according to the image corner points to obtain image areas corresponding to each calibration area respectively.
10. An image data processing apparatus, characterized in that the apparatus comprises:
the first acquisition module is used for shooting a preset calibration picture to obtain a calibration image reflecting the content of the preset calibration picture, where the preset calibration picture includes a plurality of rectangular calibration areas, pixel points with different gray values are distributed in each calibration area, and the pixel points in the calibration areas have corresponding first picture gray values;
the second acquisition module is used for dividing the calibration image to obtain image areas respectively aiming at each calibration area, pixel point distribution with different gray values exists in each image area, and the pixel points in the image areas have corresponding second image gray values; the image areas and the calibration areas are in one-to-one correspondence;
and the third acquisition module is used for acquiring the processed target image according to the second image gray value and the first image gray value.
11. The apparatus of claim 10, wherein the calibration area of the preset calibration picture comprises: a background region, and a pattern distributed in the background region; in the preset calibration picture, the colors of the background areas of the adjacent calibration areas are different; the apparatus further comprises:
The first determining module is used for determining a first pixel point of a background area in the calibration area, and the first pixel point is provided with a corresponding first gray value;
the second determining module is used for determining a second pixel point of the pattern in the calibration area, the second pixel point is provided with a corresponding second gray value, the two-dimensional Fourier transform power spectrum of the second pixel point meets isotropy requirements, and the second gray value is different from the first gray value;
the first generation module is used for generating an electronic picture of the preset calibration picture according to the first pixel point and the second pixel point, and printing the electronic picture to obtain the preset calibration picture.
12. The apparatus of claim 10, wherein the image area is rectangular, the apparatus further comprising:
a fourth obtaining module, configured to obtain a picture corner of a calibration area in the preset calibration picture and an image corner of an image area in the calibration image;
and a fifth acquisition module, configured to determine a calibration area corresponding to the image area according to the picture corner and the image corner.
13. The apparatus of claim 10, wherein the apparatus further comprises:
A fourth determining module, configured to determine an initial image gray value of a pixel point of the calibration area;
a fifth determining module, configured to substitute the initial picture gray value into a conversion relation between a picture gray value and a corrected picture gray value, to obtain a target corrected picture gray value corresponding to the initial picture gray value;
and a sixth determining module, configured to take the target corrected picture gray value as a first picture gray value for the calibration area.
14. The apparatus of claim 13, wherein the apparatus further comprises:
a sixth obtaining module, configured to shoot a gray card picture to obtain a gray card image of the gray card picture, where the gray card picture includes a plurality of gray card picture areas with different gray card picture gray values, the gray card image includes a plurality of gray card image areas with different gray card image gray values, and the gray card image areas are in one-to-one correspondence with the gray card picture areas;
the second generation module is used for taking the gray card picture gray value as the picture gray value and the gray card image gray value as the corrected picture gray value, and fitting to obtain a conversion relation between the picture gray value and the corrected picture gray value.
15. The apparatus of claim 10, wherein the third acquisition module comprises:
the first acquisition submodule is used for carrying out Fourier transform on the second image gray value to obtain a first Fourier transform value of the second image gray value;
the second acquisition submodule is used for carrying out Fourier transform on the first picture gray value to obtain a second Fourier transform value of the first picture gray value;
a third obtaining submodule, configured to obtain a fourier expression of a point spread function according to the first fourier transform value and the second fourier transform value;
a fourth obtaining submodule, configured to perform inverse fourier transform on the fourier expression of the point spread function, to obtain the point spread function;
and the processing module is used for performing deblurring processing on the image to be processed based on the point spread function to obtain a processed target image.
16. The apparatus of claim 10, wherein the apparatus further comprises:
a seventh obtaining module, configured to obtain an initial image gray value of a pixel point in the image area;
an eighth obtaining module, configured to obtain a first image by shooting in a preset scene, where the first image has a corresponding third image gray value;
A ninth obtaining module, configured to take a first picture, obtain a second image of the first picture, where a pixel point in the second image has a corresponding fourth image gray value; the third image gray value and the fourth image gray value are different;
and a tenth acquisition module, configured to obtain a second image gray value of the pixel point in the image area according to the initial image gray value, the third image gray value, and the fourth image gray value.
17. The apparatus of claim 16, wherein the preset scene is a scene without a light source and the first picture is a white picture; the tenth acquisition module includes:
a fifth obtaining sub-module, configured to obtain a first difference between the initial image gray value and the third image gray value, and a second difference between the fourth image gray value and the third image gray value;
a sixth obtaining submodule, configured to obtain a ratio between the first difference value and the second difference value;
and the first determining submodule is used for determining a second image gray value of the pixel point in the image area according to the ratio.
18. The apparatus of claim 10, wherein the image area is rectangular, the second acquisition module comprising:
A seventh obtaining sub-module, configured to obtain an image corner of the image area;
and the eighth determining submodule is used for dividing the calibration image according to the image corner points to obtain image areas corresponding to each calibration area respectively.
19. An electronic device comprising a processor and a memory storing a program or instructions executable on the processor, which when executed by the processor, implement the steps of the image data processing method of any one of claims 1 to 9.
20. A readable storage medium, characterized in that the readable storage medium has stored thereon a program or instructions which, when executed by a processor, implement the steps of the image data processing method according to any of claims 1 to 9.
CN202311581285.8A 2023-11-23 2023-11-23 Image data processing method, device, electronic equipment and readable storage medium Pending CN117611488A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311581285.8A CN117611488A (en) 2023-11-23 2023-11-23 Image data processing method, device, electronic equipment and readable storage medium


Publications (1)

Publication Number Publication Date
CN117611488A true CN117611488A (en) 2024-02-27

Family

ID=89947513

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311581285.8A Pending CN117611488A (en) 2023-11-23 2023-11-23 Image data processing method, device, electronic equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN117611488A (en)

Similar Documents

Publication Publication Date Title
CN113365041B (en) Projection correction method, projection correction device, storage medium and electronic equipment
CN111311523B (en) Image processing method, device and system and electronic equipment
CN109587556B (en) Video processing method, video playing method, device, equipment and storage medium
CN112258579B (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN111563552B (en) Image fusion method, related device and apparatus
WO2020010945A1 (en) Image processing method and apparatus, electronic device and computer-readable storage medium
US8699820B2 (en) Image processing apparatus, camera apparatus, image processing method, and program
Kordecki et al. Practical vignetting correction method for digital camera with measurement of surface luminance distribution
WO2023134103A1 (en) Image fusion method, device, and storage medium
CN111598777A (en) Sky cloud image processing method, computer device and readable storage medium
US10621752B2 (en) Methods and systems for camera calibration
CN113963072B (en) Binocular camera calibration method and device, computer equipment and storage medium
CN114615480A (en) Projection picture adjusting method, projection picture adjusting device, projection picture adjusting apparatus, storage medium, and program product
CN114155285B (en) Image registration method based on gray histogram
CN115100037A (en) Large-breadth tile imaging method and system based on multi-line scanning camera image splicing
CN116051652A (en) Parameter calibration method, electronic equipment and storage medium
CN112037128A (en) Panoramic video splicing method
CN117611488A (en) Image data processing method, device, electronic equipment and readable storage medium
CN113592753B (en) Method and device for processing image shot by industrial camera and computer equipment
CN111428707B (en) Method and device for identifying pattern identification code, storage medium and electronic equipment
CN106817542B (en) imaging method and imaging device of microlens array
Zhan et al. PSF estimation method of simple-lens camera using normal sinh-arcsinh model based on noise image pairs
KR102598910B1 (en) Method, system, and device for detecting an object in a distored image
CN113902644A (en) Image processing method, device, equipment and storage medium
CN113873223B (en) Method, device, equipment and storage medium for determining definition of camera

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination