CN111510691B - Color interpolation method and device, equipment and storage medium - Google Patents

Color interpolation method and device, equipment and storage medium

Info

Publication number
CN111510691B
Authority
CN
China
Prior art keywords
image
frames
determining
images
frame image
Prior art date
Legal status
Active
Application number
CN202010305516.2A
Other languages
Chinese (zh)
Other versions
CN111510691A (en)
Inventor
贾玉虎
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN202010305516.2A
Publication of CN111510691A
Application granted
Publication of CN111510691B

Classifications

    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04N — PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 — Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/80 — Camera processing pipelines; Components thereof
    • H04N 23/84 — Camera processing pipelines; Components thereof for processing colour signals
    • H04N 23/843 — Demosaicing, e.g. interpolating colour pixel values
    • H04N 23/81 — Camera processing pipelines; Components thereof for suppressing or minimising disturbance in the image signal generation
    • H04N 25/00 — Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N 25/10 — Circuitry of solid-state image sensors for transforming different wavelengths into image signals
    • H04N 25/11 — Arrangement of colour filter arrays [CFA]; Filter mosaics
    • H04N 25/13 — Arrangement of colour filter arrays characterised by the spectral characteristics of the filter elements
    • H04N 25/134 — Arrangement of colour filter arrays based on three different wavelength filter elements

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • Spectroscopy & Molecular Physics (AREA)
  • Image Processing (AREA)
  • Color Television Image Signal Generators (AREA)

Abstract

The embodiments of the present application disclose a color interpolation method comprising the following steps: acquiring at least two frames of original images to be processed; converting each frame of the at least two frames of original images into a corresponding grayscale image to obtain at least two frames of grayscale images; determining a reference frame image from the at least two frames of grayscale images; and performing weighted interpolation on the at least two aligned frames of original images according to the gradient features of the reference frame image to obtain a target image. The embodiments of the present application also provide a color interpolation apparatus, a device, and a storage medium.

Description

Color interpolation method and device, equipment and storage medium
Technical Field
The present application relates to the field of image processing technology, and relates to, but is not limited to, a color interpolation method and apparatus, a device, and a storage medium.
Background
In traditional demosaicing methods, the edge direction judged from gradients computed on a Color Filter Array (CFA) image, in which information is missing, is not accurate enough, so the resulting edge zipper effect and false-color phenomenon are difficult to resolve; meanwhile, the details in the restored image are not realistic enough and tend to be over-fitted.
Disclosure of Invention
In view of this, embodiments of the present application provide a color interpolation method, apparatus, device, and storage medium to solve at least one problem in the prior art: using multi-frame images makes edge judgment more accurate, which helps resolve the edge zipper effect and the false-color phenomenon, and the additional information provided by the multi-frame images restores more realistic details when recovering the demosaiced image.
The technical scheme of the embodiment of the application is realized as follows:
in a first aspect, an embodiment of the present application provides a color interpolation method, where the method includes:
acquiring at least two frames of original images to be processed;
converting each frame of original image of the at least two frames of original images into corresponding gray level images to obtain at least two frames of gray level images;
determining a reference frame image from the at least two frames of gray level images;
and performing weighted interpolation on at least two frames of aligned original images according to the gradient characteristics of the reference frame images to obtain a target image.
In a second aspect, an embodiment of the present application provides a color interpolation apparatus, which includes an obtaining module, a converting module, a first determining module, and an interpolating module, where:
the acquisition module is used for acquiring at least two frames of original images to be processed;
the conversion module is used for converting each frame of original image of the at least two frames of original images into a corresponding gray image to obtain at least two frames of gray images;
the first determining module is used for determining a reference frame image from the at least two frames of gray level images;
and the interpolation module is used for performing weighted interpolation on at least two aligned original frames of images according to the gradient characteristics of the reference frame of images to obtain a target image.
In a third aspect, an embodiment of the present application provides a color interpolation apparatus, including a memory and a processor, where the memory stores a computer program operable on the processor, and the processor implements the steps in the color interpolation method when executing the program.
In a fourth aspect, the present application provides a computer-readable storage medium, on which a computer program is stored, and the computer program, when executed by a processor, implements the steps in the color interpolation method described above.
The technical solutions provided by the embodiments of the present application have at least the following beneficial effects:
in the embodiments of the present application, multiple frames of original images are converted into corresponding grayscale images, a reference frame image is determined from the multi-frame grayscale images, and weighted interpolation is performed on the multi-frame original images using the gradient features of the reference frame image. Edges are thus judged more accurately, the edge zipper effect and the false-color phenomenon are avoided, and the extra information provided by the multiple frames restores more realistic details when the original image is recovered.
Drawings
In order to illustrate the technical solutions in the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention, and other drawings can be obtained from them by those skilled in the art without inventive effort. In the drawings:
fig. 1 is a schematic diagram of a sampling of a bayer pattern pixel provided in an embodiment of the present application;
FIG. 2 is a schematic diagram of an edge zipper effect and pseudo-color phenomenon provided by an embodiment of the present application;
FIG. 3 is a schematic diagram of multi-frame sub-pixel shift provided by an embodiment of the present application;
fig. 4 is a schematic diagram illustrating a comparison between a single CFA image and a multi-frame CFA image according to an embodiment of the present disclosure;
FIG. 5 is a schematic flow chart of an alternative color interpolation method provided in the embodiments of the present application;
FIG. 6 is a schematic flow chart of an alternative color interpolation method provided in the embodiments of the present application;
FIG. 7 is a schematic flow chart of an alternative color interpolation method provided in the embodiments of the present application;
FIG. 8 is a schematic flow chart of an alternative color interpolation method provided in the embodiments of the present application;
fig. 9A is a data flow diagram of a multi-frame CFA image demosaicing method according to an embodiment of the present disclosure;
FIG. 9B is a logic flow diagram of a multi-frame CFA image demosaicing method according to an embodiment of the present disclosure;
FIG. 10 is a schematic diagram illustrating a flow of estimating a homography matrix according to an embodiment of the present application;
FIG. 11 is a Sobel operator for calculating local gradients Ix and Iy according to an embodiment of the present disclosure;
fig. 12 shows different image structures such as edges, corners, smooth regions, etc. provided in the embodiment of the present application;
fig. 13 is a schematic structural diagram of a color interpolation apparatus according to an embodiment of the present application;
fig. 14 is a hardware entity diagram of a color interpolation device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. The following examples are intended to illustrate the present application but are not intended to limit the scope of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Before describing the color interpolation method provided by the embodiment of the present application in detail, the terms and related technologies related to the embodiment of the present application will be briefly described.
The sensing principle of a sensor is to sample and quantize light at each photosite, but each photosite in the sensor can only sense one of the colors red, green, and blue (RGB). Therefore, sensors commonly described as having 300,000 or 1,300,000 pixels actually have 300,000 or 1,300,000 photosites, each sensing only one color. However, to restore a true image, each point needs all three RGB colors.
Single-sensor color imaging is widely used in the digital camera industry. In a single-sensor digital camera, the sensor surface is covered with a CFA. Many typical color filter arrays exist, such as the red, green, blue, emerald filter (RGBE Filter), the cyan, yellow, green, magenta filter (CYGM Filter), and the Bayer filter (Bayer Filter), among which the Bayer filter is the most commonly used in the industry.
A Bayer filter is a mosaic color filter array formed by arranging red, green, and blue (RGB) color filters on a square grid of photo-sensing elements. Most single-chip digital image sensors used in digital cameras, camcorders, and scanners use color filter arrays with this specific arrangement to produce color images. This filter arrangement is also called RGBG, GRGB, or RGGB, because 50% of the filters are green, 25% are red, and the other 25% are blue.
Fig. 1 is a schematic diagram of sampling pixels in the Bayer format according to an embodiment of the present application. The Bayer format refers to the raw picture inside a camera, generally stored with the suffix .raw. As shown in fig. 1, the raw data are arranged in the RGGB pattern; each pixel captures only part of the spectrum, so the RGB value of each pixel must be obtained by interpolation. That is, each pixel samples only one of the three components (red, green, or blue); to recover the full-color image, the other two missing color components must be estimated, and this estimation process is called demosaicing.
To date, many demosaicing algorithms have been proposed. Some simple approaches interpolate the pixel values of adjacent pixels of the same color. For example, after the chip is exposed and the image obtained, each pixel can be read out. A pixel under a green filter measures the green component accurately, while its red and blue components are obtained from the neighborhood: the red component of a green pixel can be calculated by interpolating the two adjacent red pixels, and similarly the blue component can be calculated by interpolating the two adjacent blue pixels.
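As an illustration of this neighbour-averaging idea, the sketch below (Python with NumPy/SciPy; an illustrative toy, not the method claimed in this application) fills each colour plane of an RGGB mosaic by convolving it with a small averaging kernel, which reproduces exactly the two-neighbour and four-neighbour averages just described:

```python
import numpy as np
from scipy.ndimage import convolve

def bilinear_demosaic(cfa):
    """Toy bilinear demosaic of an RGGB Bayer image.

    Each colour plane is sparsely populated from the CFA samples and the
    holes are filled by averaging the available neighbours: two neighbours
    for red/blue at green sites, four at the opposite chroma site."""
    h, w = cfa.shape
    r_mask = np.zeros((h, w)); r_mask[0::2, 0::2] = 1   # assumes R at top-left
    b_mask = np.zeros((h, w)); b_mask[1::2, 1::2] = 1
    g_mask = 1 - r_mask - b_mask

    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0  # red/blue filler
    k_g  = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0  # green filler

    r = convolve(cfa * r_mask, k_rb)
    g = convolve(cfa * g_mask, k_g)
    b = convolve(cfa * b_mask, k_rb)
    return np.stack([r, g, b], axis=-1)
```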
This simple method works well with constant or uniform color changes, but produces noise, such as color bleeding, at abrupt changes in color and brightness, particularly at sharp corners. Therefore, other demosaicing methods attempt to identify high contrast edges and then only interpolate along these edges, without crossing the edges.
Other algorithms assume that the color of a region in an image is relatively constant even under varying illumination, so that the color channels are highly correlated. The green component is therefore interpolated first, then the red, then the blue, so that the red-to-green and blue-to-green color ratios remain constant. Still other methods make different assumptions about the image content and attempt to calculate the values of the missing color components accordingly.
The demosaicing process is an image interpolation process. Existing simple interpolation methods such as bilinear and bicubic interpolation work well in smooth image areas, but they are equivalent to isotropic low-pass filtering, which reduces image resolution and makes it difficult to recover the details of the original image.
Hamilton et al. proposed an adaptive color plane interpolation algorithm that detects the directions of local horizontal and vertical edges using second derivatives and then interpolates along those edge directions, improving on simple methods such as bilinear interpolation. The algorithm exploits the correlation between color channels, using the second-order gradients of the red and blue components as correction factors when calculating the green component; after the green component is recovered, its second-order gradient is used in turn as a correction factor to recover the red and blue components.
Because the color channels are correlated, if a region of the green channel contains an edge, the probability that the red and blue channels contain an edge in the same region is extremely high; the difference image between two such channels is smoother, and interpolation on it works better. Based on this observation, Pekkucuksen proposed a gradient-based threshold-free interpolation algorithm: after the green plane is interpolated directionally with Hamilton's interpolation formula, the color differences in the horizontal and vertical directions are calculated and blended from the up, down, left, and right directions to obtain the final color-difference estimate, which is added to the red or blue component of the CFA to obtain the interpolation result.
A related technique proposes a joint CFA image demosaicing and denoising method based on a generative adversarial network (GAN), which aims to reduce artifacts in the restored image and lower the degree of image distortion.
In another CFA image demosaicing method based on directionally weighted interpolation, weighted interpolation is performed according to the local gradients of the CFA image, which effectively preserves strong contrast after edge interpolation and can reduce zipper artifacts and false-color artifacts along abrupt color changes. However, a single-frame CFA image can hardly distinguish noise from detail, so detail expressiveness is sacrificed to reduce the noise of the output demosaiced image.
In summary, existing demosaicing techniques are basically traditional single-frame methods. On the one hand, although interpolation-based methods (including directional interpolation, gradient-guided interpolation, etc.) can mitigate strong boundary blurring to some extent, they rely on a single-frame CFA image with missing information to compute gradients and judge edge direction, which is not accurate enough, so the resulting edge zipper effect (zipper) and false-color phenomenon (false color) are difficult to resolve. On the other hand, some neural-network-based methods build their data sets from single frames, so the recovered details are not realistic enough and tend to be over-fitted.
Fig. 2 is a schematic diagram of the edge zipper effect and the false-color phenomenon provided in an embodiment of the present application. As shown in fig. 2, the leftmost image is the original demosaiced picture, and the middle and rightmost images are enlargements of the fence region of that picture. The middle image shows color stripes that are not present in the original picture, i.e., the false-color phenomenon; the rightmost image shows jagged, heavily color-distorted edges on the fence, i.e., the edge zipper effect.
Fig. 3 is a schematic diagram of multi-frame sub-pixel displacement provided in an embodiment of the present application. As shown in fig. 3, the left side of the first row shows a frame captured normally by the sensor, and the right side shows each pixel decomposed into its red, green, and blue components by color channel. The left side of the second row shows a frame captured after moving the sensor one pixel to the right, and the right side shows the effect of decomposing each pixel by color channel and superimposing it on the components from the first row. Similarly, the third and fourth rows show the sensor moved one pixel down, and one pixel down plus one pixel right, respectively, each frame again decomposed by color channel and superimposed on the previous rows. By capturing and superimposing 4 such frames, every pixel position acquires a sample of each single-channel color component.
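The superposition of fig. 3 can be sketched in a few lines (a toy assuming exact one-pixel shifts in the frame order described above; the shift signs are illustrative, not prescribed by this application):

```python
import numpy as np

def stack_shifted_bayer(frames):
    """Toy illustration of the four-frame superposition in fig. 3.

    Assumes frame 0 is the reference and frames 1-3 were captured with the
    sensor shifted exactly one pixel right, down, and down-right (shift
    signs are illustrative). After undoing each shift, every pixel site
    has been sampled through each of the four RGGB filter positions."""
    ref, right, down, diag = frames
    stacked = np.stack([
        ref,
        np.roll(right, 1, axis=1),      # undo the rightward shift
        np.roll(down, 1, axis=0),       # undo the downward shift
        np.roll(diag, 1, axis=(0, 1)),  # undo the diagonal shift
    ], axis=-1)
    return stacked  # (H, W, 4): one sample per CFA channel at every site
```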
In the field of super-resolution reconstruction, multi-frame input has shown significant advantages because the sub-pixel displacement between frames carries higher-resolution information. For demosaicing, the sub-pixel displacement between frames can supply more genuine color information at the missing positions; at the same time, the higher-resolution image information it brings can further improve the realism of the restored details, make edge judgment more accurate, and help resolve the edge zipper effect and the false-color phenomenon.
Fig. 4 is a schematic diagram comparing a single-frame CFA image with multi-frame CFA images provided in an embodiment of the present application. As shown in fig. 4, a black dot indicates a missing color component at a pixel position, for example the green component pointed to by the dashed arrow, which must be obtained by interpolating the green components of the sampling-point pixels pointed to by the surrounding solid arrows. For the single-frame CFA image on the left of fig. 4, only the green components at fixed surrounding pixel positions can be used for interpolation. For the multi-frame CFA images on the right of fig. 4, more sub-pixel positions are available, and using the green components of the sampling-point pixels pointed to by the solid arrows to interpolate the green component of the pixel to be interpolated gives a more accurate estimate of edges and details.
Multi-frame averaging also performs effective temporal denoising, greatly reducing image noise while preserving detail. Meanwhile, the interpolation kernel is estimated by steering kernel regression, which effectively uses the multi-frame information to distinguish different image structures such as edges, corners, and smooth regions, reducing the zipper effect and the false-color phenomenon.
Fig. 5 is an optional flowchart of the color interpolation method according to the embodiment of the present application, and as shown in fig. 5, the method includes:
step S510, at least two frames of original images to be processed are obtained.
Here, the at least two original images are multi-frame CFA images having sub-pixel displacement with respect to the same scene.
It should be noted that, for the same target object, the methods for obtaining at least two frames of images with known sub-pixel displacement are mostly based on hardware techniques, for example: 1) using Charge Coupled Device (CCD) staggered imaging, in which multiple linear-array or area-array CCDs are arranged with offsets on the same focal plane and imaged simultaneously to obtain sub-pixel-shifted images; 2) using a light-splitting method, in which a beam-splitting prism is added between a high-resolution optical system and the receiver to obtain at least two frames of sub-pixel-shifted images of the same target object; 3) performing an overall two-dimensional displacement of a single CCD camera with a high-precision two-dimensional translation stage and imaging a target at a certain object distance to obtain at least two frames of sub-pixel-shifted images. The embodiment of the present application does not limit the manner of obtaining the at least two frames of original images to be processed.
Step S520, converting each frame of original image of the at least two frames of original images into a corresponding gray image, so as to obtain at least two frames of gray images.
Here, each frame of original image is a single-frame CFA image containing four channels: one red, two green, and one blue (RGGB).
Here, the corresponding gray image is the single-channel gray image corresponding to a single-frame CFA image, in which each pixel is represented by an 8-bit gray value (0 to 255). For example, a 500 × 500 single-channel gray image consists of 500 × 500 = 250,000 pixels, each with its own gray level.
It should be noted that the gradient calculation does not distinguish between the channels of a CFA image; the four channels need to be combined into a single channel before the gradient is computed, so that the interpolation kernel at the position to be interpolated can be estimated from the image gradient. The CFA image is therefore converted into a single-channel gray image, which is used for aligning the at least two frames of original images and for estimating the interpolation kernel.
In some embodiments, the at least two frames of grayscale images may be obtained by: determining the gray value of each pixel position according to the corresponding pixel value of the color channel of each pixel position on each frame of CFA image; and converting each frame of CFA image into a corresponding gray image according to the gray value of each pixel position to obtain the at least two frames of gray images.
Step S530, determining a reference frame image from the at least two frames of grayscale images.
Here, the at least two frames of gray images constitute a multi-frame set of gray images; the frame with the highest resolution, the highest brightness, or the largest sharpness value may be selected from them as the reference frame image, and the multi-frame gray images can then be aligned according to the reference frame image.
In some possible implementation manners, a sharpness estimation of the image can be performed by using a sharpness reference model based on local brightness characteristics, and one frame of image with the largest sharpness value is selected as a reference frame image from at least two frames of gray images; wherein the sharpness value characterizes the sharpness of the image and the sharpness of the edges of the image.
Step S540, performing weighted interpolation on the at least two aligned frames of original images according to the gradient features of the reference frame image to obtain a target image.
Here, an interpolation kernel of the pixel to be interpolated is estimated from the gradient features of that pixel in the reference frame image, and the interpolation kernel is convolved with the at least two aligned frames of original images to obtain the final target image, which is a demosaiced RGB image.
Here, the gradient features are the horizontal and vertical gradients of the pixel to be interpolated in the reference frame image. An image can be seen as a two-dimensional discrete function, and the image gradient is in fact the derivative of this function; image edges are typically found by performing gradient operations on the image.
In one possible implementation, the horizontal local gradient Ix and the vertical local gradient Iy of the pixel to be interpolated are calculated for the selected reference frame image by a Sobel operator and used as the estimate of the image gradient.
In another possible implementation, the gradient calculation may also use the Roberts, Prewitt, or Laplacian operators, or other operators capable of indicating the degree of change in the image.
In the embodiments of the present application, multiple frames of original images are converted into corresponding grayscale images, a reference frame image is determined from the multi-frame grayscale images, and weighted interpolation is performed on the multi-frame original images using the gradient features of the reference frame image. Edges are thus judged more accurately, the edge zipper effect and the false-color phenomenon are avoided, and the extra information provided by the multiple frames restores more realistic details when the original image is recovered.
Fig. 6 is an optional flowchart of the color interpolation method according to an embodiment of the present application, and as shown in fig. 6, the step S530 "determining the reference frame image from the at least two frame gray scale images" may be implemented by:
step S610, determining a brightness characteristic of each frame of gray scale image in the at least two frames of gray scale images.
Here, the brightness feature is a local brightness value of each frame of the gray scale image, and may be a brightness value of each pixel point in each frame of the gray scale image.
Step S620, determining a sharpness value of each frame of the gray image according to the brightness feature.
Here, the sharpness value represents the sharpness of an image and the sharpness of an image edge.
Here, the sharpness estimation of the image may be performed by using a sharpness reference model based on local brightness characteristics, and the sharpness value of each frame of the gray image may be obtained.
In practice, the sharpness value of each frame of the gray image may be determined by:
Step S6201, dividing each frame of gray image into a plurality of area images.
Here, each frame of gray image is divided into a plurality of area images, for example by dividing the whole gray image into k1 × k2 windows, each window corresponding to one area image of the gray image. Any conventional image-segmentation technique may be used; the embodiment of the present application is not limited in this respect.
In step S6202, a maximum luminance value and a minimum luminance value of each of the plurality of area images are determined.
Here, the maximum luminance value and the minimum luminance value are obtained for each region image, and a plurality of sets of the maximum luminance value and the minimum luminance value are obtained for a plurality of region images.
Step S6203, determining a sharpness value of each frame of gray image according to the maximum brightness value and the minimum brightness value.
Here, the maximum brightness value and the minimum brightness value of all the area images of each frame of gray level image are input into the sharpness reference model, and the sharpness value of each frame of gray level image is calculated, wherein the sharpness value represents the sharpness of the image and the sharpness of the edge of the image.
Step S630, using the frame image with the maximum sharpness value in each frame of gray scale image as the reference frame image.
Here, a frame image with the clearest image and the sharpest image edge is selected from the multi-frame gray scale image as a reference frame image, so that the interpolation weight estimation is performed for the reference frame image next.
Fig. 7 is an optional flowchart of the color interpolation method provided in the embodiment of the present application, and as shown in fig. 7, the method includes the following steps:
step S710, at least two frames of original images to be processed are obtained.
Step S720, converting each frame of original image of the at least two frames of original images into a corresponding gray image, so as to obtain at least two frames of gray images.
Step S730, determining a reference frame image from the at least two frames of gray images.
Here, steps S710 to S730 are similar to steps S510 to S530 in fig. 5 and are not repeated here to avoid repetition.
Step S740, determining the sub-pixel displacement from the k frame image to the reference frame image.
Here, the k-th frame image is a gray image of each frame other than the reference frame image among the at least two frame gray images.
Here, the sub-pixel displacement includes a displacement of each point in the k frame image with respect to the reference frame image; the sub-pixels are units which are smaller than the pixels and are obtained by subdividing the basic unit of the pixels, so that the image resolution is improved.
As a possible implementation, the sub-pixel displacement of the k frame image to the reference frame image may be determined by:
step S7401, respectively performing feature point detection on the at least two frames of grayscale images to obtain a first feature point set of the reference frame image and a second feature point set of the k-th frame image.
Here, feature point detection is performed on each frame image of the at least two frames of grayscale images, and a feature point set corresponding to each frame image is obtained, such as a first feature point set of a reference frame image and a second feature point set of a k-th frame image.
In image processing, feature points are points where the gray value of the image changes drastically or points of large curvature on image edges (i.e., intersections of two edges); they are stable feature points that remain unchanged under factors such as illumination changes, affine transformation, and noise.
As a possible implementation, a Scale Invariant Feature Transform (SIFT) algorithm, a Speeded-Up Robust Features (SURF) algorithm, corner points, or other features may be used for feature point detection and description.
Step S7402, performing feature point matching on the first feature point set and the second feature point set to obtain N groups of feature point pairs closest to each other in the reference frame image and the k-th frame image.
Here, N is an integer greater than or equal to 4.
Here, the feature point matching process finds, by Euclidean distance between the two sets of feature points found above, 4 or more pairs of mutually nearest feature points on the two frame images, i.e., the reference frame image and the k-th frame image.
Step S7403, determining a homography matrix from the k-th frame image to the reference frame image according to the coordinates of the N sets of feature point pairs.
Here, from the coordinates of the N groups of feature point pairs, a coefficient matrix with 2N rows can be built for the feature-point transformation equations from the k-th frame image to the reference frame image, and the final homography matrix is obtained by solving for its coefficients.
Step S7404, determining the sub-pixel displacement according to the homography matrix.
Here, the sub-pixel displacement of each point in the k frame image relative to the reference frame image is obtained according to the homography matrix from the k frame image to the reference frame image.
As another possible implementation manner, an optical flow vector of each pixel from the current frame image to the reference frame image may be solved according to the brightness information around each point of the adjacent frame, and then a motion vector of the pixel is calculated according to the optical flow vector, so as to finally determine the sub-pixel displacement from the k frame image to the reference frame image.
Step S750, aligning the at least two frames of original images according to the sub-pixel displacement.
Here, for other frame images than the reference frame image in the at least two frames of original images, aligning to the reference frame image according to the sub-pixel displacement.
Step S760, performing weighted interpolation on the at least two aligned frames of original images according to the gradient features of the reference frame image to obtain a target image.
Here, step S760 is similar to step S540 in fig. 5, and is not repeated herein to avoid repetition.
Fig. 8 is an optional flowchart of the color interpolation method according to the embodiment of the present application, and as shown in fig. 8, the step S540 "performs weighted interpolation on at least two aligned original images according to the gradient feature of the reference frame image to obtain the target image" may be implemented by:
step S810, determining a first gradient feature in the horizontal direction and a second gradient feature in the vertical direction of the pixel to be interpolated in the reference frame image.
Here, for the selected reference frame image, the local gradient in the x direction and the local gradient in the y direction of the pixel to be interpolated can be respectively calculated by using a Sobel operator.
As a possible implementation, the gradient calculation may also use the Roberts, Prewitt, or Laplacian operators, or other operators capable of indicating the degree of change of the image.
Step S820, determining an interpolation kernel of the pixel to be interpolated according to the first gradient feature and the second gradient feature.
Here, the interpolation kernel characterizes the weight of each sample point position around the pixel to be interpolated.
As a possible implementation manner, a gradient covariance matrix may be obtained according to the first gradient feature and the second gradient feature; decomposing the gradient covariance matrix to obtain the edge characteristics of the position to be interpolated; and determining an interpolation kernel of the position to be interpolated according to the edge characteristics.
Step S830, determining the color component of the pixel to be interpolated according to the interpolation kernel and the color component of each sampling point position.
Here, the color component of the pixel to be interpolated is determined from the color components and the weights of the sampling point positions around it.
Step S840, interpolating the at least two aligned frames of original images according to the color components of the pixels to be interpolated to obtain the target image.
Here, interpolation is performed on the R, G, and B channels separately, using the estimated interpolation kernel weights and the multiple aligned CFA frames, to obtain the final output demosaiced RGB image.
As a possible implementation, the interpolation weights may be calculated directly in terms of distance, i.e. gaussian interpolation kernel.
As a possible implementation manner, the kernel regression may adopt other kernel regression estimation methods such as data adaptive kernel regression, equivalent kernel regression, or bilateral kernel regression to estimate the interpolation weight.
As a possible implementation, the interpolation can also directly adopt bicubic interpolation.
Next, an exemplary application of the embodiment of the present application in a practical application scenario will be described.
The embodiment of the application utilizes multi-frame sub-pixel interpolation to perform CFA image demosaicing, expands the sub-pixel interpolation method from super-resolution to demosaicing, and utilizes multi-frame averaging to remove zipper effect and false color artifacts, effectively resist noise and retain image details to the maximum extent.
Fig. 9A is a data flow diagram of the multi-frame CFA image demosaicing method provided in an embodiment of the present application. As shown in fig. 9A, the multi-frame CFA images 91 are converted to grayscale to obtain multi-frame grayscale images 92; after alignment processing, a reference frame image 93 is selected and the multi-frame sub-pixel displacements 94 are calculated. On the one hand, gradient calculation is performed on the reference frame image 93 to obtain its gradient 95, from which the interpolation kernel 96 is determined by kernel regression estimation. On the other hand, the multi-frame CFA images are aligned using the multi-frame sub-pixel displacements 94. Finally, the aligned multi-frame CFA images are interpolated with the interpolation kernel 96 to obtain a single-frame RGB image 97.
Fig. 9B is a logic flow diagram of a multi-frame CFA image demosaicing method according to an embodiment of the present application, and as shown in fig. 9B, the method mainly includes the following steps:
step S910, inputting multiple frames of CFA images.
Step S920, converting the multi-frame CFA images into multi-frame gray images.
Here, a CFA image containing four channels of RGGB per frame is converted into a single-channel grayscale image for estimation of alignment and interpolation kernels of a multi-frame CFA image.
Here, the four-channel CFA image is converted into a single-channel gray image, one gray value per pixel position, by the following formula (1).
Y=0.2*R+0.7*(Gr+Gb)/2+0.1*B (1);
Where Y denotes a luminance value of the pixel position, R denotes a pixel value of a red channel, B denotes a pixel value of a blue channel, and Gr and Gb denote pixel values of two green channels, respectively.
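A sketch of this conversion (assuming the red sample sits at the top-left of each 2x2 RGGB cell, so that one grey value is produced per cell and the grey image has half the CFA resolution in each dimension):

```python
import numpy as np

def cfa_to_gray(cfa):
    """Convert an RGGB CFA image to a single-channel grey image using
    formula (1): Y = 0.2*R + 0.7*(Gr + Gb)/2 + 0.1*B, computed once per
    2x2 RGGB cell (top-left red sample assumed)."""
    r  = cfa[0::2, 0::2].astype(float)
    gr = cfa[0::2, 1::2].astype(float)
    gb = cfa[1::2, 0::2].astype(float)
    b  = cfa[1::2, 1::2].astype(float)
    return 0.2 * r + 0.7 * (gr + gb) / 2 + 0.1 * b
```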
It should be noted that the gradient calculation does not distinguish between channels; the four channels need to be combined into one channel before the gradient is calculated and the interpolation kernel estimated.
Step S930, performing image alignment on the multiple frames of CFA images.
Here, the process of image alignment mainly includes selecting a reference frame image and calculating a homography matrix, so as to obtain sub-pixel displacement of a plurality of frames of CFA images relative to the reference frame image.
In the implementation process, the method can be realized by the following steps:
step S9301, a reference frame image is selected from the multi-frame grayscale images.
Here, sharpness estimation is first performed on each frame of the grayscale image, and sharpness estimation of the image may be performed using a sharpness reference model based on local luminance characteristics.
In practice, the whole gray image is divided into k1 × k2 windows; k1 × k2 generally takes the empirical value 16 × 9. The sharpness value (sharpness) of the image is then calculated by the EME function shown in the following formula (2):

EME = (1 / (k1·k2)) · Σ_{k=1..k1} Σ_{l=1..k2} 20·log( I_max,k,l / I_min,k,l )   (2);

wherein I is the local brightness of the image, k and l index the windows, and I_max,k,l and I_min,k,l respectively denote the maximum and minimum brightness values in window (k, l).
Through formula (2), the frame with the maximum sharpness value among the multi-frame gray images is selected as the reference frame image Y_0 for image alignment, so that the other frame images in the multi-frame CFA images can be aligned to the reference frame image.
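A sketch of this selection step (the logarithm base, the small eps guard against zero-valued windows, and the exact window partition are assumptions of the sketch):

```python
import numpy as np

def eme_sharpness(gray, k1=16, k2=9, eps=1e-6):
    """Sharpness via the EME measure of formula (2): split the image into
    k1*k2 windows and average 20*log10(Imax/Imin) over the windows."""
    h, w = gray.shape
    ys = np.linspace(0, h, k2 + 1, dtype=int)
    xs = np.linspace(0, w, k1 + 1, dtype=int)
    total = 0.0
    for i in range(k2):
        for j in range(k1):
            win = gray[ys[i]:ys[i + 1], xs[j]:xs[j + 1]].astype(float)
            total += 20 * np.log10((win.max() + eps) / (win.min() + eps))
    return total / (k1 * k2)

def select_reference(grays):
    """Pick the index of the frame with the largest sharpness value (Y0)."""
    return int(np.argmax([eme_sharpness(g) for g in grays]))
```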
Step S9302, performing image alignment on the multiple frames of CFA images according to the reference frame image.
Here, homography matrix estimation based on SIFT feature point detection is employed for image alignment.
First, the homography matrix H_K from the k-th frame image Y_k to Y_0 is estimated. As shown in the following formula (3), H_K is the 3x3 matrix composed of the variables h_1 to h_9:

[x', y', w']^T = H_K · [x, y, w]^T,   H_K = [ h_1 h_2 h_3 ; h_4 h_5 h_6 ; h_7 h_8 h_9 ]   (3);

wherein w = w' = 1, and (x', y') are the coordinates of the point (x, y) of Y_k after alignment to Y_0. From H_K, the displacement of each point in Y_k relative to Y_0 can therefore be calculated, giving a two-channel offset vector map of the same size as Y_0 and Y_k.
Fig. 10 is a schematic diagram of the estimation process of the homography matrix provided in an embodiment of the present application. As shown in fig. 10, the homography matrix H_K is obtained through feature point detection 101, feature point description 102, feature point matching 103, and calculation of the homography matrix coefficients 104. The specific process is as follows:
SIFT is a classic key-point detection algorithm. It essentially searches for key points (feature points) in different scale spaces, calculates their size, direction, and scale information, and uses this information to describe them. The key points found by SIFT are prominent, stable feature points unaffected by factors such as illumination, affine transformation, and noise. After a feature point is found, the gradient histograms of the surrounding points form its feature vector f, which is the description of that feature point. Solving for the feature points of the two frame images Y_0 and Y_k yields two sets of feature vectors, {f_P^0} and {f_Q^k}.
The feature point matching process finds, among the two sets of feature points obtained above, 4 or more pairs of feature points on the two frame images Y_0 and Y_k that are mutually closest under the Euclidean distance shown in the following formula (4):

d_PQ = || f_P^0 − f_Q^k ||_2   (4);

wherein f_P^0 is any feature point of Y_0, f_Q^k is any feature point of Y_k, and d_PQ is the Euclidean distance between them.
H_K can then be solved by Direct Linear Transformation (DLT), thereby obtaining the displacement of each point in Y_k relative to Y_0.
Suppose feature point matching yields feature points in Y_k with coordinates (x_1, y_1), (x_2, y_2), ..., (x_t, y_t) corresponding to feature points in Y_0 with coordinates (x'_1, y'_1), (x'_2, y'_2), ..., (x'_t, y'_t). Applying H_K to these corresponding pairs yields the linear system of formula (5):

A · h = 0,   h = [h_1, h_2, ..., h_9]^T   (5);

wherein the coefficient matrix A has twice as many rows as there are corresponding point pairs: the coefficients of the corresponding point-pair equations are stacked into the matrix A. The least-squares solution for H_K can be found with the Singular Value Decomposition (SVD) algorithm, from which the displacement (v_xk, v_yk) of each frame Y_k relative to Y_0 is calculated, giving the multi-frame sub-pixel displacement.
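An OpenCV-based sketch of the whole alignment step (cv2.findHomography stands in for the explicit DLT/SVD solve above; the RANSAC flag is an added robustness choice, not something stated in this application):

```python
import cv2
import numpy as np

def estimate_homography(y0, yk):
    """Estimate H_K mapping frame Yk onto the reference frame Y0.

    Sketch: SIFT detection/description, Euclidean-distance matching as in
    formula (4), then a homography fit; cv2.findHomography replaces the
    explicit DLT/SVD solve."""
    sift = cv2.SIFT_create()
    kp0, des0 = sift.detectAndCompute(y0, None)   # y0, yk: 8-bit grey images
    kpk, desk = sift.detectAndCompute(yk, None)

    matcher = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
    matches = matcher.match(desk, des0)           # mutual nearest neighbours

    src = np.float32([kpk[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp0[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    hk, _ = cv2.findHomography(src, dst, cv2.RANSAC)  # needs >= 4 pairs
    return hk
```

The returned H_K can then be applied with cv2.warpPerspective, or expanded into the per-point offset map (v_xk, v_yk) described above.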
As a possible implementation, SURF, corner point or other features may also be used for feature point detection and description.
As a possible implementation mode, the optical flow vector of each pixel from the current frame to the reference frame is solved according to the brightness information around each point of the adjacent frame, and then the motion vector of the pixel, namely the sub-pixel displacement of a plurality of frames, is calculated according to the optical flow vector.
Step S940, a gradient calculation is performed on the reference frame image.
Here, the local gradients Ix and Iy in the x and y directions are calculated for the selected reference frame image using 3x3 Sobel operators as estimates of the image gradient; fig. 11 shows the Sobel operators for calculating the local gradients Ix and Iy provided in an embodiment of the present application.
As a possible implementation, the gradient calculation may also use the Roberts, Prewitt, or Laplacian operators, or other operators capable of indicating the degree of image change.
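A sketch of the gradient step using 3x3 Sobel kernels as in fig. 11 (the axis-orientation convention is an assumption of the sketch):

```python
import numpy as np
from scipy.ndimage import convolve

SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)

def local_gradients(gray):
    """Estimate the local gradients Ix, Iy of the reference frame with the
    3x3 Sobel operator; the transposed kernel gives the vertical gradient."""
    ix = convolve(gray.astype(float), SOBEL_X)
    iy = convolve(gray.astype(float), SOBEL_X.T)
    return ix, iy
```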
Step S950, perform kernel regression estimation according to the local gradient of the reference frame image.
Here, the interpolation kernel is estimated from the local gradient of the reference frame image.
The weight of each sampling position is estimated by a steering kernel regression (SKR) method; to limit the amount of computation, the nearest 3x3 sampling points are selected for interpolating each point to be interpolated.
Assume the gradient estimates in the x and y directions of the position to be interpolated from step S940 are Ix and Iy. Since the local edge structure is closely related to the gradient covariance, the interpolation kernel w_{n,i} can be estimated using the local gradient covariance matrix shown in formula (6):

C = [ Σ Ix·Ix   Σ Ix·Iy ; Σ Ix·Iy   Σ Iy·Iy ]   (6);

where the sums run over a local window around the position to be interpolated.
Performing SVD decomposition on the covariance matrix C, as shown in formula (7), yields the eigenvectors e_1, e_2 of the principal and secondary gradient directions and the corresponding eigenvalues k_1, k_2:

C = [e_1 e_2] · diag(k_1, k_2) · [e_1 e_2]^T   (7);

k_1 and k_2 characterize the edge feature at the position of the point to be interpolated. A large ratio k_1/k_2 indicates a strong gradient in one direction, i.e., proximity to an edge. If k_1 and k_2 are both very small while k_1/k_2 is close to 1, the gradient is similar in every direction, indicating a flat region. If k_1 and k_2 are both large while k_1/k_2 is close to 1, the region is textured, with high-frequency gradient variation nearby. Fig. 12 shows these different image structures, such as edges, corners, and smooth regions.
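The k_1/k_2 reasoning above condenses into a small classifier (the thresholds are illustrative assumptions, not values from this application):

```python
def classify_structure(k1, k2, flat_thresh=1e-2, ratio_thresh=4.0):
    """Rough structure label from the eigenvalues of formula (7): a
    dominant k1/k2 ratio suggests an edge, two small eigenvalues a flat
    region, and two large, similar eigenvalues a textured region."""
    ratio = k1 / max(k2, 1e-12)
    if ratio > ratio_thresh:
        return "edge"
    if max(k1, k2) < flat_thresh:
        return "flat"
    return "texture"
```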
From the result Ω of the gradient covariance matrix decomposition, the interpolation kernel, i.e., the interpolation weight w_{n,i}, can be calculated by formula (8):

w_{n,i} = exp( − d_i^T · Ω · d_i / (2·h²) )   (8);

wherein d_i = [x_i − x_0, y_i − y_0]^T is the distance vector from the sampling point to the point to be interpolated and h is a smoothing parameter. Each w_{n,i} is an anisotropic Gaussian kernel whose direction and size are controlled by the local gradient, which serves the goal of accurately estimating the missing color component on an edge.
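A sketch combining formulas (6) and (8) for a single pixel to be interpolated (the smoothing parameter h and the plain sum normalisation are assumptions of the sketch):

```python
import numpy as np

def steering_weights(ix_win, iy_win, offsets, h=1.0):
    """Anisotropic Gaussian weights per formula (8) for one pixel.

    ix_win, iy_win: local gradient samples around the pixel (formula (6));
    offsets: the d_i = [x_i - x_0, y_i - y_0] vectors of the sampling
    points; h: assumed smoothing parameter."""
    g = np.stack([ix_win.ravel(), iy_win.ravel()], axis=1)
    omega = g.T @ g                       # 2x2 gradient covariance matrix
    w = np.array([np.exp(-d @ omega @ d / (2 * h ** 2)) for d in offsets])
    return w / w.sum()                    # normalised interpolation kernel
```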
As a possible implementation, the interpolation weights may be calculated directly in terms of distance, i.e. gaussian interpolation kernel.
As a possible implementation manner, the kernel regression may adopt other kernel regression estimation methods such as data adaptive kernel regression, equivalent kernel regression, bilateral kernel regression, and the like to estimate the interpolation weight.
In step S960, a single frame RGB image is interpolated and output.
Here, the final output demosaiced RGB image is obtained by interpolating on the R, G, and B channels separately, using the estimated interpolation kernel and the multiple input CFA frames.
After w_{n,i} is found in step S950, the missing color components at each pixel position can be interpolated channel by channel according to formula (9) to obtain the required demosaiced RGB image:

C(x, y) = ( Σ_n Σ_i w_{n,i} · C_{n,i} ) / ( Σ_n Σ_i w_{n,i} )   (9);

where C(x, y) represents the value at the position to be interpolated, such as the green component to be interpolated indicated by the dashed arrow in fig. 4; C_{n,i} represents the value at the i-th sampling point position of the n-th frame, such as the sampled green components indicated by the solid arrows in fig. 4; and w_{n,i} represents the weight of the i-th sampling point position of the n-th frame, i.e., the interpolation kernel.
As a possible implementation, the interpolation can also directly adopt bicubic interpolation.
The embodiments of the present application adopt a multi-frame demosaicing method based on sub-pixel interpolation, which can effectively remove the zipper artifacts and the false-color artifacts along abrupt color changes produced by traditional single-frame interpolation methods, and retains image details to the greatest extent while resisting noise. Meanwhile, compared with single-frame network-based demosaicing methods, multi-frame input can reconstruct more realistic details and avoid over-fitting of details.
The embodiment of the present application provides a color interpolation apparatus comprising modules and the units they include, which can be implemented by a processor in a terminal or, of course, by specific logic circuits. In implementation, the processor may be a Central Processing Unit (CPU), a Microprocessor Unit (MPU), a Digital Signal Processor (DSP), a Field Programmable Gate Array (FPGA), or the like.
Fig. 13 is a schematic structural diagram of a color interpolation apparatus provided in an embodiment of the present application, and as shown in fig. 13, the apparatus 1300 includes an obtaining module 1301, a converting module 1302, a first determining module 1303, and an interpolating module 1304, where:
the obtaining module 1301 is configured to obtain at least two frames of original images to be processed;
the conversion module 1302 is configured to convert each frame of original image of the at least two frames of original images into a corresponding grayscale image, so as to obtain at least two frames of grayscale images;
the first determining module 1303 is configured to determine a reference frame image from the at least two frames of grayscale images;
the interpolation module 1304 is configured to perform weighted interpolation on at least two aligned original images according to the gradient feature of the reference frame image to obtain a target image.
In some embodiments, the first determining module 1303 includes a first determining submodule, a second determining submodule, and a third determining submodule, wherein: the first determining submodule is used for determining the brightness characteristic of each frame of gray level image in the at least two frames of gray level images; the second determining submodule is used for determining the sharpness value of each frame of gray level image according to the brightness characteristic; and the third determining submodule is used for taking the frame image with the maximum sharpness value in each frame of gray level image as the reference frame image.
In some embodiments, the second determination submodule comprises a segmentation unit, a first determination unit and a second determination unit, wherein: the dividing unit is used for dividing each frame of gray level image into a plurality of area images; the first determining unit is configured to determine a maximum luminance value and a minimum luminance value of each of the plurality of region images; and the second determining unit is used for determining the sharpness value of each frame of gray level image according to the maximum brightness value and the minimum brightness value.
In some embodiments, the apparatus 1300 further comprises a second determining module and an aligning module, wherein: the second determining module is used for determining the sub-pixel displacement from the k frame image to the reference frame image; wherein the kth frame image is a gray image of each frame except the reference frame image in the at least two frames of gray images; and the alignment module is used for carrying out image alignment on the at least two frames of original images according to the sub-pixel displacement.
In some embodiments, the second determining module includes a feature point detecting sub-module, a feature point matching sub-module, a third determining sub-module, and a fourth determining sub-module, where the feature point detecting sub-module is configured to perform feature point detection on the at least two frames of grayscale images respectively to obtain a first feature point set of the reference frame image and a second feature point set of the k-th frame image; the feature point matching submodule is configured to perform feature point matching on the first feature point set and the second feature point set to obtain N groups of feature point pairs closest to each other in the reference frame image and the k frame image; wherein N is an integer greater than or equal to 4; the third determining submodule is configured to determine a homography matrix from the kth frame image to the reference frame image according to the coordinates of the N groups of feature point pairs; and the fourth determining submodule is used for determining the sub-pixel displacement according to the homography matrix.
In some embodiments, the interpolation module 1304 includes a fifth determination submodule, a sixth determination submodule, a seventh determination submodule, and an interpolation submodule, wherein: the fifth determining submodule is used for determining a first gradient characteristic of a pixel to be interpolated in the reference frame image in the horizontal direction and a second gradient characteristic in the vertical direction; the sixth determining submodule is configured to determine an interpolation kernel of the pixel to be interpolated according to the first gradient feature and the second gradient feature; the interpolation kernel represents the weight of each sampling point position around the pixel to be interpolated; the seventh determining submodule is configured to determine a color component of the pixel to be interpolated according to the interpolation kernel and the color component of each sampling point position; and the interpolation submodule is used for interpolating the at least two frames of aligned original images according to the color components of the pixels to be interpolated to obtain the target image.
Here, it should be noted that the above description of the apparatus embodiments is similar to the description of the method embodiments, and the apparatus embodiments have beneficial effects similar to those of the method embodiments. For technical details not disclosed in the apparatus embodiments of the present application, refer to the description of the method embodiments of the present application.
It should be noted that, in the embodiments of the present application, if the color interpolation method is implemented in the form of a software functional module and sold or used as a standalone product, it may also be stored in a computer-readable storage medium. Based on such understanding, the technical solutions of the embodiments of the present application may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for enabling a device including the storage medium to execute all or part of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a magnetic disk, or an optical disk.
Correspondingly, an embodiment of the present application provides a color interpolation device. Fig. 14 is a schematic diagram of the hardware entity of the color interpolation device provided in the embodiment of the present application. As shown in Fig. 14, the hardware entity of the device 1400 includes a processor 1401, a communication interface 1402, and a memory 1403, where:
The processor 1401 generally controls the overall operation of the device 1400.
The communication interface 1402 may enable the device 1400 to communicate with other terminals or servers via a network.
The memory 1403 is configured to store instructions and applications executable by the processor 1401, and may also cache data (e.g., image data) to be processed or already processed by the processor 1401 and the modules in the device 1400; it may be implemented by flash memory (FLASH) or random access memory (RAM).
Correspondingly, an embodiment of the present application provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the computer program implements the steps of the color interpolation method provided in the above embodiments.
Here, it should be noted that the above description of the storage medium and device embodiments is similar to the description of the method embodiments, and these embodiments have beneficial effects similar to those of the method embodiments. For technical details not disclosed in the storage medium and device embodiments of the present application, refer to the description of the method embodiments of the present application.
It should be appreciated that reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present application. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. It should also be understood that, in the various embodiments of the present application, the sequence numbers of the above-mentioned processes do not imply an order of execution; the execution order of each process should be determined by its function and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present application. The serial numbers of the embodiments of the present application are merely for description and do not represent the relative merits of the embodiments.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The apparatus embodiments described above are merely illustrative. For example, the division of units is only a logical functional division, and there may be other divisions in actual implementation; for instance, multiple units or components may be combined or integrated into another system, or some features may be omitted or not implemented. In addition, the coupling, direct coupling, or communication connection between the components shown or discussed may be through some interfaces, and the indirect coupling or communication connection between devices or units may be electrical, mechanical, or in other forms.
The units described as separate parts may or may not be physically separate, and the parts displayed as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solutions of the embodiments of the present application.
In addition, all functional units in the embodiments of the present application may be integrated into one processing unit, each unit may serve as a separate unit, or two or more units may be integrated into one unit; the integrated unit may be implemented in the form of hardware, or in the form of hardware plus a software functional unit.
Alternatively, the integrated units described above may be stored in a computer-readable storage medium if they are implemented in the form of software functional modules and sold or used as independent products. Based on such understanding, the technical solutions of the embodiments of the present application may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a device to perform all or part of the methods described in the embodiments of the present application. The aforementioned storage medium includes a removable storage device, a ROM, a magnetic disk, an optical disk, or other various media that can store program code.
The methods disclosed in the several method embodiments provided in the present application may be combined arbitrarily without conflict to obtain new method embodiments.
The features disclosed in the several method or apparatus embodiments provided in the present application may be combined arbitrarily, without conflict, to arrive at new method embodiments or apparatus embodiments.
The above description covers only the embodiments of the present application, but the protection scope of the present application is not limited thereto. Any changes or substitutions that a person skilled in the art could readily conceive of within the technical scope disclosed in the present application shall be covered by the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (8)

1. A method of color interpolation, the method comprising:
acquiring at least two frames of original images to be processed;
converting each of the at least two frames of original images into a corresponding grayscale image to obtain at least two frames of grayscale images;
determining a reference frame image from the at least two frames of grayscale images;
performing, according to the gradient features of the reference frame image, weighted interpolation on the at least two frames of aligned original images to obtain a target image;
the method further comprising: determining a sub-pixel displacement from the k-th frame image to the reference frame image, the k-th frame image being any grayscale image, among the at least two frames of grayscale images, other than the reference frame image; and performing image alignment on the at least two frames of original images according to the sub-pixel displacement;
wherein the determining the sub-pixel displacement from the k-th frame image to the reference frame image comprises:
performing feature point detection on the at least two frames of grayscale images respectively, to obtain a first feature point set of the reference frame image and a second feature point set of the k-th frame image;
performing feature point matching on the first feature point set and the second feature point set, to obtain N groups of closest-matching feature point pairs between the reference frame image and the k-th frame image, wherein N is an integer greater than or equal to 4;
determining a homography matrix from the k-th frame image to the reference frame image according to the coordinates of the N groups of feature point pairs; and
determining the sub-pixel displacement according to the homography matrix.
2. The method of claim 1, wherein the determining a reference frame image from the at least two frames of grayscale images comprises:
determining the luminance feature of each of the at least two frames of grayscale images;
determining the sharpness value of each grayscale image according to the luminance feature; and
taking the frame with the maximum sharpness value among the grayscale images as the reference frame image.
3. The method of claim 2, wherein the determining the sharpness value of each grayscale image according to the luminance feature comprises:
dividing each grayscale image into a plurality of region images;
determining the maximum luminance value and the minimum luminance value of each of the plurality of region images; and
determining the sharpness value of each grayscale image according to the maximum luminance value and the minimum luminance value.
4. The method according to any one of claims 1 to 3, wherein the performing weighted interpolation on the at least two frames of aligned original images according to the gradient features of the reference frame image to obtain a target image comprises:
determining a first gradient feature, in the horizontal direction, and a second gradient feature, in the vertical direction, of a pixel to be interpolated in the reference frame image;
determining an interpolation kernel of the pixel to be interpolated according to the first gradient feature and the second gradient feature, the interpolation kernel representing the weight of each sampling point position around the pixel to be interpolated;
determining the color component of the pixel to be interpolated according to the interpolation kernel and the color component at each sampling point position; and
interpolating the at least two frames of aligned original images according to the color components of the pixels to be interpolated to obtain the target image.
5. A color interpolation apparatus, comprising an obtaining module, a converting module, a first determining module, a second determining module, an alignment module, and an interpolation module, wherein:
the obtaining module is configured to acquire at least two frames of original images to be processed;
the converting module is configured to convert each of the at least two frames of original images into a corresponding grayscale image to obtain at least two frames of grayscale images;
the first determining module is configured to determine a reference frame image from the at least two frames of grayscale images;
the interpolation module is configured to perform weighted interpolation on the at least two frames of aligned original images according to the gradient features of the reference frame image to obtain a target image;
the second determining module is configured to determine a sub-pixel displacement from the k-th frame image to the reference frame image, the k-th frame image being any grayscale image, among the at least two frames of grayscale images, other than the reference frame image; and is further configured to perform feature point detection on the at least two frames of grayscale images respectively, to obtain a first feature point set of the reference frame image and a second feature point set of the k-th frame image; perform feature point matching on the first feature point set and the second feature point set, to obtain N groups of closest-matching feature point pairs between the reference frame image and the k-th frame image, wherein N is an integer greater than or equal to 4; determine a homography matrix from the k-th frame image to the reference frame image according to the coordinates of the N groups of feature point pairs; and determine the sub-pixel displacement according to the homography matrix; and
the alignment module is configured to perform image alignment on the at least two frames of original images according to the sub-pixel displacement.
6. The apparatus of claim 5, wherein the first determining module comprises a first determining submodule, a second determining submodule, and a third determining submodule, wherein:
the first determining submodule is configured to determine the luminance feature of each of the at least two frames of grayscale images;
the second determining submodule is configured to determine the sharpness value of each grayscale image according to the luminance feature; and
the third determining submodule is configured to take the frame with the maximum sharpness value among the grayscale images as the reference frame image.
7. A color interpolation device comprising a memory and a processor, the memory storing a computer program operable on the processor, wherein the processor implements the steps of the method of any one of claims 1 to 4 when executing the program.
8. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the method of any one of claims 1 to 4.
CN202010305516.2A 2020-04-17 2020-04-17 Color interpolation method and device, equipment and storage medium Active CN111510691B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010305516.2A CN111510691B (en) 2020-04-17 2020-04-17 Color interpolation method and device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010305516.2A CN111510691B (en) 2020-04-17 2020-04-17 Color interpolation method and device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111510691A CN111510691A (en) 2020-08-07
CN111510691B true CN111510691B (en) 2022-06-21

Family

ID=71864733

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010305516.2A Active CN111510691B (en) 2020-04-17 2020-04-17 Color interpolation method and device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111510691B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112184575A (en) * 2020-09-16 2021-01-05 华为技术有限公司 Image rendering method and device
CN112598577B (en) * 2020-12-24 2022-02-11 暨南大学 Image interpolation method, system and storage medium based on dislocation sampling
CN112634165B (en) * 2020-12-29 2024-03-26 广州光锥元信息科技有限公司 Method and device for image adaptation VI environment
CN116547979A (en) * 2021-01-15 2023-08-04 华为技术有限公司 Image processing method and related device
CN113160095B (en) * 2021-05-25 2023-05-19 烟台艾睿光电科技有限公司 Infrared detection signal pseudo-color processing method, device, system and storage medium
CN113538538B (en) * 2021-07-29 2022-09-30 合肥的卢深视科技有限公司 Binocular image alignment method, electronic device, and computer-readable storage medium

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004152148A (en) * 2002-10-31 2004-05-27 Fuji Photo Film Co Ltd Dynamic image composition method and device, program
US7412107B2 (en) * 2004-12-17 2008-08-12 The Regents Of The University Of California, Santa Cruz System and method for robust multi-frame demosaicing and color super-resolution
KR100754661B1 (en) * 2006-07-24 2007-09-03 삼성전자주식회사 Method and apparatus for color interpolation in digital camera
KR102224851B1 (en) * 2014-12-11 2021-03-08 삼성전자주식회사 Image Processing Device and Image Processing System performing sub-pixel interpolation
CN104574277A (en) * 2015-01-30 2015-04-29 京东方科技集团股份有限公司 Image interpolation method and image interpolation device
CN105141838B (en) * 2015-08-19 2018-08-07 上海兆芯集成电路有限公司 Demosaicing methods and the device for using this method
CN108734668B (en) * 2017-04-21 2020-09-11 展讯通信(上海)有限公司 Image color recovery method and device, computer readable storage medium and terminal
CN109285121A (en) * 2017-07-20 2019-01-29 北京凌云光子技术有限公司 A kind of Bayer image restoring method
CN109788261B (en) * 2017-11-15 2021-06-22 瑞昱半导体股份有限公司 Color offset correction method and device

Also Published As

Publication number Publication date
CN111510691A (en) 2020-08-07

Similar Documents

Publication Publication Date Title
CN111510691B (en) Color interpolation method and device, equipment and storage medium
CN110827200B (en) Image super-resolution reconstruction method, image super-resolution reconstruction device and mobile terminal
Pekkucuksen et al. Gradient based threshold free color filter array interpolation
US9578259B2 (en) Systems and methods for reducing motion blur in images or video in ultra low light with array cameras
US9275463B2 (en) Stereo image processing device and stereo image processing method
WO2013031367A1 (en) Image processing device, image processing method, and program
JP5306563B2 (en) Imaging apparatus and image generation method
JP5047287B2 (en) Sparse integral image descriptor with application to motion analysis
CN110430403B (en) Image processing method and device
CN102106150A (en) Imaging processor
US20110141321A1 (en) Method and apparatus for transforming a lens-distorted image to a perspective image in bayer space
JP2010016812A (en) Image processing apparatus and method, and computer-readable medium
Chang et al. Stochastic color interpolation for digital cameras
CN101778297B (en) Interference elimination method of image sequence
Asiq et al. Efficient colour filter array demosaicking with prior error reduction
KR101327790B1 (en) Image interpolation method and apparatus
WO2015083502A1 (en) Image processing device, method and program
Sreegadha Image interpolation based on multi scale gradients
Cho et al. Improvement on Demosaicking in Plenoptic Cameras by Use of Masking Information
WO2015083499A1 (en) Image processing device, method and program
Jeong et al. Edge-Adaptive Demosaicking for Reducing Artifact along Line Edge
Sibiryakov Sparse projections and motion estimation in colour filter arrays
Gupta Gradient based Multispectral Demosaicking Method using Single Sensor Array
Cheng et al. An adaptive color plane interpolation method based on edge detection
Liu Aggregating color and absolute difference for CFA interpolation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant