CN117745531A - Image interpolation method, apparatus and readable storage medium - Google Patents

Image interpolation method, apparatus and readable storage medium

Info

Publication number
CN117745531A
Authority
CN
China
Prior art keywords
rendered
pixel
pixels
value
orthogonal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410183980.7A
Other languages
Chinese (zh)
Inventor
高蓉
覃正才
刘德平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ruidan Microelectronics Technology Shanghai Co ltd
Original Assignee
Ruidan Microelectronics Technology Shanghai Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ruidan Microelectronics Technology Shanghai Co ltd filed Critical Ruidan Microelectronics Technology Shanghai Co ltd
Priority to CN202410183980.7A priority Critical patent/CN117745531A/en
Publication of CN117745531A publication Critical patent/CN117745531A/en
Pending legal-status Critical Current

Abstract

An image interpolation method, apparatus, and readable storage medium, wherein the method comprises: obtaining a feature value of the rendered pixels according to the rendered pixels in the direction orthogonal to the current sampling direction; determining, based on the feature value of the rendered pixels and the current pixel to be rendered, the weight values corresponding to each of them, and acquiring orthogonal feature information in the direction orthogonal to the current sampling direction, where the orthogonal feature information is the weighted sum of the feature value of the rendered pixels and the current pixel to be rendered; performing a secondary judgment on the current pixel to be rendered according to the orthogonal feature information; and adjusting the interpolation parameter of the current pixel to be rendered according to the result of the secondary judgment. The method not only effectively improves computation speed and saves computing power, but also improves the quality of image processing.

Description

Image interpolation method, apparatus and readable storage medium
Technical Field
The present invention relates to the field of image processing, and in particular, to an image interpolation method, apparatus, and readable storage medium.
Background
Conventional display screens are typically composed of a two-dimensional array of pixels. Each pixel consists of sub-pixel modules arranged in a fixed order, usually the three sub-pixel channels red (R), green (G), and blue (B). The sub-pixels of each pixel are mixed to display that pixel, so that the corresponding image appears on the display device.
However, as display technology develops, the micro-display field demands ever higher visual resolution, while the difficulty of miniaturizing the LED module and the pixel circuit limits how far the physical resolution of a display can be raised; improving it therefore faces a technical bottleneck.
Taking an AMOLED screen as an example, the sub-pixels of the display panel are no longer arranged in the conventional R, G, B order; common arrangements include the Pentile type, the Delta type, and so on. Such non-conventional RGB display panels come in various sub-pixel arrangements, and each arrangement requires a corresponding interpolation method to convert RGB data into that sub-pixel structure. For example, on a display screen whose sub-pixels are arranged in RGB-Delta, sub-pixels corresponding to 3m×2n×3 are rendered and output as 2m×2n×3 sub-pixels, or sub-pixels corresponding to 3m×2n×2 are rendered and output as 3m×2n×3 sub-pixels.
Conventional methods mostly take adjacent same-channel pixels as the sampling data for the interpolation of the output sub-pixels. Because the sampling range of that data, or the hardware storage space, is limited, features of the sampled pixels at larger scales cannot be extracted well, so image features that are representative at a larger scale are easily lost during pixel rendering.
The Chinese patent application with application number 202111539264.0 provides a sub-pixel rendering method that reads the sub-pixel data of each pixel, obtains a bright/dark state map of each sub-pixel by comparing the sub-pixel data with a set threshold, determines the data-rendering weight coefficient of the corresponding sub-pixel from the bright/dark state map, and computes the output data value of the current pixel for each sub-pixel by combining the bright/dark state maps of the current pixel and its upper, lower, left, and right neighbors, thereby reducing the number of sub-pixels by 1/2 compared with a traditional stripe-type RGB (red, green, blue) array display.
The Chinese patent application with application number 202211411060.3 provides an image interpolation method that determines a weight factor in the interpolation formula relating source and target pixels from how the gray-value deviation between two adjacent source pixels compares with a preset threshold, and then computes the pixel value of the target pixel.
The Chinese patent application with application number 202011537468.6 provides an image processing mechanism that normalizes the brightness and saturation values of the current frame to obtain RGBW pixel data, then applies inverse normalization to that data before output, thereby improving the display effect.
The information disclosed in the background section of the invention is only for enhancement of understanding of the general background of the invention and should not be taken as an acknowledgement or any form of suggestion that this information forms the prior art already known to a person skilled in the art.
Disclosure of Invention
In order to solve the technical problems, the invention provides an image interpolation method, an image interpolation device and a readable storage medium.
According to an aspect of the present invention, there is provided an image interpolation method including: obtaining a feature value of the rendered pixels according to the rendered pixels in the direction orthogonal to the current sampling direction; determining, based on the feature value of the rendered pixels and the current pixel to be rendered, the weight values corresponding to each of them, and acquiring orthogonal feature information in the direction orthogonal to the current sampling direction, where the orthogonal feature information is the weighted sum of the feature value of the rendered pixels and the current pixel to be rendered; performing a secondary judgment on the current pixel to be rendered according to the orthogonal feature information; and adjusting the interpolation parameter of the current pixel to be rendered according to the result of the secondary judgment.
According to an aspect of the present invention, there is provided an image interpolation apparatus including: a feature value calculation module for obtaining the feature value of the rendered pixels according to the rendered pixels in the direction orthogonal to the current sampling direction; a weighting calculation module for determining, based on the feature value of the rendered pixels and the pixel to be rendered obtained by the feature value calculation module, the weight values corresponding to each of them, and acquiring orthogonal feature information in the direction orthogonal to the current sampling direction, where the orthogonal feature information is the weighted sum of the feature value of the rendered pixels and the pixel to be rendered; a secondary judgment module for performing a secondary judgment on the current pixel to be rendered according to the orthogonal feature information; and an interpolation module for determining the interpolation parameter of the current pixel to be rendered according to the result of the secondary judgment.
According to an aspect of the present invention, there is provided a computer readable storage medium storing program data which, when run, is operable to perform the above method.
Compared with the prior art, the method and apparatus not only sample pixels along the sampling direction but also add pixels in the direction orthogonal to it, increasing the number of pixels participating in the calculation, compensating for the data of that dimension that may be missing during pixel rendering, and producing a less jagged result.
The method and apparatus of the present invention have other features and advantages which will be set forth in detail in the accompanying drawings and the detailed description that follow, which together serve to explain certain principles of the invention.
Drawings
The foregoing and other objects, features and advantages of the invention will be apparent from the following more particular descriptions of exemplary embodiments of the invention as illustrated in the accompanying drawings wherein like reference numbers generally represent like parts throughout the exemplary embodiments of the invention.
FIG. 1 is a schematic diagram of a frame of one embodiment of an image interpolation apparatus for improving pixel rendering in accordance with the present invention;
FIG. 2 is a flow chart of one embodiment of a method of image interpolation for improving pixel rendering according to the present invention;
FIG. 3 is a schematic flow chart of step S2 in an embodiment of the image interpolation method for improving pixel rendering according to the present invention;
FIG. 4 is a schematic flow chart of step S2 in yet another embodiment of the image interpolation method for improving pixel rendering according to the present invention;
FIG. 5 is a schematic diagram of a frame of one embodiment of an image interpolation apparatus for improving pixel rendering in accordance with the present invention;
FIG. 6 is a schematic diagram of a frame of still another embodiment of an image interpolation apparatus for improving pixel rendering according to the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings in the embodiments of the present invention, so that the advantages and features of the present invention can be more easily understood by those skilled in the art and the protection scope of the present invention more clearly defined. It should be apparent that the described embodiments are only some, not all, embodiments of the invention. All other embodiments obtained by those skilled in the art based on the embodiments of the invention without inventive effort are intended to fall within the scope of the invention.
In this application, the use of "or" means "and/or" unless stated otherwise. Furthermore, the use of the terms "include," "includes," and "comprising," among other forms, is not limiting. In addition, unless specifically stated otherwise, terms such as "element" or "component" encompass both elements and components comprising one module and elements and components comprising more than one module.
The display resolution of a display screen is generally expressed as the product of the pixels on the X-axis and the pixels on the Y-axis, for example 1024 x 768, meaning 1024 pixels in the horizontal direction and 768 pixels in the vertical direction. The display medium may be the display screen of an electronic device, or the display screen of another device connected to the electronic device, for example a mobile phone, a smart watch, VR glasses, a wearable device, or the like.
It is common practice in the prior art to traverse all pixel points of the image to be rendered according to the size of a sampling window and to obtain output pixels with a suitable sampling and rendering algorithm. However, the inventors have found that in practice, although the sampling algorithm theoretically traverses all pixels, the limits of computing power and data storage mean that it always samples and computes the corresponding interpolation parameters along a single dimension, such as the row direction or the column direction, so that each sampling references only one or a few rows/columns of data in the sampling direction; it may therefore misjudge the global contribution of the current sampling point. For example, some pixels in an image represent significant features at a larger scale, yet because of limitations of existing algorithms, such as the sampling-window size or the algorithm logic, these pixels may be ignored within a single sample. Such pixels are called outliers (special points): they contain significant image features that cannot be effectively carried through the current step of the sampling algorithm. The misjudgment is then further amplified in the subsequent interpolation-parameter calculation, degrading the quality of the output image.
To address the above limitation of referencing only single-dimensional sampling data during sampling, the present application proposes the following inventive concept: based on the rendered pixels, feature information in the orthogonal direction is obtained; this orthogonal feature information is used to make a secondary judgment on the pixel to be rendered, and the interpolation parameter of the current pixel to be rendered is adjusted according to the result of that judgment, so as to adjust the rendering result and improve image quality.
Referring to fig. 1, an image interpolation apparatus 100 for improving pixel rendering is provided according to some embodiments of the present invention, and may include at least: a power supply 110, a processor 120, a memory 130, a display screen 140.
It will be appreciated that the configuration illustrated in the embodiments of the present invention does not constitute a specific limitation on the image interpolation apparatus. In other embodiments of the present application, the image interpolation device may include more or fewer components than shown, or may combine certain components, or split certain components, or a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Wherein the power supply 110 is used for supplying power to the processor 120, the memory 130, the display screen 140, and the like. The power supply 110 may further include battery capacity monitoring and the like. In some embodiments, the power supply 110 may be an integrated power supply or may be an external power supply.
Processor 120 may include one or more processing modules, for example: processor 120 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a video codec, a digital signal processor (DSP), and/or a neural network processing unit (NPU), etc. The different processing modules may be separate devices or may be integrated in one or more processors. In particular, the digital signal processor may be used to process digital signals, for example to Fourier-transform frequency-bin energies. A video codec may be used to compress or decompress a video signal. The NPU realizes autonomous learning from input signals by mimicking the way signals are transmitted between structures of biological neural networks; for example, the NPU can implement functions such as image recognition and speech recognition. In some embodiments, the processor 120 may further include a controller that generates operation-control signals from instruction opcodes and timing signals, completing instruction fetch and execution control to implement the configured functions.
Wherein the memory 130 may be used to store code and data. The memory 130 may be internal storage or may include external storage, for example, the external storage may communicate with the processor 120 through an interface to store and read data and/or code. Memory 130 may also include internal storage devices for storing computer-executable program code, enabling storage of code for an operating system, application programs, and the like. In some embodiments, processor 120 may further include one or more caches for storing instructions and data in order to reduce latency of repeated accesses.
The display 140 may include a display panel, such as a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), Mini-LED, Micro-LED, Micro-OLED, a quantum-dot light-emitting diode (QLED), or the like. In some embodiments, the image interpolation device 100 may include one or more display screens 140 for displaying images according to the output values.
In some embodiments, the image interpolation apparatus 100 obtains the feature value of the rendered pixel according to the rendered pixel in the direction orthogonal to the current sampling direction during the sampling process; determining weight values respectively corresponding to the characteristic values of the rendered pixels and the pixels to be rendered based on the characteristic values of the rendered pixels and the pixels to be rendered at present, and acquiring orthogonal characteristic information in a direction orthogonal to the current sampling direction; the orthogonal characteristic information is the weighted sum of the characteristic value of the rendered pixel and the pixel to be rendered currently; performing secondary judgment on the current pixel to be rendered according to the orthogonal characteristic information; and adjusting interpolation parameters of the current pixel to be rendered according to the secondary judgment result. The orthogonal characteristic information is adopted to supplement the data reference in the direction orthogonal to the sampling direction, so that the image rendering quality is effectively improved with less cost increase under the condition of not affecting the existing sampling method.
Further, in some embodiments, the image interpolation apparatus 100 may also increase the contribution of the pixel to be rendered to the output result. For example, various sharpening modes may be used: the weight of the pixel to be rendered is increased during interpolation so that it is sharpened and its image features are enlarged. As another example, the pixel to be rendered may be copied directly to the corresponding output pixel.
The image interpolation device 100 may extract an image to be processed through the memory 130, may capture an image or video through a camera, and may receive the image to be processed through the communication module.
Further, in some embodiments, the image interpolation apparatus 100 may include a camera (not shown) to acquire still images or video. The camera comprises at least a lens and an optical projection structure, so that an object is imaged through the lens and projected onto a photosensitive element such as a CCD (charge-coupled device) or CMOS sensor; the photosensitive element converts the optical signal into an electrical signal, which is then processed into an image signal in RGB/YUV format. In some embodiments, the image interpolation device 100 may include one or more cameras.
Further, in some embodiments, the image interpolation device 100 may include a communication module (not shown) for receiving the image to be processed. The communication module communicates with other devices via WiFi, Bluetooth, NFC, infrared, or other means to obtain the image to be processed. In some embodiments, at least some of the functional modules of the communication module may be provided in the processor 120 or in the same device as some of the modules of the processor 120.
Referring to fig. 2, according to an aspect of the present invention, based on the above components of the image interpolation apparatus 100, there is provided an image interpolation method for improving pixel rendering, comprising:
step S1, according to the rendered pixels in the direction orthogonal to the current sampling direction, the characteristic values of the rendered pixels are obtained. In the initial state, the characteristic value of the rendered pixel is the pixel value of the first rendered pixel.
For example, the arithmetic mean of the rendered pixels may be used as the feature value of the rendered pixels, i.e., acc0 = (p_1 + p_2 + ... + p_n) / n, where p_k is the pixel value of the k-th rendered pixel and n is the number of rendered pixels. As another example, the harmonic mean of the rendered pixels may be used as the feature value, i.e., acc0 = n / (1/p_1 + 1/p_2 + ... + 1/p_n), where p_k again denotes the pixel value of a rendered pixel.
In addition, the geometric mean may also be used to obtain the feature value of the corresponding rendered pixels; in practice, different averaging methods can be chosen according to the requirements of the image to be rendered.
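To make step S1 concrete, the following is a minimal Python/NumPy sketch of the three averaging options just mentioned; the function name, the 1-D input shape, and the assumption of strictly positive pixel values for the harmonic and geometric means are illustrative choices, not part of the patent.

```python
import numpy as np

def rendered_feature_value(rendered_pixels, mode="arithmetic"):
    """Compute the feature value acc0 of already-rendered pixels.

    rendered_pixels: 1-D array of pixel values taken along the direction
    orthogonal to the current sampling direction (illustrative input shape).
    """
    p = np.asarray(rendered_pixels, dtype=np.float64)
    if mode == "arithmetic":
        return float(p.mean())
    if mode == "harmonic":
        # Harmonic mean; assumes strictly positive pixel values.
        return float(len(p) / np.sum(1.0 / p))
    if mode == "geometric":
        # Geometric mean; also assumes strictly positive pixel values.
        return float(np.exp(np.mean(np.log(p))))
    raise ValueError(f"unknown mode: {mode}")
```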
Step S2, determining weight values respectively corresponding to the characteristic values of the rendered pixels and the pixels to be rendered based on the characteristic values of the rendered pixels and the pixels to be rendered currently, and acquiring orthogonal characteristic information in the direction orthogonal to the current sampling direction.
In some implementations, the orthogonal feature information is the weighted sum of the rendered-pixel feature value and the current pixel to be rendered. For example, the orthogonal feature information acc is computed as acc = α·acc0 + β·C, where acc0 is the rendered-pixel feature value, C is the current pixel value to be rendered, and the weights α and β are both less than 1 with α + β = 1. In a specific embodiment, the larger the weight β, the larger the share of the current pixel value to be rendered and the more responsive the result is to the current image; conversely, the smaller β is, the more stable the result is with respect to the current pixel value to be rendered.
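A minimal sketch of this weighted combination, assuming β is supplied directly and α is derived as 1 − β (the function and argument names are illustrative):

```python
def orthogonal_feature(acc0, current_pixel, beta=0.1):
    """Orthogonal feature information acc = alpha * acc0 + beta * C,
    with alpha = 1 - beta so that the two weights sum to 1."""
    alpha = 1.0 - beta
    return alpha * acc0 + beta * current_pixel
```

For example, with acc0 = 120, a current pixel value of 200, and β = 0.1, this returns 0.9·120 + 0.1·200 = 128.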
Take rendering a 1920 x 1080 image for display on a 1280 x 1080 screen as an example, and assume a 3 x 3 sampling block is used to sample the input pixels and produce the rendering result. When rendering, such a nine-cell sampling block is usually iterated three rows (or three columns) at a time; that is, the data is buffered every three rows or columns and no information from the other spatial dimension is added. Taking three-row iteration as an example, during rendering all available sampling information is the current three rows of image data, and pixels already traversed outside those three rows are normally not saved, out of consideration for buffer space. The pixel data of the current three rows are added to the calculation progressively as sampling advances, so even after all pixels in the three rows have been traversed, the column direction still contains only three values each time and earlier or later pixel data cannot be referenced; the column-direction data are therefore deficient. This lack of sampling data in the direction orthogonal to the sampling direction can cause misjudgment in some scenes, so that the interpolation parameters of the pixels to be rendered do not match their contribution to the global image, and the rendering effect suffers. In the embodiments of this application, by adding only a minimal buffer, the local pixels in the direction orthogonal to the current sampling direction are brought into the calculation; after the pixel to be rendered undergoes a secondary judgment based on the orthogonal feature information, its interpolation parameter is adjusted according to the result. This compensates for the possible loss of data in that dimension during pixel rendering and obtains a large improvement in rendering quality at minimal cost.
Specifically, step S2 may further include: determining the weight value α corresponding to the rendered-pixel feature value and the weight value β corresponding to the current pixel to be rendered according to the current image characteristics, historical data, and empirical values.
In one embodiment, the weight value β corresponding to the current pixel to be rendered is 10% and the weight value corresponding to the rendered-pixel feature value is 90%, so the orthogonal feature information is computed as acc = 0.9·acc0 + 0.1·C. With β = 0.1, the current pixel to be rendered contributes only 10% of the orthogonal feature information acc, and the data behind acc takes into account at least the last 10 traversed pixels in the column in which the current pixel to be rendered lies.
In another embodiment, the initial value of acc is the value of the first row/column. When computing the pixel to be rendered C_{i,j} in row i and column j, the corresponding acc0 is the feature value obtained by iterating, in order, over all pixels of column j from row 1 to row i-1. In this case, even when the first few rows are being sampled, the orthogonal feature information does not suffer from having little preceding data. On the other hand, because the orthogonal feature information is built by iteration, although only the rendered-pixel feature value acc0 is retained, the influence of all preceding pixels is in fact taken into account. In addition, the degree to which accumulated pixels influence the current one can be controlled effectively by adjusting the weight coefficients; for example, the weight of pixels farther from the current pixel to be rendered decays, reducing their impact on the result.
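The per-column iteration described above can be sketched as follows; the sketch assumes row-wise sampling over a 2-D image whose values are already rendered, so that one row of accumulators (one value per column) is the only extra buffer, mirroring the text. The array layout and function name are hypothetical.

```python
import numpy as np

def column_accumulators(image, beta=0.1):
    """Walk the image row by row and keep, for every column j, the iterated
    orthogonal feature value built from rows 0..i-1 (the acc0 seen by row i)."""
    alpha = 1.0 - beta
    acc = image[0].astype(np.float64)          # initial value: the first row
    for i in range(1, image.shape[0]):
        # Before row i is processed, acc holds acc0 for each column;
        # folding row i in decays older rows geometrically by alpha.
        acc = alpha * acc + beta * image[i].astype(np.float64)
    return acc
```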
It is easy to see that by introducing the orthogonal feature information acc, the limitation of the sampling window is broken in the direction orthogonal to the sampling direction at very low cost, and data from the other spatial dimension is supplemented, improving image rendering quality.
According to still further embodiments of the present invention, step S2 may further include: adaptively adjusting the weight value β corresponding to the current pixel value to be rendered according to how the rendered pixel values vary with pixel position, and then obtaining the other weight value α.
For example, step S2 may include: adaptively adjusting the weight value β of the pixels to be rendered in the next row or rows according to how strongly the rendered pixel values of the whole row vary with position along the current sampling direction; or adaptively adjusting the weight value β of the pixels to be rendered in the next column or columns according to how strongly the rendered pixel values of the whole column vary with position along the current sampling direction. For example, if the rendered pixel values of the current row or column vary strongly with position, the weight value β of the pixels to be rendered in the next row or column is made smaller, and vice versa.
Specifically, the second derivative of each rendered pixel's value with respect to its position can be used as the measure of how strongly the rendered pixel values vary with position. In one implementation, in some embodiments, input pixels are sampled and rendered row by row. Once the current row i has been rendered, and before the pixels to be rendered in row i+1 are sampled, the second derivative of the pixel value with respect to position is computed for each pixel of row i, and the weight value β of the pixel to be rendered in the same column of the next row is adaptively adjusted according to the result. In particular, when the second derivative σ_{i,j} of the rendered pixel C_{i,j} in row i, column j with respect to its position (x_{i,j}, y_{i,j}) is zero, the rendered pixel at that position is an edge transition or an isolated point, and accordingly the weight value β of the pixel to be rendered in the same column of the next row can be increased.
For example, when resource consumption is a concern, the first derivative of the pixel value with respect to position can be computed from the rendered pixel values of the current row or column, and the weight value β corresponding to the pixels to be rendered in the following row or column adjusted adaptively according to the result.
In other embodiments, the weight value β of the pixels to be rendered in the following row or column may be adaptively adjusted according to other relationships between the rendered pixels and their positions along the current sampling direction, for example the rate at which brightness changes with position.
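As one possible realization of this adaptive adjustment, the sketch below uses the discrete second difference of a rendered row as a stand-in for the second derivative with respect to position; the thresholds and the β bounds are illustrative values, not taken from the patent.

```python
import numpy as np

def adapt_beta(rendered_row, beta_base=0.1, beta_min=0.05, beta_max=0.2):
    """Choose the weight beta for the next row from how strongly the current
    rendered row varies with position (strong variation -> smaller beta)."""
    row = np.asarray(rendered_row, dtype=np.float64)
    second_diff = np.abs(np.diff(row, n=2))    # discrete second difference
    variation = second_diff.mean() if second_diff.size else 0.0
    if variation > 8.0:        # strongly varying row: trust history more
        return beta_min
    if variation < 1.0:        # nearly flat row: let new pixels weigh more
        return beta_max
    return beta_base
```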
Step S3, performing a secondary judgment on the current pixel to be rendered according to the orthogonal feature information.
In some embodiments, step S3 may include determining whether the current pixel to be rendered is an outlier. Specifically, the pixel to be rendered may be compared with the orthogonal feature information, and when the difference between the pixel value of the pixel to be rendered and the orthogonal feature information exceeds a set threshold, the pixel to be rendered is treated as a special point.
Because the orthogonal feature information extends along the direction orthogonal to the sampling direction, it provides the pixel to be rendered with supplementary data in that direction that is not limited by the sampling window. For example, with acc = 0.9·acc0 + 0.1·C, the weight β of the current pixel to be rendered is 10%, the weight of the rendered-pixel feature value is 90%, and the rendered-pixel feature value carries the influence of all preceding pixels in the same column. When the difference between the pixel to be rendered and the orthogonal feature information is too large, that pixel differs considerably from the pixels already traversed in the orthogonal direction, so it can be regarded as a special point. By comparing the pixel to be rendered with the orthogonal feature information, whether it is a special point can be judged more accurately, so that the appropriate operation can be applied to it.
In some embodiments, the set threshold may be determined from the color difference the human eye can discern, or by other means. When the pixel to be rendered is a special point, it usually shows a large visual difference from at least one adjacent pixel, one the naked eye cannot ignore; such a difference may correspond to a switch between color blocks or lines in the image, or to the replacement of an image element. For example, a threshold of 16 or 18 may be set; if the difference between the pixel to be rendered and the orthogonal feature information exceeds the threshold, the pixel to be rendered is an outlier. In some embodiments, the comparison against the set threshold may also use the variance, a ratio, or other measures.
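The secondary judgment of step S3 then reduces to a threshold test such as the following sketch (the default threshold of 16 is one of the example values above; the function name is illustrative):

```python
def is_special_point(pixel_value, acc, threshold=16):
    """Flag the pixel to be rendered as a special point (outlier) when it
    deviates from the orthogonal feature information acc by more than a
    visually noticeable threshold."""
    return abs(pixel_value - acc) > threshold
```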
It is not difficult to see that, with the above steps of the present application, the orthogonal feature information effectively enlarges the reference range of the pixel to be rendered, so whether the pixel to be rendered is an outlier can be judged more accurately. Moreover, because the orthogonal feature information is used, only one extra row or column of data storage is needed in actual operation, a negligible burden compared with the amount of data being computed.
In other embodiments, step S3 may also include determining whether the current pixel to be rendered is a noise point.
Referring to fig. 3, after step S3, a step S4 may further be included: adjusting the interpolation parameter of the pixel to be rendered according to the result of the secondary judgment.
For example, in some embodiments, equidistant sampling is employed, where the sampling function f(x) and the interpolation function g(x) have the following relationship:
g(x) = Σ_k c_k · u((x - x_k) / h),
where x_k is an interpolation node, c_k is a parameter that depends on the sampled data, i.e., c_k = f(x_k), h is the sampling increment such that for any two adjacent nodes x_k and x_{k+1} we have x_{k+1} - x_k = h, and u is the interpolation kernel function.
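A minimal sketch of this equidistant kernel interpolation follows; the triangle kernel used as the default (which yields plain linear interpolation) is only an illustrative choice of u, not the kernel of the patent.

```python
import numpy as np

def interpolate(samples, x, h=1.0,
                kernel=lambda t: np.maximum(0.0, 1.0 - np.abs(t))):
    """g(x) = sum_k c_k * u((x - x_k) / h) over equidistant nodes x_k = k * h,
    with c_k = f(x_k) taken directly from the sampled data."""
    c = np.asarray(samples, dtype=np.float64)
    nodes = np.arange(len(c)) * h              # equidistant interpolation nodes
    return float(np.sum(c * kernel((x - nodes) / h)))
```

With samples = [0, 10, 20] and x = 0.5, this returns 5.0, as expected for linear interpolation.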
Step S4 may include a step S41: when step S3 determines that the pixel to be rendered needs to be strengthened, the corresponding interpolation parameter is enlarged, the pixel to be rendered is sharpened, and its image features are amplified. As another example, the pixel to be rendered may also be copied directly to the corresponding output pixel.
In still other embodiments, step S4 may further include a step S42 of determining whether the pixel to be rendered is a noise point. For example, an existing noise-judgment algorithm can be applied to the pixel, and when the pixel is judged to be noise, the corresponding interpolation parameter is reduced; in particular, the pixel can simply be excluded.
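Steps S41 and S42 can be summarized by a small adjustment rule such as the sketch below; the boost factor and the decision inputs are illustrative, and a real implementation would plug into whatever interpolation parameters the chosen sampling algorithm exposes.

```python
def adjust_interp_parameter(base_param, is_special, is_noise, boost=1.5):
    """Adjust the interpolation parameter of the current pixel to be rendered
    according to the secondary judgment result."""
    if is_noise:
        return 0.0                  # S42: exclude noise points outright
    if is_special:
        return base_param * boost   # S41: enlarge the parameter / sharpen
    return base_param
```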
Referring to fig. 4, in some embodiments a step S0 may precede step S1 and the calculation of the orthogonal feature information: acquiring the image to be processed and sampling it.
It should be noted that the embodiments of the present application do not restrict the algorithm used in the sampling direction; sampling algorithm steps entirely different from those of these embodiments may be adopted as needed, and the sampling algorithm, window settings, and the like do not limit the embodiments of the present application. In fact, the above embodiments may be used alone or combined with different sampling or rendering algorithms.
In some embodiments, the image to be processed may be an image frame of a video stream, or an image captured by a camera. In particular, acquiring the image to be processed may include: receiving the image to be processed from another device, capturing it with a camera, or extracting it from a video stream. For example, the image to be processed may be acquired from an external storage device or a photographing device through a USB interface, or via Bluetooth, infrared, WiFi transmission, or the like. In other embodiments, an image may be captured by the camera on instruction, or a video may be captured on instruction to obtain a set of image frames from which the image to be processed is taken.
Referring to fig. 5, according to another aspect of the present invention, the image interpolation apparatus 100 may further include:
the feature value calculating module 510 is configured to obtain a feature value of a rendered pixel according to the rendered pixel in a direction orthogonal to the current sampling direction. In the initial state, the characteristic value of the rendered pixel is the pixel value of the first rendered pixel;
the weighting calculation module 520 is configured to determine weight values corresponding to the feature values of the rendered pixels and the pixels to be rendered respectively according to the feature values of the rendered pixels and the pixels to be rendered obtained by the feature value calculation module 510, and obtain orthogonal feature information in a direction orthogonal to the current sampling direction.
In some implementations, the weighting calculation module 520 uses the weighted sum of the rendered-pixel feature value and the current pixel to be rendered as the orthogonal feature information; for example, acc = α·acc0 + β·C may be used to compute the orthogonal feature information acc.
In some embodiments, the weighting calculation module 520 may further set the weight values corresponding to the rendered pixel feature values and the pixels to be rendered, respectively. The weight values α and β corresponding to the rendered pixel characteristic value acc0 and the current pixel value to be rendered are values smaller than 1, and the sum of α and β is 1. In a specific embodiment, the weight value β corresponding to the pixel to be rendered at present should be at least smaller than 0.5.
In some embodiments, the weighting calculation module 520 may adaptively adjust the weight β corresponding to the current pixel value to be rendered according to the rule that the rendered pixel value changes with the position of the pixel point, and further obtain another weight α.
Referring to fig. 6, the apparatus may further include: a secondary judgment module 530 adapted to perform a secondary judgment on the current pixel to be rendered according to the calculation result of the weighting calculation module 520.
In some embodiments, the secondary judgment module 530 may be a difference calculation module adapted to compute the absolute difference between the pixel to be rendered and the orthogonal feature information and compare it with the set threshold. In other embodiments, the secondary judgment module 530 may instead be a ratio calculation module or a variance calculation module.
Based on the determination result of the secondary judgment module 530, an interpolation module 540 may further be included, adapted to determine the interpolation parameter of the current pixel to be rendered. For example, the weight of the pixel to be rendered can be increased during interpolation so that the pixel is sharpened and its image features are amplified.
It should be understood that the connection relationship between the modules illustrated in the embodiment of the present invention is only illustrative, and does not constitute a structural limitation of the image interpolation apparatus 100. In other embodiments of the present application, a module connection manner different from that of the above embodiment, or a combination of hardware and software may be used. Acquisition of the image to be processed may also be achieved, for example, by a camera or video codec, etc.
Furthermore, according to still another aspect of the present invention, there is also provided a computer-readable storage medium storing program data which, when executed, is configured to perform the above-described respective embodiments of the method of the present application.
Compared with the prior art, the method and apparatus obtain orthogonal feature information from the rendered pixels, add only a small buffer to bring the local pixels in the direction orthogonal to the current sampling direction into the calculation, and perform a secondary judgment on the pixels to be rendered, so that the corresponding interpolation parameters are adjusted with the help of the supplementary data in the orthogonal direction and the rendering effect is greatly improved at extremely low cost. Moreover, each embodiment can be combined with existing image-processing algorithms with essentially no increase in computational complexity while markedly improving the displayed image, and therefore has broad application prospects.
The foregoing is merely illustrative of the preferred embodiments of the present invention and is not intended to limit the embodiments and scope of the present invention, and it should be appreciated by those skilled in the art that equivalent substitutions and obvious variations may be made using the description and illustrations herein, which should be included in the scope of the present invention.

Claims (13)

1. An image interpolation method, comprising:
according to the rendered pixels in the direction orthogonal to the current sampling direction, obtaining the characteristic values of the rendered pixels;
determining weight values respectively corresponding to the characteristic values of the rendered pixels and the pixels to be rendered based on the characteristic values of the rendered pixels and the pixels to be rendered at present, and acquiring orthogonal characteristic information in a direction orthogonal to the current sampling direction; the orthogonal characteristic information is the weighted sum of the characteristic value of the rendered pixel and the pixel to be rendered currently;
performing secondary judgment on the current pixel to be rendered according to the orthogonal characteristic information;
and adjusting interpolation parameters of the current pixel to be rendered according to the secondary judgment result.
2. The image interpolation method of claim 1, wherein a sum of weight values respectively corresponding to the rendered pixel characteristic value and the pixel to be rendered is 1.
3. The image interpolation method according to claim 2, wherein the pixel to be rendered has a weight value of 0.1-0.2.
4. The image interpolation method of claim 1, wherein determining the weight value corresponding to the pixel to be rendered comprises: and according to the change rule of the rendered pixel value, adaptively adjusting a weight value corresponding to the current pixel value to be rendered.
5. The image interpolation method of claim 1, wherein in an initial state, the feature value of the rendered pixel is a pixel value of a first rendered pixel.
6. The image interpolation method according to claim 1, wherein the adjusting the interpolation parameter of the current pixel to be rendered according to the secondary judgment result further comprises: and enlarging interpolation parameters corresponding to the pixel to be rendered.
7. The image interpolation method according to claim 1, wherein the adjusting the interpolation parameter of the current pixel to be rendered according to the secondary judgment result further comprises: and when the pixel to be rendered is judged to be the noise point, eliminating the noise point.
8. An image interpolation apparatus, characterized by comprising:
the characteristic value calculation module is used for obtaining the characteristic value of the rendered pixel according to the rendered pixel in the direction orthogonal to the current sampling direction;
the weighting calculation module is used for determining weight values respectively corresponding to the characteristic values of the rendered pixels and the pixels to be rendered according to the characteristic values of the rendered pixels and the pixels to be rendered, which are obtained by the characteristic value calculation module, and obtaining orthogonal characteristic information in a direction orthogonal to the current sampling direction; wherein the orthogonal feature information is the weighted sum of the feature value of the rendered pixel and the pixel to be rendered;
and the secondary judgment module is suitable for carrying out secondary judgment on the current pixel to be rendered according to the calculation result of the weighted calculation module.
9. The image interpolation apparatus of claim 8, wherein the weight calculation module further includes setting weight values for the rendered pixel feature values and the pixels to be rendered, respectively.
10. The image interpolation apparatus of claim 8, wherein the weight calculation module further includes adaptively adjusting a weight value corresponding to the pixel value to be rendered according to a law of variation of the rendered pixel value with a position thereof.
11. The image interpolation apparatus of claim 8, wherein the pixel to be rendered has a corresponding weight value of 0.1-0.2.
12. The image interpolation apparatus according to claim 10, further comprising: and the interpolation module is suitable for determining interpolation parameters of the current pixel to be rendered according to the judging result of the secondary judging module.
13. A computer readable storage medium storing program data which, when run, is adapted to perform the method of any one of claims 1-7.
CN202410183980.7A 2024-02-19 2024-02-19 Image interpolation method, apparatus and readable storage medium Pending CN117745531A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410183980.7A CN117745531A (en) 2024-02-19 2024-02-19 Image interpolation method, apparatus and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410183980.7A CN117745531A (en) 2024-02-19 2024-02-19 Image interpolation method, apparatus and readable storage medium

Publications (1)

Publication Number Publication Date
CN117745531A true CN117745531A (en) 2024-03-22

Family

ID=90281684

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410183980.7A Pending CN117745531A (en) 2024-02-19 2024-02-19 Image interpolation method, apparatus and readable storage medium

Country Status (1)

Country Link
CN (1) CN117745531A (en)

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150235353A1 (en) * 2014-02-19 2015-08-20 Samsung Electronics Co., Ltd. Method and device for processing image data
CN104461440A (en) * 2014-12-31 2015-03-25 上海天马有机发光显示技术有限公司 Rendering method, rendering device and display device
US20170098432A1 (en) * 2015-10-05 2017-04-06 Lg Display Co., Ltd. Display device and image rendering method thereof
US20200034960A1 (en) * 2017-04-05 2020-01-30 Olympus Corporation Correlation value calculation device
US20190130210A1 (en) * 2017-10-30 2019-05-02 Imagination Technologies Limited Processing a Stream of Pixel Values in an Image Processing System Using Pixel Value Subset Groups
CN109993693A (en) * 2017-12-29 2019-07-09 澜至电子科技(成都)有限公司 Method and apparatus for carrying out interpolation to image
WO2020103036A1 (en) * 2018-11-21 2020-05-28 Boe Technology Group Co., Ltd. A method of real-time image processing based on rendering engine and a display apparatus
CN112508783A (en) * 2020-11-19 2021-03-16 西安全志科技有限公司 Image processing method based on directional interpolation, computer device and computer readable storage medium
CN114331832A (en) * 2021-11-26 2022-04-12 格兰菲智能科技有限公司 Image zooming method and device
WO2023169121A1 (en) * 2022-03-10 2023-09-14 腾讯科技(深圳)有限公司 Image processing method, game rendering method and apparatus, device, program product, and storage medium
WO2024027286A1 (en) * 2022-08-04 2024-02-08 荣耀终端有限公司 Rendering method and apparatus, and device and storage medium

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
VLAD-CRISTIAN MICLEA ET AL.: "New sub-pixel interpolation functions for accurate real-time stereo-matching algorithms", 2015 IEEE INTERNATIONAL CONFERENCE ON INTELLIGENT COMPUTER COMMUNICATION AND PROCESSING (ICCP), 2 November 2015 (2015-11-02), pages 173 - 178 *
周登文; 申晓留: "Edge-directed bicubic color image interpolation", Acta Automatica Sinica, no. 004, 31 December 2012 (2012-12-31), pages 525-530 *
张严辞; 龚昱宁: "Sketch rendering based on real-time principal direction estimation", Journal of Chongqing University of Technology (Natural Science), no. 09, 15 September 2020 (2020-09-15), pages 225-237 *
贾晓芬; 郭永存; 赵佰亭; 黄友锐: "Bayer-CFA interpolation method fusing finite differences and gradients", Journal of Xi'an Jiaotong University, no. 05, 20 February 2019 (2019-02-20), pages 138-147 *
齐敏; 程恭; 杜乾敏; 朱柏飞; 魏效昱: "A region-partitioned multi-directional data fusion image interpolation method", Journal of Data Acquisition and Processing, no. 01, 15 January 2016 (2016-01-15), pages 73-84 *

Similar Documents

Publication Publication Date Title
US20230214976A1 (en) Image fusion method and apparatus and training method and apparatus for image fusion model
CN107278314B (en) Device, mobile computing platform and method for denoising non-local mean image
US11467661B2 (en) Gaze-point determining method, contrast adjusting method, and contrast adjusting apparatus, virtual reality device and storage medium
EP3895118B1 (en) Systems and methods for noise reduction
WO2020199831A1 (en) Method for training image processing model, image processing method, network device, and storage medium
US20220270345A1 (en) Subject detection method and apparatus, electronic device, and computer-readable storage medium
CN108616700B (en) Image processing method and device, electronic equipment and computer readable storage medium
CN113095470B (en) Training method, image processing method and device for neural network and storage medium
CN113313661A (en) Image fusion method and device, electronic equipment and computer readable storage medium
CN113643214A (en) Image exposure correction method and system based on artificial intelligence
CN116863861B (en) Image processing method and device based on non-explicit point judgment and readable storage medium
US20170177977A1 (en) Control of Computer Vision Pre-Processing Based on Image Matching Using Structural Similarity
CN113096023A (en) Neural network training method, image processing method and device, and storage medium
CN107369157A (en) A kind of adaptive threshold Otsu image segmentation method and device
CN112884661A (en) Image processing apparatus, display apparatus, computer-readable storage medium, and image processing method
CN111369435A (en) Color image depth up-sampling method and system based on self-adaptive stable model
CN117745531A (en) Image interpolation method, apparatus and readable storage medium
CN116258653A (en) Low-light level image enhancement method and system based on deep learning
CN115942128A (en) ISP system design and implementation method based on heterogeneous platform
CN113222859B (en) Low-illumination image enhancement system and method based on logarithmic image processing model
CN110766153A (en) Neural network model training method and device and terminal equipment
CN117173056B (en) Image processing method, apparatus and readable storage medium for solving information loss
CN113379608A (en) Image processing method, storage medium and terminal equipment
CN113076966A (en) Image processing method and device, neural network training method and storage medium
WO2023028866A1 (en) Image processing method and apparatus, and vehicle

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination