CN105447819B - Image processing method and device


Info

Publication number: CN105447819B
Application number: CN201510884587.1A
Authority: CN (China)
Prior art keywords: edge direction, pixel point, interpolation, processed, original
Legal status: Active
Original language: Chinese (zh)
Other versions: CN105447819A
Inventor: 李凯 (Li Kai)
Original and current assignee: Tencent Technology Shenzhen Co Ltd
Application filed by Tencent Technology Shenzhen Co Ltd; priority to CN201510884587.1A; publication of CN105447819A; application granted; publication of CN105447819B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformation in the plane of the image
    • G06T3/40 Scaling the whole image or part thereof
    • G06T3/4007 Interpolation-based scaling, e.g. bilinear interpolation

Abstract

The invention discloses an image processing method and device, belonging to the technical field of image processing. The method comprises the following steps: acquiring the position coordinates of any interpolation pixel point to be processed on the amplified image; determining the position relation between the interpolation pixel point to be processed and a designated original pixel point according to the position coordinates; and calculating the actual pixel value of the interpolation pixel point to be processed by adopting the calculation mode corresponding to that position relation, wherein different position relations correspond to different calculation modes. When calculating the actual pixel value of each interpolation pixel point on the amplified image, the invention takes into account the directionality between interpolation pixel points and original pixel points and applies a different calculation mode according to the position relation between each interpolation pixel point and the original pixel point, so the sharpness of the image is markedly improved and the quality of the processed image is better.

Description

Image processing method and device
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to an image processing method and apparatus.
Background
In video calls, a transmitting end typically encodes video data before sending it to a receiving end. The encoding resolution is usually low, while the resolution of the receiving end's display screen is usually higher than the encoding resolution, so the receiving end must display low-resolution video data on a high-resolution display screen. To solve this problem, the receiving end may enlarge the image. At present, images are mainly amplified by using an interpolation algorithm to increase the number of pixel points.
However, in the process of amplifying the image, the gray values between original pixel points and interpolation pixel points are discontinuous and change abruptly, so the amplified image has blurred contours and textures and jagged edges. To improve the quality of the amplified image, the prior art generally adopts a bilinear interpolation algorithm to determine the actual pixel value of an interpolation pixel point. In a typical implementation, different weight values are set for the original pixel points according to their distance from the interpolation pixel point, and the actual pixel value of the interpolation pixel point is determined by a weighted operation on the original pixel points, so as to remove the jagged edges in the amplified image.
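As context for the weighted approach just described, here is a minimal sketch of standard distance-weighted bilinear interpolation (the textbook form, not the patent's method):

```python
def bilinear(p00, p10, p01, p11, dx, dy):
    # (dx, dy) is the fractional position of the interpolation pixel inside
    # the unit square whose four corners carry the original pixel values
    # p00 (top-left), p10 (top-right), p01 (bottom-left), p11 (bottom-right).
    top = p00 * (1 - dx) + p10 * dx      # blend along the top row
    bottom = p01 * (1 - dx) + p11 * dx   # blend along the bottom row
    return top * (1 - dy) + bottom * dy  # blend the two rows vertically
```

A pixel at the exact center of four originals gets their average, e.g. `bilinear(0, 4, 8, 12, 0.5, 0.5)` is 6.0; this uniform smoothing, applied identically to every interpolation pixel point, is precisely what blurs edges and motivates the direction-aware method of the invention.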
In the process of implementing the invention, the inventor finds that the prior art has at least the following problems:
because each interpolation pixel point is processed in the same way, the processed image has low sharpness, and the quality of the processed image is still poor.
Disclosure of Invention
In order to solve the problems of the related art, embodiments of the present invention provide an image processing method and apparatus. The technical scheme is as follows:
in one aspect, an image processing method is provided, and the method includes:
acquiring the position coordinates of any interpolation pixel point to be processed on the amplified image, wherein the interpolation pixel point to be processed is a pixel point generated when a designated original pixel point is amplified by a preset multiple;
determining the position relation between the interpolation pixel point to be processed and the designated original pixel point according to the position coordinates;
calculating the actual pixel value of the interpolation pixel point to be processed by adopting the calculation mode corresponding to the position relation between the interpolation pixel point to be processed and the designated original pixel point;
wherein different position relations correspond to different calculation modes.
In another aspect, there is provided an image processing apparatus, the apparatus including:
an acquisition module, configured to acquire the position coordinates of any interpolation pixel point to be processed on the amplified image, wherein the interpolation pixel point to be processed is a pixel point generated when a designated original pixel point is amplified by a preset multiple;
a determining module, configured to determine the position relation between the interpolation pixel point to be processed and the designated original pixel point according to the position coordinates;
a calculation module, configured to calculate the actual pixel value of the interpolation pixel point to be processed by adopting the calculation mode corresponding to the position relation between the interpolation pixel point to be processed and the designated original pixel point;
wherein different position relations correspond to different calculation modes.
The technical scheme provided by the embodiment of the invention has the following beneficial effects:
when the actual pixel value of each interpolation pixel point on the amplified image is calculated, the directionality between the interpolation pixel point and the original pixel points is taken into account, and a different calculation mode is adopted according to the position relation between each interpolation pixel point and the original pixel point, so the sharpness of the image is obviously improved and the quality of the processed image is better.
Drawings
To illustrate the technical solutions in the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a flow chart of a framework for magnifying an image based on a network chat system according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of an original image and a 2-fold enlarged image according to an embodiment of the present invention;
FIG. 3 is a flow chart of an image processing method according to another embodiment of the invention;
FIG. 4 is a flow chart of an image processing method according to another embodiment of the invention;
FIG. 5 is a schematic diagram of four types of interpolation pixel points according to another embodiment of the present invention;
fig. 6 is a schematic diagram of a plurality of original pixels selected by a second calculation method according to another embodiment of the present invention;
FIG. 7 is a diagram illustrating edge directions obtained by a second calculation method according to another embodiment of the present invention;
fig. 8 is a schematic diagram of a plurality of pixels selected by a third computing method according to another embodiment of the present invention;
fig. 9 is a schematic diagram of a plurality of pixels selected by a third computing method according to another embodiment of the present invention;
FIG. 10 is a diagram illustrating a plurality of original pixels selected by a third computing method according to another embodiment of the present invention;
FIG. 11 is an edge direction obtained by a third calculation method according to another embodiment of the present invention;
FIG. 12 is a flow diagram of a framework for an augmentation algorithm in network video chat provided by another embodiment of the present invention;
FIG. 13 is a flow diagram of a framework for an augmentation algorithm in network video chat provided by another embodiment of the present invention;
FIG. 14 is a diagram comparing the actual effect of two-times amplification using the prior art and using the present invention, according to another embodiment of the present invention;
FIG. 15 is a diagram of the actual effect of another embodiment of the present invention, enlarged twice using the present invention;
FIG. 16 is a diagram of the practical effect of using PhotoZoom to zoom two times, according to another embodiment of the present invention;
FIG. 17 is a diagram of the effect of GIMP using a bicubic interpolation algorithm, magnified by a factor of two, according to another embodiment of the present invention;
FIG. 18 is an original image provided by another embodiment of the present invention;
FIG. 19 is a diagram of the actual effect of the invention of enlarging the original image by two times;
FIG. 20 is a diagram of the practical effect of using PhotoZoom to magnify an original image by two times according to another embodiment of the present invention;
FIG. 21 is a diagram of the actual effect of GIMP using a bicubic interpolation algorithm to double the original image according to another embodiment of the present invention;
fig. 22 is a schematic structural diagram of an image processing apparatus according to another embodiment of the present invention;
fig. 23 is a schematic structural diagram of an image processing terminal according to another embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, embodiments of the present invention will be described in detail with reference to the accompanying drawings.
Fig. 1 is a flow chart of a framework for amplifying an image based on a network video chat system. In a video call scene, when a sending end and a receiving end communicate through a video chat application, the sending end starts a camera, captures images through it, encodes the captured images, and sends the encoded video data to the receiving end over the network. When network conditions are poor, the sending end usually lowers the video encoding resolution actively to ensure that the video call can proceed smoothly; for example, the encoding resolution may be 320 × 240 or 480 × 360 under normal network bandwidth, and 192 × 144 under extremely low network bandwidth. After receiving the video data, the receiving end decodes it at the same resolution at which it was encoded. Since the resolution of the receiving end's display screen is usually higher than the sending end's encoding resolution, the receiving end amplifies the decoded image in several stages based on interpolation so that it can be displayed on the screen. First, the receiving end amplifies the decoded image by 2 times using the video enhancement system, which quadruples the number of pixel points of the original image; as shown in fig. 2, the left image is the original low-resolution (LR) image and the right image is the high-resolution (HR) image amplified 2 times by the interpolation algorithm: the original image has 16 pixel points and the amplified image has 64. Then, the receiving end further amplifies the 2-times-amplified image by 2 to 6 times using an OpenGL amplification system. Finally, the receiving end displays the amplified image on a display screen with a resolution of 1920 × 1080.
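The scaling arithmetic above can be made explicit in a short sketch (the figures 16 → 64 come from the 4 × 4 patch of Fig. 2):

```python
def amplified_size(width, height, factor):
    # Amplifying by `factor` scales each linear dimension by `factor`,
    # so the total pixel count grows by factor ** 2.
    return width * factor, height * factor

# The 4 x 4 patch of Fig. 2 amplified 2 times becomes 8 x 8:
# 16 original pixel points become 64 pixel points on the amplified image.
```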
At present, images are amplified mainly by the nearest-neighbor interpolation algorithm, the bilinear interpolation algorithm, the bicubic interpolation algorithm and the like; among these, the bicubic interpolation algorithm has the best processing effect.
The nearest-neighbor interpolation algorithm is simple, easy to implement and very widely applicable, but the amplified image shows obvious jagged edges and mosaic artifacts.
The bilinear interpolation algorithm has a smoothing effect and can effectively overcome the defects of the nearest-neighbor interpolation algorithm, but it attenuates the high-frequency part of the image and blurs its details. At higher magnification factors, higher-order interpolation algorithms, such as the bicubic interpolation algorithm and the cubic spline interpolation algorithm, give better results. The bilinear interpolation algorithm makes the gray values of the interpolation pixel points generated by interpolation vary continuously with the gray values of the original image, so the gray transitions of the amplified image are natural and smooth. However, in the amplified image, abrupt changes in gray value still exist between some interpolation pixel points and the adjacent original pixel points, i.e., their gray values do not vary continuously. These interpolation pixel points with abrupt gray values are the edge pixel points describing the contours or textures of objects in the amplified image. Such pixel points with discontinuous gray-scale characteristics blur the contours and textures of the amplified image, and the image quality is poor.
The bicubic interpolation algorithm sets different weight values for the original pixel points according to their distance from the interpolation pixel point, performs a weighted operation on the original pixel points according to these weight values, and takes the result of the weighted operation as the actual pixel value of the interpolation pixel point. Although the bicubic interpolation algorithm can remove the jagged edges of the amplified image to a certain extent, the amplified image is still blurry and of poor quality, which affects the user's viewing.
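As a reference point, the prior-art bicubic scheme can be sketched in one dimension. The Catmull-Rom kernel (a = -0.5) used here is one common choice of bicubic weighting function, not necessarily the exact one the text refers to; the 2-D case applies the same weighting separably along rows and then columns.

```python
def cubic_weight(d, a=-0.5):
    # Distance-based bicubic weight (Catmull-Rom style kernel, a = -0.5).
    d = abs(d)
    if d < 1:
        return (a + 2) * d ** 3 - (a + 3) * d ** 2 + 1
    if d < 2:
        return a * d ** 3 - 5 * a * d ** 2 + 8 * a * d - 4 * a
    return 0.0

def bicubic_1d(samples, t):
    # Interpolate between samples[1] and samples[2] at fraction t (0..1),
    # weighting the four consecutive original pixel values in `samples`
    # by their distance to the interpolation position.
    distances = [t + 1, t, 1 - t, 2 - t]
    weights = [cubic_weight(d) for d in distances]
    return sum(w * s for w, s in zip(weights, samples))
```

The weights depend only on distance, not on edge direction, which is why bicubic results still look blurry across contours.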
The above briefly describes several interpolation algorithms for image amplification. These algorithms process the image in two steps: first, the interpolation pixel points located at the upper-left and lower-right corners of the amplified image are processed; second, after the first step, the interpolation pixel points at the upper-right or lower-left corners are processed. This processing mode cannot operate on the amplified image in parallel and requires double buffering, so its timeliness is poor and it is not suitable for mobile devices with high real-time requirements. In addition, this processing mode gives little or no consideration to the trade-off between real-time performance and visual effect, or to the differing processing capabilities of different GPUs (Graphics Processing Units), which would allow the best effect to be obtained on devices of different performance.
The embodiment of the invention provides an image processing method, which comprehensively considers the position relationship between an interpolation pixel point and an original pixel point, calculates the actual pixel value of the interpolation pixel point by adopting different calculation modes based on the position relationship between the interpolation pixel point and the original pixel point, and improves the definition of an image while removing sawteeth on an amplified image. Referring to fig. 3, the method flow provided by this embodiment includes:
301. Obtain the position coordinates of any interpolation pixel point to be processed on the amplified image, where the interpolation pixel point to be processed is a pixel point generated when a designated original pixel point is amplified by a preset multiple.
302. Determine the position relation between the interpolation pixel point to be processed and the designated original pixel point according to the position coordinates.
303. Calculate the actual pixel value of the interpolation pixel point to be processed by adopting the calculation mode corresponding to the position relation between the interpolation pixel point to be processed and the designated original pixel point.
Wherein the different positional relationships correspond to different calculation manners.
According to the method provided by the embodiment of the invention, when the actual pixel value of each interpolation pixel point on the amplified image is calculated, the directionality between the interpolation pixel point and the original pixel point is comprehensively considered, and different calculation modes are adopted for calculation according to the position relation between each interpolation pixel point and the original pixel point, so that the definition of the image is obviously improved, and the quality of the processed image is better.
In another embodiment of the present invention, calculating an actual pixel value of an interpolation pixel point to be processed by using a calculation method corresponding to a position relationship according to the position relationship between the interpolation pixel point to be processed and a designated original pixel point, includes:
if the interpolation pixel point to be processed is located in the first position region of the designated original pixel point, calculating the actual pixel value of the interpolation pixel point to be processed by adopting a first calculation mode;
if the interpolation pixel point to be processed is located in the second position region of the designated original pixel point, calculating the actual pixel value of the interpolation pixel point to be processed by adopting a second calculation mode;
and if the interpolation pixel point to be processed is located in the third position region of the designated original pixel point, calculating the actual pixel value of the interpolation pixel point to be processed by adopting a third calculation mode.
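For a 2x amplification, the three position relations can plausibly be read off the coordinate parity of the amplified grid. This parity mapping is an assumption consistent with the four pixel types of Fig. 5, not a quotation of the patent:

```python
def position_relation(x, y):
    # Assumed mapping for a 2x-amplified grid: even/even sites coincide
    # with original pixel points; odd/odd sites lie diagonally between
    # four originals; the remaining sites lie between two originals.
    if x % 2 == 0 and y % 2 == 0:
        return 'first'    # first position region -> first calculation mode
    if x % 2 == 1 and y % 2 == 1:
        return 'second'   # second position region -> second calculation mode
    return 'third'        # third position region -> third calculation mode
```

Because the mode is decided per pixel from its own coordinates, every interpolation pixel point can be computed independently, which is what permits the parallel, single-pass processing the two-step prior art lacks.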
In another embodiment of the present invention, if the interpolation pixel to be processed is located in the first position region of the designated original pixel, calculating the actual pixel value of the interpolation pixel to be processed by using a first calculation method, including:
acquiring the pixel value of the original pixel point closest to the interpolation pixel point to be processed, and taking that pixel value as the actual pixel value of the interpolation pixel point to be processed; or
and acquiring pixel values of two original pixel points which are closest to the interpolation pixel point to be processed, and performing interpolation calculation on the pixel values of the two original pixel points by adopting a bilinear interpolation algorithm to obtain an actual pixel value of the interpolation pixel point to be processed.
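The two variants of the first calculation mode reduce to a copy or a two-point blend; a sketch (with two points, the "bilinear" case degenerates to 1-D linear interpolation, and `frac` is the fractional distance toward the second point, an illustrative parameter):

```python
def calc_first(nearest_values, frac=0.5):
    # Variant 1: copy the single nearest original pixel value.
    if len(nearest_values) == 1:
        return nearest_values[0]
    # Variant 2: linearly blend the two nearest original pixel values.
    a, b = nearest_values
    return a * (1 - frac) + b * frac
```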
In another embodiment of the present invention, if the interpolation pixel to be processed is located in the second position region of the designated original pixel, calculating the actual pixel value of the interpolation pixel to be processed by using a second calculation method, including:
acquiring a first edge direction and a second edge direction of an interpolation pixel point to be processed;
selecting a plurality of original pixel points on the enlarged image by taking an interpolation pixel point to be processed as a center;
determining the strength values of interpolation pixel points to be processed in a first edge direction and a second edge direction according to a plurality of original pixel points;
judging whether the interpolation pixel point to be processed has a first strong edge direction or not according to the strength values of the interpolation pixel point to be processed in the first edge direction and the second edge direction;
if the interpolation pixel point to be processed has a first strong edge direction, acquiring an original pixel point positioned in the first strong edge direction from a plurality of original pixel points, and performing interpolation calculation on the original pixel point in the first strong edge direction to obtain an actual pixel value of the interpolation pixel point to be processed;
if the interpolation pixel point to be processed does not have the first strong edge direction, the original pixel points in the main direction of the first edge direction and the main direction of the second edge direction are obtained from the plurality of original pixel points, interpolation calculation is respectively carried out on the original pixel points in the main direction of the first edge direction and the main direction of the second edge direction, interpolation calculation results in the main direction of the first edge direction and the main direction of the second edge direction are obtained, weighting calculation is carried out on the interpolation calculation results in the main direction of the first edge direction and the main direction of the second edge direction, and the actual pixel value of the interpolation pixel point to be processed is obtained.
In another embodiment of the present invention, determining the strength values of the interpolation pixel to be processed in the first edge direction and the second edge direction according to the plurality of original pixels includes:
respectively obtaining color values of two adjacent original pixel points in a first edge direction and a second edge direction;
obtaining a difference value between color values of two adjacent original pixel points in a first edge direction and a second edge direction;
acquiring an absolute value of a difference value between color values of two adjacent original pixel points in a first edge direction and a second edge direction;
and taking the sum of the absolute values of the differences between the color values of two adjacent original pixels in the first edge direction as the intensity value in the first edge direction, and taking the sum of the absolute values of the differences between the color values of two adjacent original pixels in the second edge direction as the intensity value in the second edge direction.
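The four steps above amount to a sum of absolute colour differences along each candidate direction. A sketch, with each sample list standing in for the colour values of adjacent original pixel points read along one direction:

```python
def edge_strengths(samples_dir1, samples_dir2):
    # Intensity value of an edge direction: sum of the absolute values of
    # the differences between the colour values of each pair of adjacent
    # pixels sampled along that direction.
    def strength(samples):
        return sum(abs(b - a) for a, b in zip(samples, samples[1:]))
    return strength(samples_dir1), strength(samples_dir2)
```

A direction that crosses an edge accumulates large differences; a direction running along the edge stays near zero, e.g. `edge_strengths([10, 10, 10], [0, 10, 0])` gives `(0, 20)`.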
In another embodiment of the present invention, determining whether the interpolation pixel point to be processed has a first strong edge direction according to the intensity values of the interpolation pixel point to be processed in the first edge direction and the second edge direction includes:
obtaining the ratio of the intensity values in the first edge direction and the second edge direction;
and judging whether the interpolation pixel point to be processed has a first strong edge direction or not according to the ratio of the intensity values in the first edge direction and the second edge direction.
In another embodiment of the present invention, the ratio of the intensity values in the first edge direction and the second edge direction is obtained as:
R1 = (1 + G1) / (1 + G2)
where R1 is the ratio of the intensity values in the first edge direction and the second edge direction, G1 is the intensity value in the first edge direction, and G2 is the intensity value in the second edge direction.
In another embodiment of the present invention, determining whether the interpolation pixel to be processed has the first strong edge direction according to the ratio of the intensity values in the first edge direction and the second edge direction includes:
if the ratio of the intensity values in the first edge direction and the second edge direction is not equal to a first preset threshold value, determining that a first strong edge direction exists in the interpolation pixel point to be processed;
and if the ratio of the intensity values in the first edge direction and the second edge direction is equal to a first preset threshold value, determining that the interpolation pixel point to be processed does not have a first strong edge direction.
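The decision rule can be sketched as follows. The threshold value of 1 and the convention that the direction with the smaller intensity value (less colour variation, i.e. running along the edge) is taken as the strong edge direction are assumptions, since the text only states that the ratio is compared against a preset threshold:

```python
def strong_edge_direction(g1, g2, threshold=1.0):
    # R1 = (1 + G1) / (1 + G2); the +1 terms keep the ratio defined when
    # an intensity value is zero.  Ratio equal to the preset threshold ->
    # no strong edge direction; otherwise the direction with less
    # variation is taken as the strong edge direction (assumed convention).
    r1 = (1 + g1) / (1 + g2)
    if r1 == threshold:
        return None
    return 'first' if r1 < threshold else 'second'
```

In a floating-point implementation, a tolerance band around the threshold would normally replace the exact equality test.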
In another embodiment of the present invention, the actual pixel value of the interpolation pixel point to be processed, obtained by weighting the interpolation calculation results in the main direction of the first edge direction and the main direction of the second edge direction, is:
P1 = (w1 * value1 + w2 * value2) / (w1 + w2)
[the equation images defining the weights w1 and w2 are not reproduced in this copy]
where P1 is the actual pixel value of the interpolation pixel point to be processed, w1 is the weight value corresponding to the interpolation calculation result in the main direction of the first edge direction, value1 is the interpolation calculation result in the main direction of the first edge direction, w2 is the weight value corresponding to the interpolation calculation result in the main direction of the second edge direction, value2 is the interpolation calculation result in the main direction of the second edge direction, and k is a specified parameter.
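When no strong edge exists, the two directional results are blended as P1 = (w1 * value1 + w2 * value2) / (w1 + w2). The weight formulas themselves survive only as equation images in this copy; the form w_i = 1 / (1 + G_i) ** k used below (larger weight for the direction with less variation, as in directional cubic convolution interpolation) is an assumption:

```python
def blend_directions(value1, value2, g1, g2, k=5):
    # Assumed weight form: a direction with a small intensity value
    # (smooth along the edge) receives a large weight; k plays the role
    # of the "specified parameter" in the text, value 5 chosen for
    # illustration.
    w1 = 1.0 / (1.0 + g1) ** k
    w2 = 1.0 / (1.0 + g2) ** k
    return (w1 * value1 + w2 * value2) / (w1 + w2)
```

With equal intensity values the blend reduces to a plain average; as one direction's intensity grows, the result converges to the other direction's interpolation result.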
In another embodiment of the present invention, if the interpolation pixel to be processed is located in the third position region of the designated original pixel, calculating the actual pixel value of the interpolation pixel to be processed by using a third calculation method, including:
acquiring a third edge direction and a fourth edge direction of an interpolation pixel point to be processed;
selecting a plurality of pixel points on the enlarged image by taking an interpolation pixel point to be processed as a center, wherein the pixel points comprise an original pixel point and an interpolation pixel point;
determining the intensity values of the interpolation pixel points to be processed in the third edge direction and the fourth edge direction according to the plurality of pixel points;
judging whether the interpolation pixel point to be processed has a second strong edge direction or not according to the intensity values of the interpolation pixel point to be processed in the third edge direction and the fourth edge direction;
if the interpolation pixel point to be processed has a second strong edge direction, acquiring a pixel point in the second strong edge direction according to a plurality of pixel points, and performing interpolation calculation on the pixel point in the second strong edge direction to obtain an actual pixel value of the interpolation pixel point to be processed;
if the interpolation pixel point to be processed does not have the second strong edge direction, the pixel points in the main direction of the third edge direction and the pixel points in the main direction of the fourth edge direction are obtained according to the plurality of pixel points, interpolation calculation is respectively carried out on the pixel points in the main direction of the third edge direction and the pixel points in the main direction of the fourth edge direction, interpolation calculation results in the main direction of the third edge direction and the main direction of the fourth edge direction are obtained, weighting calculation is carried out on the interpolation calculation results in the main direction of the third edge direction and the main direction of the fourth edge direction, and the actual pixel value of the interpolation pixel point to be processed is obtained.
In another embodiment of the present invention, determining the intensity values of the interpolation pixel to be processed in the third edge direction and the fourth edge direction according to the plurality of pixel points includes:
respectively obtaining color values of two adjacent original pixel points and interpolation pixel points in the third edge direction and the fourth edge direction;
obtaining a difference value between the color values of each pair of adjacent pixel points (original pixel points or interpolation pixel points) in the third edge direction and the fourth edge direction;
acquiring the absolute value of the difference between the color values of each pair of adjacent pixel points in the third edge direction and the fourth edge direction;
and taking the sum of the absolute values of the differences between the color values of adjacent pixel points in the third edge direction as the intensity value in the third edge direction, and the sum of the absolute values of the differences between the color values of adjacent pixel points in the fourth edge direction as the intensity value in the fourth edge direction.
In another embodiment of the present invention, determining whether the interpolation pixel point to be processed has a second strong edge direction according to the intensity values of the interpolation pixel point to be processed in the third edge direction and the fourth edge direction includes:
obtaining the ratio of the intensity values in the third edge direction and the fourth edge direction;
and judging whether the interpolation pixel point to be processed has a second strong edge direction or not according to the ratio of the intensity values in the third edge direction and the fourth edge direction.
In another embodiment of the present invention, the ratio of the intensity values in the third edge direction and the fourth edge direction is obtained as:
R2 = (1 + G3) / (1 + G4)
where R2 is the ratio of the intensity values in the third edge direction and the fourth edge direction, G3 is the intensity value in the third edge direction, and G4 is the intensity value in the fourth edge direction.
In another embodiment of the present invention, determining whether the interpolation pixel point to be processed has the second strong edge direction according to the ratio of the intensity values in the third edge direction and the fourth edge direction includes:
if the ratio of the intensity values in the third edge direction and the fourth edge direction is not equal to a second preset threshold value, determining that a second strong edge direction exists in the interpolation pixel point to be processed;
and if the ratio of the intensity values in the third edge direction and the fourth edge direction is equal to a second preset threshold value, determining that the interpolation pixel point to be processed does not have a second strong edge direction.
In another embodiment of the present invention, obtaining a pixel point in the second strong edge direction according to a plurality of pixel points includes:
if the original pixel points exist in the second strong edge direction of the plurality of pixel points, the original pixel points in the second strong edge direction are obtained;
and if the original pixel points do not exist in the second strong edge direction of the plurality of pixel points, obtaining the interpolation pixel points in the second strong edge direction.
In another embodiment of the present invention, a weighting operation is performed on the interpolation calculation results in the main direction of the third edge direction and the main direction of the fourth edge direction, and the obtained actual pixel value of the interpolation pixel point to be processed is:
P2=(w3*value3+w4*value4)/(w3+w4);
(The weight values w3 and w4 are given by a formula, shown as image Figure BDA0000867972030000121, in terms of the intensity values and a specified parameter k.)
wherein P2 is the actual pixel value of the interpolation pixel point to be processed, w3 is the weight value corresponding to the interpolation calculation result in the main direction of the third edge direction, value3 is the interpolation calculation result in the main direction of the third edge direction, w4 is the weight value corresponding to the interpolation calculation result in the main direction of the fourth edge direction, value4 is the interpolation calculation result in the main direction of the fourth edge direction, and k is a specified parameter.
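As a rough sketch of this weighting step: the patent gives the weight definition only as an image, so the inverse-power weights below (controlled by the specified parameter k) are an assumption, not the patent's formula:

```python
def blend_interpolation(value3, value4, g3, g4, k=5):
    # Assumed weight definition: a direction whose intensity value is
    # smaller (color changes less along it) receives a larger weight.
    w3 = 1.0 / (1.0 + g3) ** k
    w4 = 1.0 / (1.0 + g4) ** k
    # P2 = (w3*value3 + w4*value4) / (w3 + w4), as in the formula above.
    return (w3 * value3 + w4 * value4) / (w3 + w4)
```

With equal intensity values the result is the plain average of the two interpolation calculation results; as one direction's intensity value shrinks, the result moves toward that direction's interpolation calculation result.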
In another embodiment of the present invention, the first position region is an upper left corner of the designated original pixel point, the second position region is a lower right corner of the designated original pixel point, and the third position region is an upper right corner or a lower left corner of the designated original pixel point.
In another embodiment of the present invention, the first edge direction is a 45 degree direction, the second edge direction is a 135 degree direction, the third edge direction is a 0 degree direction, and the fourth edge direction is a 90 degree direction.
All the above-mentioned optional technical solutions can be combined arbitrarily to form the optional embodiments of the present invention, and are not described herein again.
An embodiment of the present invention provides an image processing method, and referring to fig. 4, a flow of the method provided in this embodiment includes:
401. and in the process of carrying out video call with the sending end, the receiving end carries out amplification processing on the original image.
In this embodiment, a video chat application is installed in the receiving end, and based on the installed video chat application, the receiving end may receive video data sent by the sending end, decode the received video data, and display the decoded video data, that is, the original image in this embodiment, on the display screen in real time. The receiving end may be a smart phone, a notebook computer, a desktop computer, a tablet computer, or the like, and the embodiment does not specifically limit the type of the receiving end.
Because the video encoding rate of the video data sent by the sending end is low, and the resolution of the display screen of the receiving end is high, in order to display the image with low resolution on the display screen with high resolution, the receiving end adopts an interpolation algorithm to amplify the original image to obtain an amplified image.
Because the magnification ratio of the enlarged image is high, jagged edges on the enlarged image are obvious and the image quality is poor. In order to improve the quality of the enlarged image, the present embodiment processes the enlarged image using the following steps 402 to 404.
402. The receiving end obtains the position coordinates of any interpolation pixel point to be processed on the amplified image.
The enlarged image is an image obtained by enlarging an original pixel point on the original image by a preset multiple by using an interpolation algorithm, where the preset multiple may be 2 times, 4 times, 8 times, and the like, and in this embodiment, the preset multiple is 2 times, that is, the image after 2 times of enlargement is processed as an example. For any appointed pixel point on the original image, the interpolation pixel point to be processed is the pixel point generated when the appointed original pixel point is amplified by 2 times.
When the receiving end acquires the position coordinate of any interpolation pixel point to be processed on the amplified image, a rectangular coordinate system can be constructed by taking the length of the amplified image as an X axis and the width of the amplified image as a Y axis based on the amplified image, and the position coordinate of the interpolation pixel point to be processed in the constructed rectangular coordinate system can be acquired. Of course, the receiving end may also construct a rectangular coordinate system based on each original pixel point, with each original pixel point as an origin, with the right direction as the positive direction of the X axis, with the upward direction as the positive direction of the Y axis, and obtain the position coordinates of the interpolation pixel points to be processed in the constructed rectangular coordinate system.
403. And the receiving end determines the position relation between the interpolation pixel point to be processed and the designated original pixel point according to the position coordinates.
In the actual processing process, the actual pixel values of the interpolation pixel points obtained after the original pixel points are amplified by 2 times are different, and in order to accurately calculate the actual pixel value of each interpolation pixel point, the interpolation pixel points generated after the original pixel points are amplified by 2 times are divided into four types in the embodiment. As shown in fig. 5, the left black ball represents the designated original pixel, and the right four white balls represent the interpolated pixel generated by amplifying the designated original pixel by 2 times. As can be seen from fig. 5, the positional relationship between the interpolation pixel point and the designated original pixel point is that the interpolation pixel point is located at the upper left corner of the designated original pixel point, the interpolation pixel point is located at the lower right corner of the designated original pixel point, the interpolation pixel point is located at the upper right corner of the designated original pixel point, the interpolation pixel point is located at the lower left corner of the designated original pixel point, and the like.
In this embodiment, the position relationship between the to-be-processed interpolation pixel point and the designated original pixel point includes that the to-be-processed interpolation pixel point is located in a first position region of the designated original pixel point, the to-be-processed interpolation pixel point is located in a second position region of the designated original pixel point, the to-be-processed interpolation pixel point is located in a third position region of the designated original pixel point, and the like. Based on the above position relationship between the interpolation pixel point and the designated original pixel point in fig. 5, in this embodiment, the first position region may be the upper left corner of the designated original pixel point, the second position region may be the lower right corner of the designated original pixel point, and the third position region may be the upper right corner or the lower left corner of the designated original pixel point.
In this embodiment, for two construction methods of the rectangular coordinate system, when the receiving end determines the position relationship between the interpolation pixel point to be processed and the designated original pixel point according to the position coordinate of the interpolation pixel point to be processed, the following two methods may be adopted:
the first mode is as follows: if a rectangular coordinate system is constructed by taking the length of the amplified image as an X axis and the width of the amplified image as a Y axis, the position relationship between the interpolation pixel point to be processed and the designated original pixel point can be determined directly according to the position coordinates of the interpolation pixel point to be processed and the designated original pixel point.
For example, in a rectangular coordinate system in which the length of the enlarged image is taken as the X axis and the width of the enlarged image is taken as the Y axis, the position coordinates at which the designated original pixel point is acquired are set to (2.3, 1.8). If the position coordinates of the interpolation pixel points to be processed are (2, 2), determining that the interpolation pixel points to be processed are positioned at the upper left corner of the appointed original pixel points, namely a first position area, according to the position coordinates of the interpolation pixel points to be processed and the appointed original pixel points; if the position coordinates of the interpolation pixel points to be processed are (2.5, 2), determining that the interpolation pixel points to be processed are positioned at the upper right corner of the appointed original pixel points, namely a third position area, according to the position coordinates of the interpolation pixel points to be processed and the appointed original pixel points; if the position coordinate of the interpolation pixel point to be processed is (2, 1), determining that the interpolation pixel point is positioned at the lower left corner of the appointed original pixel point, namely a third position area, according to the position coordinate between the interpolation pixel point to be processed and the appointed original pixel point; if the position coordinate of the interpolation pixel point to be processed is (2.5, 1), the interpolation pixel point can be determined to be positioned at the lower right corner of the appointed original pixel point, namely the second position area, according to the position coordinate between the interpolation pixel point to be processed and the appointed original pixel point.
The second mode is as follows: and if the rectangular coordinate system is constructed by taking the appointed original pixel points as the original points, determining the position relation between the interpolation pixel points to be processed and the appointed original pixel points according to the quadrant of the interpolation pixel points to be processed in the rectangular coordinate system. When the interpolation pixel point to be processed is positioned in the first quadrant, determining that the interpolation pixel point to be processed is positioned in a third position area of the appointed original pixel point; when the interpolation pixel point to be processed is positioned in the second quadrant, determining that the interpolation pixel point to be processed is positioned in the first position area of the appointed original pixel point; when the interpolation pixel point to be processed is positioned in the third quadrant, determining that the interpolation pixel point to be processed is positioned in the third position area of the appointed original pixel point; and when the interpolation pixel point to be processed is positioned in the fourth quadrant, determining that the interpolation pixel point to be processed is positioned in the second position area of the appointed original pixel point.
For example, in a rectangular coordinate system constructed with the designated original pixel point as the origin, the position coordinates of the designated original pixel point are (0, 0). If the position coordinate of the interpolation pixel point to be processed is obtained as (-1, 1), the interpolation pixel point to be processed is positioned in the second quadrant, so that the interpolation pixel to be processed can be determined to be positioned at the upper left corner of the appointed original pixel point, namely a first position area; if the position coordinate of the interpolation pixel point to be processed is (1, 1), the interpolation pixel point to be processed can be determined to be positioned at the upper right corner of the appointed original pixel point, namely a third position area, because the interpolation pixel point to be processed is positioned in the first quadrant; if the position coordinate of the interpolation pixel point to be processed is obtained as (-1, -1), the interpolation pixel point to be processed is positioned in the third quadrant, so that the interpolation pixel point to be processed can be determined to be positioned in the lower left corner of the appointed original pixel point, namely the third position area; if the position coordinate of the interpolation pixel point to be processed is obtained as (1, -1), the interpolation pixel point to be processed is located in the fourth quadrant, and the interpolation pixel point to be processed can be determined to be located in the lower right corner of the appointed original pixel point, namely the second position area.
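The quadrant test of the second mode can be sketched as follows (Python, with the designated original pixel point at the origin and dx, dy the signed offsets of the interpolation pixel point to be processed; the function name is illustrative):

```python
def position_region(dx, dy):
    if dx < 0 and dy > 0:      # second quadrant: upper left
        return "first position region"
    if dx > 0 and dy < 0:      # fourth quadrant: lower right
        return "second position region"
    # first or third quadrant: upper right or lower left
    return "third position region"
```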
404. And the receiving end calculates the actual pixel value of the interpolation pixel point to be processed by adopting a calculation mode corresponding to the position relation according to the position relation between the interpolation pixel point to be processed and the appointed original pixel point.
Because the properties of the interpolation pixel points located in different position areas of the designated original pixel point are different, in order to improve the accuracy of the pixel value of the interpolation pixel point to be processed, the method provided by the embodiment calculates the actual pixel value of the interpolation pixel point to be processed by adopting different calculation methods according to the position relationship between the interpolation pixel point to be processed and the designated original pixel point. In the specific calculation, see steps 4041-4043:
4041. and if the interpolation pixel point to be processed is located in the first position area of the appointed original pixel point, the receiving end calculates the actual pixel value of the interpolation pixel point to be processed by adopting a first calculation mode.
4042. And if the interpolation pixel point to be processed is located in the second position area of the appointed original pixel point, the receiving end calculates the actual pixel value of the interpolation pixel point to be processed by adopting a second calculation mode.
4043. And if the interpolation pixel point to be processed is located in the third position area of the appointed original pixel point, the receiving end calculates the actual pixel value of the interpolation pixel point to be processed by adopting a third calculation mode.
The three cases 4041 to 4043 will be described below.
4041. When the interpolation pixel point to be processed is located in the first position area of the appointed original pixel point, the receiving end can obtain the pixel value of the original pixel point which is closest to the interpolation pixel point to be processed, and the pixel value of the original pixel point is used as the actual pixel value of the interpolation pixel point to be processed. For example, 4 original pixels are arranged around the interpolation pixel to be processed, which are an original pixel a, an original pixel B, an original pixel C, and an original pixel D, respectively, where a distance between the original pixel a and the interpolation pixel to be processed is 0.5, a distance between the original pixel B and the interpolation pixel to be processed is 1, a distance between the original pixel C and the interpolation pixel to be processed is 0.25, and a distance between the original pixel D and the interpolation pixel to be processed is 1.25, and since the distance between the original pixel C and the interpolation pixel to be processed is the closest, the receiving end can use the pixel value of the original pixel C as the actual pixel value of the interpolation pixel to be processed.
Besides the above method, the receiving end can also obtain the pixel values of two original pixel points closest to the interpolation pixel point to be processed, and perform interpolation calculation on the pixel values of the two original pixel points by using a bilinear interpolation algorithm to obtain the actual pixel value of the interpolation pixel point to be processed. For example, if the receiving end obtains that the pixel values of two original pixel points closest to the interpolation pixel point to be processed are 124 and 144, respectively, and performs interpolation calculation on the two original pixel points by using a bilinear interpolation algorithm, and the obtained pixel value is 130, it may be determined that the actual pixel value of the interpolation pixel point to be processed is 130.
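The two first-position-region strategies just described — taking the nearest original pixel point's value, or interpolating between the two closest original pixel points — can be sketched as follows (Python; the sample distances and pixel values are illustrative, not from the patent):

```python
def nearest_pixel_value(candidates):
    # candidates: (distance, pixel_value) pairs for the surrounding
    # original pixel points; take the value of the closest one.
    return min(candidates)[1]

def two_point_interpolation(v1, v2, t):
    # Linear interpolation between the two closest original pixel values;
    # t is the normalized distance from the first point (0 <= t <= 1).
    return (1 - t) * v1 + t * v2

# Four surrounding original pixel points A, B, C, D as in the example
# above, with hypothetical pixel values; C (distance 0.25) is closest.
value = nearest_pixel_value([(0.5, 110), (1.0, 90), (0.25, 130), (1.25, 70)])
```

For instance, interpolating the pixel values 124 and 144 at a normalized distance of 0.3 reproduces the value 130 of the example above.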
4042. When the interpolation pixel point to be processed is located in the second position area of the designated original pixel point, the receiving end can calculate the actual pixel value of the interpolation pixel point to be processed by adopting the following steps 40421-40426:
40421. the receiving end obtains a first edge direction and a second edge direction of an interpolation pixel point to be processed.
In order to improve the quality and definition of the enlarged image, the method provided by this embodiment considers the directions of the interpolation pixel point and the original pixel points when calculating the actual pixel value of the interpolation pixel point, and selects the edge directions of the interpolation pixel point based on the direction between the interpolation pixel point and the original pixel points. Because the number of interpolation pixel points is large and the data processing capability of the receiving end is limited, this embodiment usually selects only two orthogonal edge directions of the interpolation pixel point to be processed, rather than all edge directions of the interpolation pixel point to be processed.
When the interpolation pixel point to be processed is located in the second position region of the designated original pixel point, the interpolation pixel point to be processed is located in the first edge direction and the second edge direction of the designated original pixel point, and only the gradients in the first edge direction and the second edge direction are calculated in this embodiment. The first edge direction may be a 30 degree direction, a 45 degree direction, etc., and the second edge direction may be a 120 degree direction, a 135 degree direction, etc., perpendicular to the first edge direction, and the present embodiment will be described by taking the first edge direction as the 45 degree direction and the second edge direction as the 135 degree direction as an example.
It should be noted that the first edge direction is a set of parallel lines forming an angle of 45 degrees with the horizontal direction, and the second edge direction is a set of parallel lines forming an angle of 135 degrees with the horizontal direction, in this embodiment, the first edge direction passing through the to-be-processed interpolation pixel is referred to as a main direction of the first edge direction, and the second edge direction passing through the to-be-processed interpolation pixel is referred to as a main direction of the second edge direction.
40422. The receiving end selects a plurality of original pixel points on the enlarged image by taking the interpolation pixel points to be processed as the center.
The number of the original pixel points selected on the enlarged image may be 10, 20, 30, etc.; in this embodiment, 16 original pixel points are preferably selected. After selecting the original pixel points on the enlarged image, the receiving end numbers them sequentially around the interpolation pixel point to be processed, from top to bottom and from left to right. Referring to fig. 6, the gray ball is the interpolation pixel point to be processed, the black balls are original pixel points, and the white balls are interpolation pixel points. The receiving end numbers the 16 selected original pixel points sequentially from top to bottom and from left to right as 0, 1, 2, ..., 14, 15 (hereinafter denoted p0, p1, p2, ..., p15).
Based on the plurality of original pixel points shown in fig. 6, the direction of the connection line of the No. 3 black ball, the No. 6 black ball, the No. 9 black ball, and the No. 12 black ball is the main direction of the first edge direction, and the direction of the connection line of the No. 0 black ball, the No. 5 black ball, the interpolation pixel point to be processed, the No. 10 black ball, and the No. 15 black ball is the main direction of the second edge direction.
40423. And the receiving end determines the intensity values of the interpolation pixel point to be processed in the first edge direction and the second edge direction according to the plurality of original pixel points.
In order to make the calculated intensity values in the first edge direction and the second edge direction more accurate, when the receiving end calculates the intensity value of the interpolation pixel point to be processed in the first edge direction according to the plurality of original pixel points, the receiving end does not only calculate according to the original pixel point in the main direction of the first edge direction, but also calculates according to other original pixel points in the first edge direction; similarly, when the receiving end calculates the intensity value of the interpolation pixel point to be processed in the second edge direction according to the plurality of original pixel points, the receiving end does not only perform calculation according to the original pixel point located in the main direction of the second edge direction, but also performs calculation according to other original pixel points located in the second edge direction. The specific calculation process can be seen in the following steps (a) to (d):
(a) and the receiving end respectively acquires the color values of two adjacent original pixel points in the first edge direction and the second edge direction.
Two original pixel points adjacent to each other in the first edge direction are two original pixel points closest to each other in the first edge direction, and referring to fig. 7, the two original pixel points adjacent to each other in the first edge direction include a number 1 original pixel point and a number 4 original pixel point, a number 2 original pixel point and a number 5 original pixel point, a number 5 original pixel point and a number 8 original pixel point, a number 3 original pixel point and a number 6 original pixel point, a number 6 original pixel point and a number 9 original pixel point, a number 9 original pixel point and a number 12 original pixel point, a number 7 original pixel point and a number 10 original pixel point, a number 10 original pixel point and a number 13 original pixel point, a number 11 original pixel point and a number 14 original pixel point; two original pixel points adjacent to each other in the second edge direction are two original pixel points closest to each other in the second edge direction, and referring to fig. 7, the two adjacent original pixel points in the second edge direction include No. 2 original pixel point and No. 7 original pixel point, No. 1 original pixel point and No. 6 original pixel point, No. 6 original pixel point and No. 11 original pixel point, No. 0 original pixel point and No. 5 original pixel point, No. 5 original pixel point and No. 10 original pixel point, No. 10 original pixel point and No. 15 original pixel point, No. 4 original pixel point and No. 9 original pixel point, No. 9 original pixel point and No. 14 original pixel point, No. 8 original pixel point and No. 13 original pixel point.
When the receiving end obtains the color values of two adjacent original pixel points in the first edge direction and the second edge direction, the color values can be obtained based on the color code table. Taking the No. 0 original pixel point as an example, if the obtained color code of the No. 0 original pixel point is #FFB6C1, it can be known that the color values of the three colors red, green and blue in the No. 0 original pixel point are 255, 182 and 193, respectively; if the obtained color code of the No. 0 original pixel point is #FFC0CB, it can be known that the color values of the three colors red, green and blue in the No. 0 original pixel point are 255, 192 and 203, respectively.
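Splitting a color code into its red, green and blue color values can be done directly from the hex digits, without a stored table (Python sketch):

```python
def hex_to_rgb(code):
    # '#FFB6C1' -> (255, 182, 193): each pair of hex digits is one channel.
    code = code.lstrip('#')
    return tuple(int(code[i:i + 2], 16) for i in (0, 2, 4))
```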
(b) And the receiving end acquires the difference value between the color values of two adjacent original pixel points in the first edge direction and the second edge direction.
After the color values of the two adjacent original pixel points in the first edge direction and the second edge direction are obtained, the receiving end subtracts the color values of the two adjacent original pixel points in the first edge direction to obtain a difference value between the color values of the two adjacent original pixel points in the first edge direction, and meanwhile, the receiving end subtracts the color values of the two adjacent original pixel points in the second edge direction to obtain a difference value between the color values of the two adjacent original pixel points in the second edge direction.
Referring to fig. 7, the No. 1 original pixel point and the No. 4 original pixel point are two adjacent original pixel points in the first edge direction. Let the color value of the No. 1 original pixel point be (255, 240, 245) and the color value of the No. 4 original pixel point be (219, 112, 147). When the receiving end obtains the difference value between the color values of the No. 1 and No. 4 original pixel points, it subtracts the red color values to obtain 255 - 219 = 36, the green color values to obtain 240 - 112 = 128, and the blue color values to obtain 245 - 147 = 98, so the difference value between the color values is (36, 128, 98).
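The channel-by-channel subtraction in this example can be sketched as (Python; the helper name is illustrative):

```python
def channel_difference(c1, c2):
    # Per-channel difference between the color values of two adjacent
    # original pixel points, e.g. No. 1 minus No. 4.
    return tuple(a - b for a, b in zip(c1, c2))
```

For the color values (255, 240, 245) and (219, 112, 147) this yields (36, 128, 98).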
(c) And the receiving end acquires the absolute value of the difference value between the color values of two adjacent original pixel points in the first edge direction and the second edge direction.
The purpose of obtaining the difference value between the color values of two adjacent original pixel points in the first edge direction and the second edge direction is to obtain the gradient of the color values in these directions; since the difference value between the color values of two adjacent original pixel points may be negative in the actual calculation, the receiving end takes its absolute value.
For the absolute value of the difference between the color values of two adjacent original pixels in the first edge direction, see the left diagram in fig. 7, where the absolute value of the difference between the color values of original pixel 1 and original pixel 4 is a11, the absolute value of the difference between the color values of original pixel 2 and original pixel 5 is b22, the absolute value of the difference between the color values of original pixel 5 and original pixel 8 is b21, the absolute value of the difference between the color values of original pixel 3 and original pixel 6 is c33, the absolute value of the difference between the color values of original pixel 6 and original pixel 9 is c32, the absolute value of the difference between the color values of original pixel 9 and original pixel 12 is c31, the absolute value of the difference between the color values of original pixel 7 and original pixel 10 is d42, and the absolute value of the difference between the color values of original pixel 10 and original pixel 13 is d41, the absolute value of the difference between the color values of the original pixel 11 and the original pixel 14 is e 51.
For the absolute value of the difference between the color values of two adjacent original pixels in the second edge direction, see the right diagram in fig. 7, where the absolute value of the difference between the color values of original pixel 2 and original pixel 7 is a1, the absolute value of the difference between the color values of original pixel 1 and original pixel 6 is b2, the absolute value of the difference between the color values of original pixel 6 and original pixel 11 is b1, the absolute value of the difference between the color values of original pixel 0 and original pixel 5 is c3, the absolute value of the difference between the color values of original pixel 5 and original pixel 10 is c2, the absolute value of the difference between the color values of original pixel 10 and original pixel 15 is c1, the absolute value of the difference between the color values of original pixel 4 and original pixel 9 is d2, the absolute value of the difference between the color values of original pixel 9 and original pixel 14 is d1, the absolute value of the difference between the color values of original pixel 8 and original pixel 13 is e 1.
(d) And the receiving end takes the sum of the absolute values of the difference values between the color values of two adjacent original pixel points in the first edge direction as the intensity value in the first edge direction, and takes the sum of the absolute values of the difference values between the color values of two adjacent original pixel points in the second edge direction as the intensity value in the second edge direction.
Based on the absolute values of the differences between the color values of two adjacent original pixel points in the first edge direction obtained in step (c), the receiving end takes the sum of these absolute values as the intensity value in the first edge direction. Referring to the left diagram in fig. 7, the receiving end adds the absolute values of the differences between the color values of two adjacent original pixel points in the first edge direction to obtain the intensity value in the first edge direction:
G1=a11+b21+b22+c31+c32+c33+d41+d42+e51
that is, the intensity value of the interpolation pixel point to be processed in the 45-degree direction is:
G1=a11+b21+b22+c31+c32+c33+d41+d42+e51。
Based on the absolute values of the differences between the color values of two adjacent original pixel points in the second edge direction obtained in step (c), the receiving end takes the sum of these absolute values as the intensity value in the second edge direction. Referring to the right diagram in fig. 7, adding the absolute values of the differences between the color values of two adjacent original pixel points in the second edge direction gives the intensity value in the second edge direction:
G2=a1+b1+b2+c1+c2+c3+d1+d2+e1
that is, the intensity value of the interpolation pixel point to be processed in the 135 degree direction is:
G2=a1+b1+b2+c1+c2+c3+d1+d2+e1。
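The intensity computation of steps (a) to (d) can be illustrated with a short Python sketch. The 135-degree pair list follows the pairings enumerated for the right diagram in fig. 7; the 45-degree pair list is a mirrored assumption, and the toy pixel values are illustrative (not from the patent):

```python
def directional_intensity(pixels, pairs):
    """Sum of absolute color-value differences over adjacent pixel pairs."""
    return sum(abs(pixels[i] - pixels[j]) for i, j in pairs)

# 16 original pixel points numbered 0..15, top to bottom, left to right,
# with a toy single-channel image that is constant along the 135-degree
# diagonals (value depends only on column minus row).
pixels = {n: 10 * (n % 4 - n // 4) for n in range(16)}

# Adjacent pairs in the second edge direction (135 degrees), as listed for
# the right diagram of fig. 7; the 45-degree list is the mirrored pairing
# (an assumption, since it is read off the left diagram of the figure).
pairs_135 = [(2, 7), (1, 6), (6, 11), (0, 5), (5, 10), (10, 15),
             (4, 9), (9, 14), (8, 13)]
pairs_45 = [(1, 4), (2, 5), (5, 8), (3, 6), (6, 9), (9, 12),
            (7, 10), (10, 13), (11, 14)]

G1 = directional_intensity(pixels, pairs_45)   # intensity in the first edge direction
G2 = directional_intensity(pixels, pairs_135)  # intensity in the second edge direction
```

For this toy image the color value never changes along the 135-degree lines, so G2 is 0 while G1 is large, which is exactly the situation the strong-edge test below detects.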
40424. The receiving end judges whether the interpolation pixel point to be processed has a first strong edge direction according to the intensity values of the interpolation pixel point to be processed in the first edge direction and the second edge direction.
The first strong edge direction is the direction in which the intensity value changes least, and is generally the main direction of the first edge direction or the main direction of the second edge direction. When the receiving end judges whether the interpolation pixel point to be processed has the first strong edge direction according to the intensity values of the interpolation pixel point to be processed in the first edge direction and the second edge direction, the following steps (a) to (b) can be adopted:
(a) and the receiving end acquires the ratio of the intensity values in the first edge direction and the second edge direction.
In this embodiment, the ratio of the intensity values in the first edge direction and the second edge direction obtained by the receiving end is:
R1=(1+G1)/(1+G2);
wherein R1 is the ratio of the intensity values in the first edge direction and the second edge direction, G1 is the intensity value in the first edge direction, and G2 is the intensity value in the second edge direction.
It should be noted that, in general, when obtaining the intensity values in the first edge direction and the second edge direction, the two intensity values may be directly compared; in this embodiment, however, 1 is added to each of the intensity values in the first edge direction and the second edge direction, mainly to prevent the denominator of the ratio from being 0 when the intensity value in the second edge direction is 0, and to prevent the ratio from being 0 when the intensity value in the first edge direction is 0.
(b) And the receiving end judges whether the interpolation pixel point to be processed has a first strong edge direction according to the ratio of the intensity values in the first edge direction and the second edge direction.
When the receiving end judges whether the interpolation pixel point to be processed has the first strong edge direction according to the ratio of the intensity values in the first edge direction and the second edge direction, the method includes but is not limited to the following conditions:
if the ratio of the intensity values in the first edge direction and the second edge direction is greater than a first preset threshold value, determining that a first strong edge direction exists in the interpolation pixel point to be processed, and the main direction of the second edge direction is the first strong edge direction;
if the ratio of the intensity values in the first edge direction and the second edge direction is smaller than a first preset threshold value, determining that a first strong edge direction exists in the interpolation pixel point to be processed, and the main direction of the first edge direction is the first strong edge direction;
and if the ratio of the intensity values in the first edge direction and the second edge direction is equal to a first preset threshold value, determining that the interpolation pixel point to be processed does not have the first strong edge direction.
The first preset threshold may be 1, 1.5, 1.7, etc., and the first preset threshold is set to be 1.15 in this embodiment.
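The three cases above can be sketched as follows, assuming the ratio R1 = (1 + G1)/(1 + G2) and the example threshold of 1.15 used in this embodiment (the return labels are illustrative names, not part of the method):

```python
def strong_edge_direction(G1, G2, threshold=1.15):
    """Decide the first strong edge direction from the two intensity values.

    Returns 'second' or 'first' for the main direction identified as the
    strong edge direction, or None when the ratio equals the threshold,
    following the three cases above. The +1 terms keep the ratio
    well-defined (and nonzero) when either intensity value is 0.
    """
    R1 = (1 + G1) / (1 + G2)
    if R1 > threshold:
        return 'second'  # main direction of the second edge direction
    if R1 < threshold:
        return 'first'   # main direction of the first edge direction
    return None          # no first strong edge direction
```

For example, a large intensity in the first edge direction (much change across 45-degree lines) yields a ratio above the threshold, so the 135-degree main direction is taken as the strong edge.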
40425. If the interpolation pixel point to be processed has the first strong edge direction, the receiving end obtains the original pixel point positioned in the first strong edge direction from the plurality of original pixel points, and carries out interpolation calculation on the original pixel point in the first strong edge direction to obtain the actual pixel value of the interpolation pixel point to be processed.
Based on the determined first strong edge direction, the receiving end acquires the original pixel points located in the first strong edge direction from the plurality of original pixel points, calculates the interpolation coefficients corresponding to the original pixel points in the first strong edge direction by adopting a bicubic interpolation function, and then performs a weighted calculation on the pixel values of the original pixel points in the first strong edge direction and the corresponding interpolation coefficients to obtain the actual pixel value of the interpolation pixel point to be processed. The interpolation coefficient determined by the bicubic interpolation function can be expressed by the following formula (1):
W(s) = (3/2)|s/(2a)|^3 - (5/2)|s/(2a)|^2 + 1, when |s| ≤ 2a
W(s) = -(1/2)|s/(2a)|^3 + (5/2)|s/(2a)|^2 - 4|s/(2a)| + 2, when 2a < |s| < 4a
W(s) = 0, otherwise    (1)
wherein a is the distance between two adjacent pixel points, and s is the distance between the original pixel point and the interpolation pixel point to be processed. In this embodiment, the distance between two adjacent pixel points is set to 1/2, that is, a = 1/2, and the interpolation coefficient determined by the bicubic interpolation function can then be expressed by the following formula (2):
W(s) = (3/2)|s|^3 - (5/2)|s|^2 + 1, when |s| ≤ 1
W(s) = -(1/2)|s|^3 + (5/2)|s|^2 - 4|s| + 2, when 1 < |s| < 2
W(s) = 0, otherwise    (2)
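Formula (2) can be sketched in Python; the piecewise cubic form is a reconstruction (the original formula image is not recoverable) chosen to reproduce the coefficients -1/16 and 9/16 derived in the worked examples below:

```python
def bicubic_coefficient(s):
    """Interpolation coefficient for an original pixel point at distance s
    from the interpolation pixel point to be processed, per formula (2)
    with a = 1/2. The piecewise cubic is reconstructed to match the worked
    coefficients: W(1/2) = 9/16 and W(3/2) = -1/16."""
    s = abs(s)
    if s <= 1:
        return 1.5 * s**3 - 2.5 * s**2 + 1
    if s < 2:
        return -0.5 * s**3 + 2.5 * s**2 - 4 * s + 2
    return 0.0
```

Substituting the two distances that occur in this embodiment, s = 1/2 and s = 3/2, yields 9/16 and -1/16 respectively, in agreement with the coefficient derivations that follow.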
When the first strong edge direction is the main direction of the first edge direction, referring to the left diagram in fig. 7, the original pixel points in the main direction of the first edge direction are original pixel point 3, original pixel point 6, original pixel point 9 and original pixel point 12. As can be seen from the left diagram in fig. 7, the distance s between original pixel point 3 and the interpolation pixel point to be processed is 3/2; substituting s = 3/2 into the above formula (2) gives the interpolation coefficient corresponding to original pixel point 3 as -1/16. The distance between original pixel point 6 and the interpolation pixel point to be processed is 1/2; substituting s = 1/2 into formula (2) gives the interpolation coefficient corresponding to original pixel point 6 as 9/16. The distance between original pixel point 9 and the interpolation pixel point to be processed is 1/2; substituting s = 1/2 into formula (2) gives the interpolation coefficient corresponding to original pixel point 9 as 9/16. The distance between original pixel point 12 and the interpolation pixel point to be processed is 3/2; substituting s = 3/2 into formula (2) gives the interpolation coefficient corresponding to original pixel point 12 as -1/16. Based on the determined interpolation coefficients corresponding to original pixel points 3, 6, 9 and 12, the receiving end may obtain the actual pixel value of the interpolation pixel point to be processed as value1 = -1/16 × p3 + 9/16 × p6 + 9/16 × p9 - 1/16 × p12.
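With the four coefficients (-1/16, 9/16, 9/16, -1/16), the interpolation along the strong edge direction can be sketched as:

```python
def interpolate_along_edge(p_a, p_b, p_c, p_d):
    """Apply the four interpolation coefficients to the four original pixel
    points lying on the strong edge direction, in order of increasing
    position along the line, e.g.
    value1 = -1/16*p3 + 9/16*p6 + 9/16*p9 - 1/16*p12."""
    return -1/16 * p_a + 9/16 * p_b + 9/16 * p_c - 1/16 * p_d
```

Note that the coefficients sum to 1, so a region of constant color is reproduced exactly by the interpolation.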
When the first strong edge direction is the main direction of the second edge direction, referring to the right diagram in fig. 7, the original pixel points in the main direction of the second edge direction are original pixel point 0, original pixel point 5, original pixel point 10 and original pixel point 15. As can be seen from the right diagram in fig. 7, the distance s between original pixel point 0 and the interpolation pixel point to be processed is 3/2; substituting s = 3/2 into the above formula (2) gives the interpolation coefficient corresponding to original pixel point 0 as -1/16. The distance between original pixel point 5 and the interpolation pixel point to be processed is 1/2; substituting s = 1/2 into formula (2) gives the interpolation coefficient corresponding to original pixel point 5 as 9/16. The distance between original pixel point 10 and the interpolation pixel point to be processed is 1/2; substituting s = 1/2 into formula (2) gives the interpolation coefficient corresponding to original pixel point 10 as 9/16. The distance between original pixel point 15 and the interpolation pixel point to be processed is 3/2; substituting s = 3/2 into formula (2) gives the interpolation coefficient corresponding to original pixel point 15 as -1/16. Based on the determined interpolation coefficients corresponding to original pixel points 0, 5, 10 and 15, the receiving end may obtain the actual pixel value of the interpolation pixel point to be processed as value2 = -1/16 × p0 + 9/16 × p5 + 9/16 × p10 - 1/16 × p15.
40426. If the interpolation pixel point to be processed does not have the first strong edge direction, the receiving end obtains the original pixel points located in the main direction of the first edge direction and in the main direction of the second edge direction from the plurality of original pixel points, and performs interpolation calculation on the original pixel points in each of the two main directions to obtain the interpolation calculation results in the main direction of the first edge direction and in the main direction of the second edge direction. The receiving end then performs a weighting operation on the two interpolation calculation results to obtain the actual pixel value of the interpolation pixel point to be processed.
For the process of obtaining the original pixel points located in the main direction of the first edge direction and the main direction of the second edge direction and performing interpolation calculation on the original pixel points located in the main direction of the first edge direction and the main direction of the second edge direction, refer to step 40425 above, which is not described herein again.
Based on the interpolation calculation results in the main direction of the first edge direction and the main direction of the second edge direction, the receiving end performs weighting operation on the interpolation calculation results in the main direction of the first edge direction and the main direction of the second edge direction, and the obtained actual pixel value of the interpolation pixel point to be processed is as follows:
P1=(w1*value1+w2*value2)/(w1+w2)
w1=1/(1+G1)^k
w2=1/(1+G2)^k
wherein P1 is the actual pixel value of the interpolation pixel point to be processed, w1 is the weight value corresponding to the interpolation calculation result value1 in the main direction of the first edge direction, w2 is the weight value corresponding to the interpolation calculation result value2 in the main direction of the second edge direction, and k is a designated parameter that can be obtained through testing; the value of k is set to 5 in this embodiment.
It should be noted that, when the actual pixel value of the interpolation pixel point to be processed is obtained, 1 is added to the denominators of w1 and w2 in order to prevent the denominator from being 0.
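The weighting operation can be sketched as follows. The weight formulas w = 1/(1 + G)^k are an assumed reconstruction (the original formula images are not recoverable), chosen to be consistent with the notes above on the +1 in the denominator and on k = 5:

```python
def blend_interpolations(value1, value2, G1, G2, k=5):
    """Weighted blend used when no first strong edge direction exists.

    Each direction's interpolation result is weighted inversely by that
    direction's intensity value: w = 1 / (1 + G)**k, where the +1 prevents
    a zero denominator and k = 5 in this embodiment (assumed weight form).
    """
    w1 = 1 / (1 + G1) ** k  # weight for the first edge direction result
    w2 = 1 / (1 + G2) ** k  # weight for the second edge direction result
    return (w1 * value1 + w2 * value2) / (w1 + w2)
```

When the two intensity values are equal the blend reduces to a plain average, and when one direction's intensity dominates, the result tends toward the interpolation along the quieter direction.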
4043. When the interpolation pixel point to be processed is located in the third position region of the designated original pixel point, the receiving end may calculate the actual pixel value of the interpolation pixel point to be processed by using the following steps 40431 to 40436:
40431. and the receiving end acquires the third edge direction and the fourth edge direction of the interpolation pixel point to be processed.
Because the interpolation pixel points at the lower left corner and the upper right corner of the designated original pixel point in the third position region lie on opposite diagonals of each other, in order to prevent the interpolation pixel points in the third position region from generating a dependency relationship with each other, only the intensity values of the interpolation pixel point to be processed in the third edge direction and the fourth edge direction are calculated in this embodiment when the interpolation pixel point to be processed is located in the third position region of the designated original pixel point. The third edge direction may be a 0 degree direction, a 10 degree direction, etc., and the fourth edge direction is perpendicular to the third edge direction and may be a 90 degree direction, a 100 degree direction, etc.; this embodiment takes the third edge direction as the 0 degree direction and the fourth edge direction as the 90 degree direction as an example for explanation.
It should be noted that the third edge direction is a set of parallel lines forming an angle of 0 degree with the horizontal direction, and the fourth edge direction is a set of parallel lines forming an angle of 90 degrees with the horizontal direction, in this embodiment, the third edge direction passing through the to-be-processed interpolation pixel point is referred to as a main direction of the third edge direction, and the fourth edge direction passing through the to-be-processed interpolation pixel point is referred to as a main direction of the fourth edge direction.
40432. The receiving end takes interpolation pixel points to be processed as a center, and selects a plurality of pixel points on the enlarged image, wherein the pixel points comprise original pixel points and interpolation pixel points.
When the interpolation pixel point to be processed is located in the third position area of the designated original pixel point, the receiving end takes the interpolation pixel point to be processed as the center, and when a plurality of pixel points are selected on the enlarged image, the following two modes can be adopted:
In the first mode, the receiving end selects a plurality of original pixel points on the enlarged image by taking the interpolation pixel point to be processed as the center, and numbers the selected original pixel points in sequence from top to bottom and from left to right; in this embodiment, 16 original pixel points are preferably selected. Referring to fig. 8, the gray dot is the interpolation pixel point to be processed, the black dots are original pixel points, and the white dots are interpolation pixel points. The 16 selected original pixel points are numbered sequentially from top to bottom and from left to right as 0, 1, 2, …, 14 and 15 (hereinafter denoted p0, p1, p2, …, p15).
In the second mode, the receiving end takes the interpolation pixel point to be processed as the center, selects a plurality of original pixel points and interpolation pixel points on the enlarged image, and numbers the selected plurality of original pixel points and interpolation pixel points according to the calculation requirement.
Because the third position area comprises the upper right corner or the lower left corner of the appointed original pixel point, when the interpolation pixel point to be processed is positioned at different positions of the third position area, the modes of selecting a plurality of original pixel points and interpolation pixel points on the enlarged image by the receiving end are different. For example, when the interpolation pixel to be processed is located at the upper right corner of the designated original pixel, the pixel can be selected in the manner shown in fig. 9, where the 0 th pixel, the 3 rd pixel, the 4 th pixel, the 5 th pixel, the 10 th pixel, the 11 th pixel, the 12 th pixel, and the 15 th pixel are interpolation pixels, and the 1 st pixel, the 2 nd pixel, the 6 th pixel, the 7 th pixel, the 8 th pixel, the 9 th pixel, the 13 th pixel, and the 14 th pixel are original pixels; when the interpolation pixel point to be processed is located at the lower left corner of the designated original pixel point, the pixel points can be selected in the manner shown in fig. 10, wherein the number 0 pixel point, the number 3 pixel point, the number 4 pixel point, the number 5 pixel point, the number 10 pixel point, the number 11 pixel point, the number 12 pixel point, and the number 15 pixel point are the original pixel points, and the number 1 pixel point, the number 2 pixel point, the number 6 pixel point, the number 7 pixel point, the number 8 pixel point, the number 9 pixel point, the number 13 pixel point, and the number 14 pixel point are the interpolation pixel points.
Taking fig. 9 as an example, the direction of the connection lines of the No. 6 pixel, the No. 7 pixel, the No. 8 pixel and the No. 9 pixel is the main direction of the third edge direction, and the direction of the connection lines of the No. 0 pixel, the No. 4 pixel, the No. 11 pixel and the No. 15 pixel is the main direction of the fourth edge direction.
40433. The receiving end determines the intensity values of the interpolation pixel point to be processed in the third edge direction and the fourth edge direction according to the plurality of pixel points.
Based on the pixel points selected in step 40432, when determining the intensity values of the interpolation pixel points to be processed in the third edge direction and the fourth edge direction, the receiving end may adopt the following two modes:
in the first mode, when all the pixels selected by the receiving end are original pixels, the receiving end may adopt the following steps (a) to (d) when determining the intensity values of the interpolation pixels to be processed in the third edge direction and the fourth edge direction according to a plurality of pixels:
(a) and the receiving end respectively acquires the color values of two adjacent original pixel points in the third edge direction and the fourth edge direction.
The two adjacent original pixel points in the third edge direction are two original pixel points which are closest to each other in the third edge direction, and referring to fig. 11, the two adjacent original pixel points in the third edge direction include a number 1 original pixel point and a number 2 original pixel point, a number 4 original pixel point and a number 5 original pixel point, a number 5 original pixel point and a number 6 original pixel point, a number 6 original pixel point and a number 7 original pixel point, a number 8 original pixel point and a number 9 original pixel point, a number 9 original pixel point and a number 10 original pixel point, a number 10 original pixel point and a number 11 original pixel point, a number 13 original pixel point and a number 14 original pixel point; two original pixel points adjacent to each other in the fourth edge direction are two original pixel points closest to each other in the fourth edge direction, and the two original pixel points adjacent to each other in the fourth edge direction include a number 4 original pixel point and a number 8 original pixel point, a number 1 original pixel point and a number 5 original pixel point, a number 5 original pixel point and a number 9 original pixel point, a number 9 original pixel point and a number 13 original pixel point, a number 2 original pixel point and a number 6 original pixel point, a number 6 original pixel point and a number 10 original pixel point, a number 10 original pixel point and a number 14 original pixel point, a number 7 original pixel point and a number 11 original pixel point.
When the receiving end obtains the color values of two adjacent original pixel points in the third edge direction and the fourth edge direction, the color values can be obtained based on the color code table. Taking the No. 5 original pixel point as an example, if the obtained color code of the No. 5 original pixel point is #EE82EE, it can be known that the color values of the three colors red, green and blue in the No. 5 original pixel point are 238, 130 and 238, respectively; if the obtained color code of the No. 5 original pixel point is #FF00FF, it can be known that the color values of the three colors red, green and blue in the No. 5 original pixel point are 255, 0 and 255, respectively.
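The lookup from a color code to its red, green and blue color values can be sketched as follows (the function name is illustrative, and standard #RRGGBB hexadecimal codes are assumed):

```python
def color_code_to_rgb(code):
    """Parse a '#RRGGBB' color code into its (red, green, blue) color
    values by reading each two-character hexadecimal channel."""
    code = code.lstrip('#')
    return tuple(int(code[i:i + 2], 16) for i in range(0, 6, 2))
```

For instance, the code #EE82EE yields the color values (238, 130, 238) cited above.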
(b) And the receiving end acquires the difference value between the color values of two adjacent original pixel points and interpolation pixel points in the third edge direction and the fourth edge direction.
After the color values of two adjacent original pixel points in the third edge direction and the fourth edge direction are obtained, the receiving end subtracts the color values of the two adjacent original pixel points in the third edge direction to obtain a difference value between the color values of the two adjacent original pixel points in the third edge direction, and meanwhile, the receiving end subtracts the color values of the two adjacent original pixel points in the fourth edge direction to obtain a difference value between the color values of the two adjacent original pixel points in the fourth edge direction.
Referring to fig. 11, original pixel point 1 and original pixel point 2 are two adjacent original pixel points in the third edge direction; let the color value of original pixel point 1 be (199, 21, 133) and the color value of original pixel point 2 be (218, 112, 214). When the receiving end obtains the difference between the color values of original pixel point 1 and original pixel point 2, the red color values may be subtracted to obtain 199 - 218 = -19, the green color values subtracted to obtain 21 - 112 = -91, and the blue color values subtracted to obtain 133 - 214 = -81, so that the difference between the color values of original pixel point 1 and original pixel point 2 is finally obtained as (-19, -91, -81).
(c) And the receiving end acquires the absolute value of the difference value between the color values of two adjacent original pixel points and interpolation pixel points in the third edge direction and the fourth edge direction.
The purpose of obtaining the difference between the color values of two adjacent original pixel points in the third edge direction and the fourth edge direction is to obtain the gradient of the color values of two adjacent original pixel points in the third edge direction and the fourth edge direction; in actual calculation, the difference between the color values of two adjacent original pixel points in the third edge direction and the fourth edge direction may be negative, so the absolute value of the difference is taken.
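The per-channel difference of step (b) and the absolute difference of step (c) can be sketched as follows (function names are illustrative):

```python
def channel_differences(c1, c2):
    """Per-channel differences between two (R, G, B) color values;
    the result may contain negative components."""
    return tuple(a - b for a, b in zip(c1, c2))

def channel_abs_differences(c1, c2):
    """Absolute per-channel differences, used as the gradient magnitude
    when accumulating the directional intensity values."""
    return tuple(abs(a - b) for a, b in zip(c1, c2))
```

Applied to the worked example above, the color values (199, 21, 133) and (218, 112, 214) give the difference (-19, -91, -81) and the absolute difference (19, 91, 81).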
For the absolute value of the difference between the color values of two adjacent original pixel points in the third edge direction, see the left diagram in fig. 11, wherein the absolute value of the difference between the color values of original pixel point 1 and original pixel point 2 is a11, the absolute value of the difference between the color values of original pixel point 4 and original pixel point 5 is b21, the absolute value of the difference between the color values of original pixel point 5 and original pixel point 6 is b22, the absolute value of the difference between the color values of original pixel point 6 and original pixel point 7 is b23, the absolute value of the difference between the color values of original pixel point 8 and original pixel point 9 is c31, the absolute value of the difference between the color values of original pixel point 9 and original pixel point 10 is c32, the absolute value of the difference between the color values of original pixel point 10 and original pixel point 11 is c33, and the absolute value of the difference between the color values of original pixel point 13 and original pixel point 14 is d41.
For the absolute value of the difference between the color values of two adjacent original pixel points in the fourth edge direction, see the right diagram in fig. 11, wherein the absolute value of the difference between the color values of original pixel point 7 and original pixel point 11 is a1, the absolute value of the difference between the color values of original pixel point 2 and original pixel point 6 is b1, the absolute value of the difference between the color values of original pixel point 6 and original pixel point 10 is b2, the absolute value of the difference between the color values of original pixel point 10 and original pixel point 14 is b3, the absolute value of the difference between the color values of original pixel point 1 and original pixel point 5 is c1, the absolute value of the difference between the color values of original pixel point 5 and original pixel point 9 is c2, the absolute value of the difference between the color values of original pixel point 9 and original pixel point 13 is c3, and the absolute value of the difference between the color values of original pixel point 4 and original pixel point 8 is d1.
(d) And the receiving end takes the sum of absolute values of differences between color values of two adjacent original pixel points and interpolation pixel points in the third edge direction as an intensity value in the third edge direction, and takes the sum of absolute values of differences between color values of two adjacent original pixel points and interpolation pixel points in the fourth edge direction as an intensity value in the fourth edge direction.
Based on the absolute values of the differences between the color values of the two adjacent original pixel points in the third edge direction obtained in the step (c), the receiving end uses the sum of the absolute values of the differences between the color values of the two adjacent original pixel points in the third edge direction as the intensity value in the third edge direction. Referring to the left image in fig. 11, the receiving end may obtain the intensity value in the third edge direction by adding the absolute values of the differences between the color values of two adjacent original pixel points in the third edge direction:
G3=a11+b21+b22+b23+c31+c32+c33+d41
that is, the intensity value of the interpolation pixel point to be processed in the 0 degree direction is:
G3=a11+b21+b22+b23+c31+c32+c33+d41。
Based on the absolute values of the differences between the color values of the two adjacent original pixel points in the fourth edge direction obtained in the step (c), the receiving end uses the sum of the absolute values of the differences between the color values of the two adjacent original pixel points in the fourth edge direction as the intensity value in the fourth edge direction. Referring to the right diagram in fig. 11, by adding the absolute values of the differences between the color values of two adjacent original pixel points in the fourth edge direction, it can be obtained that the intensity value in the fourth edge direction is:
G4=a1+b1+b2+b3+c1+c2+c3+d1;
that is, the intensity value of the interpolation pixel point to be processed in the 90-degree direction is:
G4=a1+b1+b2+b3+c1+c2+c3+d1。
In the second mode, when the pixel points selected by the receiving end are original pixel points and interpolation pixel points, the receiving end may adopt the following steps (a) to (d) when determining the intensity values of the interpolation pixel point to be processed in the third edge direction and the fourth edge direction according to the plurality of pixel points:
(a) and the receiving end respectively acquires the color values of two adjacent original pixel points and interpolation pixel points in the third edge direction and the fourth edge direction.
When the interpolation pixel point to be processed is located at the upper right corner of the designated original pixel point, referring to fig. 9, two pixel points adjacent to each other in the third edge direction are the No. 1 original pixel point and the No. 2 original pixel point, the No. 7 original pixel point and the No. 8 original pixel point, the No. 13 original pixel point and the No. 14 original pixel point, the No. 3 interpolation pixel point and the No. 4 interpolation pixel point, the No. 4 interpolation pixel point and the No. 5 interpolation pixel point, the No. 10 interpolation pixel point and the No. 11 interpolation pixel point, the No. 11 interpolation pixel point and the No. 12 interpolation pixel point; two pixel points adjacent to each other in the fourth edge direction are the No. 3 interpolation pixel point and the No. 10 interpolation pixel point, the No. 4 interpolation pixel point and the No. 11 interpolation pixel point, the No. 5 interpolation pixel point and the No. 12 interpolation pixel point, the No. 1 original pixel point and the No. 7 original pixel point, the No. 7 original pixel point and the No. 13 original pixel point, the No. 2 original pixel point and the No. 8 original pixel point, the No. 8 original pixel point and the No. 14 original pixel point.
When the interpolation pixel point to be processed is located at the lower left corner of the designated original pixel point, referring to fig. 10, two adjacent pixel points in the third edge direction are an interpolation pixel point No. 1 and an interpolation pixel point No. 2, an interpolation pixel point No. 7 and an interpolation pixel point No. 8, an interpolation pixel point No. 13 and an interpolation pixel point No. 14, an original pixel point No. 3 and an original pixel point No. 4, an original pixel point No. 4 and an original pixel point No. 5, an original pixel point No. 10 and an original pixel point No. 11, an original pixel point No. 11 and an original pixel point No. 12; two adjacent pixel points in the fourth edge direction are a number 3 original pixel point and a number 10 original pixel point, a number 4 original pixel point and a number 11 original pixel point, a number 5 original pixel point and a number 12 original pixel point, a number 1 interpolation pixel point and a number 7 interpolation pixel point, a number 7 interpolation pixel point and a number 13 interpolation pixel point, a number 2 interpolation pixel point and a number 8 interpolation pixel point, a number 8 interpolation pixel point and a number 14 interpolation pixel point.
The receiving end can obtain the color values of two adjacent original pixel points and interpolation pixel points in the third edge direction and the fourth edge direction according to the color code table. Taking the No. 4 interpolation pixel point in fig. 9 as an example, if the obtained color code of the No. 4 interpolation pixel point is #00008B, the color values of the three colors red, green and blue in the No. 4 interpolation pixel point can be obtained as 0, 0 and 139, respectively; if the obtained color code of the No. 4 interpolation pixel point is #000080, the color values of the three colors red, green and blue in the No. 4 interpolation pixel point can be obtained as 0, 0 and 128, respectively.
(b) And the receiving end acquires the difference value between the color values of two adjacent original pixel points and interpolation pixel points in the third edge direction and the fourth edge direction.
After the color values of two adjacent original pixel points and interpolation pixel points in the third edge direction are obtained, the receiving end subtracts the color values of the two adjacent original pixel points in the third edge direction and subtracts the color values of the two adjacent interpolation pixel points in the third edge direction, so as to obtain the differences between the color values of two adjacent original pixel points and between the color values of two adjacent interpolation pixel points in the third edge direction. Similarly, after the color values of two adjacent original pixel points and interpolation pixel points in the fourth edge direction are obtained, the receiving end subtracts the color values of the two adjacent original pixel points in the fourth edge direction and subtracts the color values of the two adjacent interpolation pixel points, so as to obtain the differences between the color values of two adjacent original pixel points and between the color values of two adjacent interpolation pixel points in the fourth edge direction.
Referring to fig. 10, the interpolation pixel point 7 and the interpolation pixel point 8 are two adjacent interpolation pixel points in the third edge direction. Set the color value of the interpolation pixel point 7 to (0, 139, 139) and the color value of the interpolation pixel point 8 to (0, 128, 128). When the receiving end obtains the difference between the color values of the interpolation pixel point 7 and the interpolation pixel point 8, the red color value in the interpolation pixel point 7 and the red color value in the interpolation pixel point 8 may be subtracted to obtain 0 - 0 = 0, the green color values subtracted to obtain 139 - 128 = 11, and the blue color values subtracted to obtain 139 - 128 = 11, so that the difference between the color values of the interpolation pixel point 7 and the interpolation pixel point 8 is (0, 11, 11).
(c) And the receiving end acquires the absolute value of the difference value between the color values of two adjacent original pixel points and interpolation pixel points in the third edge direction and the fourth edge direction.
The purpose of obtaining the differences between the color values of the two adjacent original pixel points and interpolation pixel points in the third edge direction and the fourth edge direction is to obtain the gradient of the color values in those directions, and since the differences may be negative in practical calculation, their absolute values are taken. For the absolute value of the difference between the color values of two adjacent original pixels and interpolated pixels in the third edge direction, taking fig. 9 as an example, the absolute value of the difference between the color values of the original pixel point 1 and the original pixel point 2 is |p1-p2|, the absolute value of the difference between the color values of the original pixel point 7 and the original pixel point 8 is |p7-p8|, the absolute value of the difference between the color values of the original pixel point 13 and the original pixel point 14 is |p13-p14|, the absolute value of the difference between the color values of the interpolation pixel point 3 and the interpolation pixel point 4 is |p3-p4|, the absolute value of the difference between the color values of the interpolation pixel point 4 and the interpolation pixel point 5 is |p4-p5|, the absolute value of the difference between the color values of the interpolation pixel point 10 and the interpolation pixel point 11 is |p10-p11|, and the absolute value of the difference between the color values of the interpolation pixel point 11 and the interpolation pixel point 12 is |p11-p12|.
For the absolute value of the difference between the color values of two adjacent original pixels and interpolated pixels in the fourth edge direction, still taking fig. 9 as an example, the absolute value of the difference between the color values of the interpolation pixel 3 and the interpolation pixel 10 is | p3-p10|, the absolute value of the difference between the color values of the interpolation pixel 4 and the interpolation pixel 11 is | p4-p11|, the absolute value of the difference between the color values of the interpolation pixel 5 and the interpolation pixel 12 is | p5-p12|, the absolute value of the difference between the color values of the original pixel 1 and the original pixel 7 is | p1-p7|, the absolute value of the difference between the color values of the original pixel 7 and the original pixel 13 is | p7-p13|, the absolute value of the difference between the color values of the original pixel 2 and the original pixel 8 is | p2-p8|, and the absolute value of the difference between the color values of the original pixel 8 and the original pixel 14 is | p8-p14 |.
(d) And the receiving end takes the sum of absolute values of differences between color values of two adjacent original pixel points and interpolation pixel points in the third edge direction as an intensity value in the third edge direction, and takes the sum of absolute values of differences between color values of two adjacent original pixel points and interpolation pixel points in the fourth edge direction as an intensity value in the fourth edge direction.
Based on the absolute value of the difference between the color values of the two adjacent original pixel points and the interpolation pixel point in the third edge direction obtained in the step (c), the receiving end uses the sum of the absolute values of the differences between the color values of the two adjacent original pixel points and the interpolation pixel point in the third edge direction as the intensity value in the third edge direction. Referring to fig. 9, the receiving end may obtain the intensity value G3 in the third edge direction by adding the absolute values of the differences between the color values of the two adjacent original pixel points and the interpolation pixel point in the third edge direction:
t1=|p1-p2|+|p7-p8|+|p13-p14|
t2=|p3-p4|+|p4-p5|
t3=|p10-p11|+|p11-p12|
G3=t1+t2+t3=|p1-p2|+|p7-p8|+|p13-p14|+|p3-p4|+|p4-p5|+|p10-p11|+|p11-p12|
that is, the intensity value of the interpolation pixel point to be processed in the 0 degree direction is:
G3=t1+t2+t3=|p1-p2|+|p7-p8|+|p13-p14|+|p3-p4|+|p4-p5|+|p10-p11|+|p11-p12|
Based on the absolute value of the difference between the color values of the two adjacent original pixels and the interpolation pixel in the fourth edge direction obtained in the step (c), the receiving end uses the sum of the absolute values of the differences between the color values of the two adjacent original pixels and the interpolation pixel in the fourth edge direction as the intensity value in the fourth edge direction. Referring to fig. 9, the receiving end may obtain the intensity value G4 in the fourth edge direction by adding the absolute values of the differences between the color values of two adjacent original pixel points and the interpolation pixel point in the fourth edge direction as follows:
t4=|p3-p10|+|p4-p11|+|p5-p12|
t5=|p1-p7|+|p7-p13|;
t6=|p2-p8|+|p8-p14|;
G4=t4+t5+t6=|p3-p10|+|p4-p11|+|p5-p12|+|p1-p7|+|p7-p13|+|p2-p8|+|p8-p14|
that is, the intensity value of the interpolation pixel point to be processed in the 90-degree direction is:
G4=t4+t5+t6=|p3-p10|+|p4-p11|+|p5-p12|+|p1-p7|+|p7-p13|+|p2-p8|+|p8-p14|。
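The sums t1–t6 above can be sketched as follows (a minimal illustration assuming single-channel pixel values stored in a dict `p` keyed by the pixel numbers of fig. 9; the pair lists transcribe the terms of G3 and G4, and the sample values of `p` are hypothetical):

```python
def intensity(pairs, p):
    """Sum of absolute differences between adjacent pixel values along one direction."""
    return sum(abs(p[a] - p[b]) for a, b in pairs)

# Adjacent pairs in the third (0-degree) and fourth (90-degree) edge directions (fig. 9).
G3_PAIRS = [(1, 2), (7, 8), (13, 14), (3, 4), (4, 5), (10, 11), (11, 12)]
G4_PAIRS = [(3, 10), (4, 11), (5, 12), (1, 7), (7, 13), (2, 8), (8, 14)]

p = {n: n * 10 for n in range(16)}  # hypothetical pixel values
G3 = intensity(G3_PAIRS, p)  # intensity value in the third edge direction
G4 = intensity(G4_PAIRS, p)  # intensity value in the fourth edge direction
```

With the hypothetical values above, the horizontally adjacent pairs differ less than the vertically adjacent ones, so G3 comes out much smaller than G4.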
40434. The receiving end judges whether the interpolation pixel point to be processed has a second strong edge direction according to the intensity values of the interpolation pixel point to be processed in the third edge direction and the fourth edge direction.
The second strong edge direction is a direction in which the intensity value changes less, and is generally a main direction of the third edge direction or a main direction of the fourth edge direction. When the receiving end judges whether the interpolation pixel point to be processed has the second strong edge direction according to the intensity values of the interpolation pixel point to be processed in the third edge direction and the fourth edge direction, the following steps (a) to (b) can be adopted:
(a) The receiving end acquires the ratio of the intensity values in the third edge direction and the fourth edge direction.
In this embodiment, the ratio of the intensity values obtained by the receiving end in the third edge direction and the fourth edge direction is:
R2=(1+G3)/(1+G4);
wherein R2 is the ratio of the intensity values in the third edge direction and the fourth edge direction, G3 is the intensity value in the third edge direction, and G4 is the intensity value in the fourth edge direction.
It should be noted that, in general, after obtaining the intensity values in the third edge direction and the fourth edge direction, the two intensity values may be compared directly; in this embodiment, however, 1 is added to each of the intensity values in the third edge direction and the fourth edge direction, mainly to prevent the intensity value in the third edge direction or in the fourth edge direction from being 0, which would make the ratio of the intensity values 0 or undefined.
(b) And the receiving end judges whether the interpolation pixel point to be processed has a second strong edge direction according to the ratio of the intensity values in the third edge direction and the fourth edge direction.
When the receiving end judges whether the interpolation pixel point to be processed has the second strong edge direction according to the ratio of the intensity values in the third edge direction and the fourth edge direction, the method includes but is not limited to the following steps:
if the ratio of the intensity values in the third edge direction and the fourth edge direction is greater than a second preset threshold value, determining that a second strong edge direction exists in the interpolation pixel point to be processed, and the main direction of the fourth edge direction is the second strong edge direction;
if the ratio of the intensity values in the third edge direction and the fourth edge direction is smaller than a second preset threshold value, determining that a second strong edge direction exists in the interpolation pixel point to be processed, and the main direction of the third edge direction is the second strong edge direction;
and if the ratio of the intensity values in the third edge direction and the fourth edge direction is equal to a second preset threshold value, determining that the interpolation pixel point to be processed does not have a second strong edge direction.
The second preset threshold may be 1, 1.5, 1.7, etc., and the second preset threshold is set to be 1.15 in this embodiment.
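Steps (a)–(b) amount to the following decision (a sketch; the return labels are illustrative, and the threshold comparison follows the rules stated above):

```python
def second_strong_edge(G3, G4, threshold=1.15):
    """Decide whether a second strong edge direction exists from the two intensity values."""
    R2 = (1 + G3) / (1 + G4)  # +1 guards against a zero intensity value
    if R2 > threshold:
        return 'fourth'       # main direction of the fourth edge direction is strong
    if R2 < threshold:
        return 'third'        # main direction of the third edge direction is strong
    return None               # ratio equals the threshold: no second strong edge direction

print(second_strong_edge(100, 10))  # 'fourth' (much more change along the third direction)
```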
40435. And if the interpolation pixel point to be processed has a second strong edge direction, the receiving end acquires the pixel point in the second strong edge direction according to the plurality of pixel points, and performs interpolation calculation on the pixel point in the second strong edge direction to obtain the actual pixel value of the interpolation pixel point to be processed.
Based on the determined second strong edge direction, the receiving end first obtains the pixel points in the second strong edge direction from the plurality of pixel points. Specifically, if original pixel points exist in the second strong edge direction, the receiving end can directly acquire those pixel points; if no original pixel point exists in the second strong edge direction, the receiving end acquires interpolation pixel points in the second strong edge direction, which may be interpolation pixel points among the plurality of pixel points selected by the receiving end, or may be obtained by the receiving end by performing interpolation calculation, using a bilinear interpolation algorithm, on the original pixel points adjacent to the interpolation pixel point to be processed.
Referring to fig. 8, when the receiving end selects a plurality of pixel points around the interpolation pixel point to be processed, the selected pixel points are all original pixel points. When the second strong edge direction is the third edge direction, because original pixel points 4, 5, 6, and 7 exist in the second strong edge direction, the receiving end can use the original pixel points 4, 5, 6, and 7 as the pixel points in the second strong edge direction. When the second strong edge direction is the fourth edge direction, the receiving end performs interpolation calculation on original pixel points around the interpolation pixel point to be processed by using a bilinear interpolation algorithm: for example, interpolation calculation can be performed on the original pixel point 1 and the original pixel point 2 to obtain an interpolation pixel point 1', on the original pixel point 5 and the original pixel point 6 to obtain an interpolation pixel point 2', on the original pixel point 9 and the original pixel point 10 to obtain an interpolation pixel point 3', and on the original pixel point 13 and the original pixel point 14 to obtain an interpolation pixel point 4'; the interpolation pixel point 1', interpolation pixel point 2', interpolation pixel point 3', and interpolation pixel point 4' are then used as the pixel points in the second strong edge direction.
Referring to fig. 9, when the receiving end selects a plurality of pixels with the interpolated pixel as the center, the selected plurality of pixels are the original pixels and the interpolated pixel, and when the second strong edge direction is the third edge direction, because the original pixels 6, 7, 8, 9 exist in the second strong edge direction, the receiving end can use the original pixels 6, 7, 8, 9 as the pixels in the second strong edge direction; when the second strong edge direction is the fourth edge direction, because there is no original pixel in the fourth edge direction, the receiving end can use the interpolation pixel 0, the interpolation pixel 4, the interpolation pixel 11, and the interpolation pixel 15 as pixels in the second strong edge direction.
Referring to fig. 10, when the receiving end selects a plurality of pixel points with the interpolated pixel point as the center, the selected plurality of pixel points are the original pixel points and the interpolated pixel points, and when the second strong edge direction is the third edge direction, because the original pixel points do not exist in the second strong edge direction, the receiving end can use the interpolated pixel point 6, the interpolated pixel point 7, the interpolated pixel point 8, and the interpolated pixel point 9 as pixel points in the second strong edge direction; when the second strong edge direction is the fourth edge direction, since the original pixel point 0, the original pixel point 4, the original pixel point 11, and the original pixel point 15 exist in the second strong edge direction, the receiving end uses the original pixel point 0, the original pixel point 4, the original pixel point 11, and the original pixel point 15 as pixel points in the second strong edge direction.
Based on the determined pixel points in the second strong edge direction, the receiving end calculates the interpolation coefficient corresponding to each pixel point in the second strong edge direction by adopting a bicubic interpolation function, and then obtains the actual pixel value of the interpolation pixel point to be processed by performing weighted calculation on the pixel values of the original pixel points or interpolation pixel points in the second strong edge direction and the corresponding interpolation coefficients. The interpolation coefficient determined by the bicubic interpolation function can be expressed by the formula (1):
W(s) = (3/2)|s/(2a)|^3 - (5/2)|s/(2a)|^2 + 1, for 0 ≤ |s| ≤ 2a
W(s) = -(1/2)|s/(2a)|^3 + (5/2)|s/(2a)|^2 - 4|s/(2a)| + 2, for 2a < |s| ≤ 4a
W(s) = 0, otherwise    (1)
wherein a is the distance between two adjacent original pixel points or interpolation pixel points, and s is the distance between the original pixel point or interpolation pixel point and the interpolation pixel point to be processed. In this embodiment, a is set to 1/2, and the interpolation coefficient determined by the bicubic interpolation function can then be represented by the following formula (2):
W(s) = (3/2)|s|^3 - (5/2)|s|^2 + 1, for 0 ≤ |s| ≤ 1
W(s) = -(1/2)|s|^3 + (5/2)|s|^2 - 4|s| + 2, for 1 < |s| ≤ 2
W(s) = 0, otherwise    (2)
when the second strong edge direction is the main direction of the third edge direction, referring to fig. 9, the original pixel point 6, the original pixel point 7, the original pixel point 8, and the original pixel point 9 of the pixel points in the main direction of the third edge direction, as can be seen from fig. 8, the distance between the original pixel point 6 and the interpolation pixel point to be processed is 3/2, the distance between the original pixel point 7 and the interpolation pixel point to be processed is 1/2, the distance between the original pixel point 8 and the interpolation pixel point to be processed is 1/2, and the distance between the original pixel point 9 and the interpolation pixel point to be processed is 3/2. Since the distance between the original pixel point 6 and the interpolation pixel point to be processed is 3/2, the calculation in the above formula (2) should be adopted, and at this time, when s is 3/2, the interpolation coefficient corresponding to the original pixel point 6 is-1/16; since the distance between the original pixel point 7 and the interpolation pixel point to be processed is 1/2, the above formula (2) should be adopted
Figure BDA0000867972030000381
Calculating, and substituting s-1/2 into the formula at this time to obtain an interpolation coefficient 9/16 corresponding to the original pixel point 7; since the distance between the original pixel 8 and the interpolation pixel to be processed is 1/2, the calculation in the above formula (2) should be adopted, and at this time, when s is 1/2, the interpolation coefficient corresponding to the original pixel 8 is 9/16; since the distance between the original pixel point 9 and the interpolation pixel point to be processed is 3/2, the calculation in the above formula (2) should be adopted, and at this time, the interpolation coefficient corresponding to the original pixel point 9 is-1/16 by substituting s-3/2 into the formula. Based on the determined second intensityThe receiving end can obtain the actual pixel value of the interpolation pixel point to be processed as value3 ═ 1/16 × p6+9/16 × p7+9/16 × p8-1/16 × p 9.
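Formula (2) and the coefficients just derived can be checked with a short sketch (the function and variable names are hypothetical; the kernel transcribes the two branches of formula (2)):

```python
def bicubic_w(s):
    """Interpolation coefficient of formula (2) for a pixel at distance s."""
    s = abs(s)
    if s <= 1:
        return 1.5 * s**3 - 2.5 * s**2 + 1
    if s <= 2:
        return -0.5 * s**3 + 2.5 * s**2 - 4 * s + 2
    return 0.0

# Distances of original pixel points 6, 7, 8, 9 to the point to be processed:
coeffs = [bicubic_w(s) for s in (1.5, 0.5, 0.5, 1.5)]
print(coeffs)  # [-0.0625, 0.5625, 0.5625, -0.0625], i.e. -1/16, 9/16, 9/16, -1/16

def directional_value(pixels, dists):
    """value3-style weighted sum of the pixel values along the strong edge direction."""
    return sum(bicubic_w(s) * p for p, s in zip(pixels, dists))
```

Note that the four coefficients sum to 1, so a constant region is reproduced exactly by the directional interpolation.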
When the second strong edge direction is the main direction of the fourth edge direction, the distance between the interpolation pixel point 0 and the interpolation pixel point to be processed is 3/2, the distance between the interpolation pixel point 4 and the interpolation pixel point to be processed is 1/2, the distance between the interpolation pixel point 11 and the interpolation pixel point to be processed is 1/2, and the distance between the interpolation pixel point 15 and the interpolation pixel point to be processed is 3/2. Substituting s = 3/2 into formula (2), the interpolation coefficient corresponding to the interpolation pixel point 0 is -1/16; substituting s = 1/2 into formula (2), the interpolation coefficients corresponding to the interpolation pixel point 4 and the interpolation pixel point 11 are both 9/16; substituting s = 3/2 into formula (2), the interpolation coefficient corresponding to the interpolation pixel point 15 is -1/16. Based on the determined interpolation pixel point 0, interpolation pixel point 4, interpolation pixel point 11, and interpolation pixel point 15 in the second strong edge direction and the corresponding coefficients, the receiving end can obtain the actual pixel value of the interpolation pixel point to be processed as value4 = -1/16 × p0 + 9/16 × p4 + 9/16 × p11 - 1/16 × p15.
When the interpolation pixel point to be processed is located at the lower left corner of the designated original pixel point, the manner of acquiring the interpolation coefficient corresponding to each pixel point in the direction of the second strong edge by the receiving end is the same as the manner of acquiring the coefficient corresponding to each pixel point in the direction of the second strong edge by the receiving end when the interpolation pixel point to be processed is located at the upper right corner of the designated original pixel point, and details are not repeated here.
40436. If the interpolation pixel point to be processed does not have a second strong edge direction, the receiving end obtains the pixel points in the main direction of the third edge direction and the main direction of the fourth edge direction from the plurality of pixel points, performs interpolation calculation respectively on the pixel points in the main direction of the third edge direction and the main direction of the fourth edge direction to obtain the interpolation calculation results in the two main directions, and performs weighting calculation on the interpolation calculation results in the main direction of the third edge direction and the main direction of the fourth edge direction to obtain the actual pixel value of the interpolation pixel point to be processed.
For the process of obtaining the pixel points located in the main direction of the third edge direction and the main direction of the fourth edge direction and performing interpolation calculation on them, refer to step 40435 described above; details are not described here again.
Based on the interpolation calculation results in the main direction of the third edge direction and the main direction of the fourth edge direction, the receiving end performs weighting operation on the interpolation calculation results in the main direction of the third edge direction and the main direction of the fourth edge direction, and the actual pixel value of the interpolation pixel point to be processed is obtained as follows:
P2=(w3*value3+w4*value4)/(w3+w4)
w3=1/(1+G3)^k
w4=1/(1+G4)^k
wherein P2 is the actual pixel value of the interpolation pixel point to be processed, w3 is the weight value corresponding to the interpolation calculation result in the main direction of the third edge direction, value3 is the interpolation calculation result in the main direction of the third edge direction, w4 is the weight value corresponding to the interpolation calculation result in the main direction of the fourth edge direction, value4 is the interpolation calculation result in the main direction of the fourth edge direction, and k is a designated parameter that can be obtained through testing; the value of k is set to 5 in this embodiment.
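The weighted combination can be sketched as follows. The original images of the weight formulas did not survive, so the weight form w = 1/(1+G)^k is an assumption, chosen to be consistent with the ratio (1+G3)/(1+G4) used in the strong-edge test and with the role of k described above:

```python
def blend(value3, value4, G3, G4, k=5):
    """Weighted combination of the two directional interpolation results.

    Assumes weights w = 1 / (1 + G)**k, so the direction with less
    intensity change (smaller G) contributes more to the result.
    """
    w3 = 1.0 / (1 + G3) ** k
    w4 = 1.0 / (1 + G4) ** k
    return (w3 * value3 + w4 * value4) / (w3 + w4)

print(blend(100.0, 200.0, 0, 0))  # 150.0 (equal intensities -> simple average)
```

When one intensity value is much larger than the other, the corresponding weight collapses toward 0 and the result approaches the interpolation along the flatter direction.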
It should be noted that the above description takes the first edge direction as the 45-degree direction, the second edge direction as the 135-degree direction, the third edge direction as the 0-degree direction, and the fourth edge direction as the 90-degree direction as an example. In practical applications, the first, second, third, and fourth edge directions may also be other directions; in that case, when the receiving end acquires the intensity values in those directions, it may perform the calculation based on the way the intensity values in the 0-degree, 45-degree, 90-degree, and 135-degree directions are acquired in this embodiment.
In addition, the method provided in this embodiment is described by taking as an example enlarging an original image by 2 times, randomly selecting one interpolation pixel point on the 2x enlarged image, and calculating the pixel value of that interpolation pixel point. In practical applications, the method provided in this embodiment is also suitable for calculating the actual pixel value of any interpolation pixel point on other enlarged images, such as a 4x enlarged image or an 8x enlarged image; the specific calculation principle is the same as the calculation method for any interpolation pixel point on the 2x enlarged image, and is not described herein again.
Referring to fig. 12, fig. 12 is a flow chart of an enlargement enhancement algorithm framework in network video chat. Suppose the receiving end needs to enlarge an image by 2 times. After receiving the video data, the receiving end inputs an enlargement ratio of 2.0 in its own operating system, arbitrarily selects an interpolation pixel point (i, j) to be processed on the enlarged image, and then calculates the actual pixel value of the interpolation pixel point to be processed in different ways according to the positional relationship between the interpolation pixel point to be processed and the designated original pixel point. If the interpolation pixel point to be processed is located at the upper left corner of the designated original pixel point, original pixel points around the interpolation pixel point to be processed are selected, and the pixel value of the original pixel point closest to the interpolation pixel point to be processed is taken as the actual pixel value of the interpolation pixel point to be processed; if the interpolation pixel point to be processed is located at the lower right corner of the designated original pixel point, the actual pixel value of the interpolation pixel point to be processed is calculated in the second calculation mode; if the interpolation pixel point to be processed is located at the upper right corner or the lower left corner of the designated original pixel point, the actual pixel value of the interpolation pixel point to be processed is calculated in the third calculation mode. Finally, sharpening enhancement is performed on the 2x enlarged image, and the enlarged image is displayed on a display screen.
Referring to fig. 13, fig. 13 is a flow chart of another enlargement enhancement algorithm framework in network video chat. Again suppose the receiving end needs to enlarge an image by 2 times. After receiving the video data, the receiving end inputs an enlargement ratio of 2.0 in its own operating system, arbitrarily selects an interpolation pixel point (i, j) to be processed on the enlarged image, and then calculates the actual pixel value of the interpolation pixel point to be processed in different ways according to the positional relationship between the interpolation pixel point to be processed and the designated original pixel point. If the interpolation pixel point to be processed is located at the upper left corner of the designated original pixel point, the actual pixel value of the interpolation pixel point to be processed is calculated by a bilinear interpolation algorithm; if the interpolation pixel point to be processed is located at the lower right corner of the designated original pixel point, the actual pixel value of the interpolation pixel point to be processed is calculated in the second calculation mode; if the interpolation pixel point to be processed is located at the upper right corner or the lower left corner of the designated original pixel point, the actual pixel value of the interpolation pixel point to be processed is calculated in the third calculation mode. Finally, sharpening enhancement is performed on the 2x enlarged image, and the enlarged image is displayed on a display screen.
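The per-pixel dispatch in figs. 12 and 13 can be sketched as follows (a hypothetical mapping: on a 2x enlarged image, coordinate parity is assumed to determine the quadrant relationship to the designated original pixel point, and the mode labels are illustrative):

```python
def calc_mode(i, j):
    """Choose the calculation mode for interpolation pixel (i, j) on a 2x enlarged image."""
    if i % 2 == 0 and j % 2 == 0:
        return 'first'    # upper left corner: nearest-original copy or bilinear interpolation
    if i % 2 == 1 and j % 2 == 1:
        return 'second'   # lower right corner: 45/135-degree edge analysis
    return 'third'        # upper right or lower left corner: 0/90-degree edge analysis
```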
In order to visually demonstrate the difference in quality between images processed by the method provided by this embodiment and images processed by the prior art, a comparison is made using several images.
Fig. 14 shows images processed by the prior art and by the method provided by this embodiment: the left image in fig. 14 is processed by the prior art, and the right image in fig. 14 is processed by the method provided by this embodiment. Comparing the two, it can be found that the method provided by this embodiment significantly improves the quality of the image, and the processed image is clearer.
Fig. 15 shows an image processed by the method provided in this embodiment, fig. 16 is an image enlarged by PhotoZoom using the S-Spline Max algorithm, and fig. 17 is an image enlarged by the bicubic interpolation algorithm in the GIMP software. Comparing fig. 15, 16, and 17: fig. 15 and fig. 16 have fewer jaggies, that is, the image processed by the method provided in this embodiment has an effect similar to that of the image processed by PhotoZoom; the jaggies in fig. 17 are significantly greater than those in fig. 15, illustrating that the image quality after enlargement by this embodiment is significantly better than that of the prior art.
Fig. 18 is an original before enlargement processing, fig. 19 is an image processed by the method provided in the present embodiment, fig. 20 is an image processed by photo Zoom, and fig. 21 is an image processed by bicubic interpolation. As can be seen from comparing fig. 18, 19, 20, and 21, fig. 19 and 20 have almost no jaggies, and the image quality is better, and the jaggies of fig. 21 are more noticeable.
As can be seen from the comparison, the image processed by the method provided by the embodiment has better quality, and not only can less jaggies on the image be amplified, but also the definition of the image can be improved.
According to the method provided by the embodiment of the invention, when the actual pixel value of each interpolation pixel point on the amplified image is calculated, the directionality between the interpolation pixel point and the original pixel point is comprehensively considered, and different calculation modes are adopted for calculation according to the position relation between each interpolation pixel point and the original pixel point, so that the definition of the image is obviously improved, and the quality of the processed image is better.
Referring to fig. 22, an embodiment of the present invention provides a schematic structural diagram of an image processing apparatus, where the apparatus includes:
an obtaining module 2201, configured to obtain a position coordinate of any interpolation pixel to be processed on the amplified image, where the interpolation pixel to be processed is a pixel generated when an assigned original pixel is amplified by a preset multiple;
a determining module 2202, configured to determine, according to the position coordinates, a position relationship between an interpolation pixel point to be processed and an appointed original pixel point;
a calculating module 2203, configured to calculate, according to a positional relationship between the to-be-processed interpolation pixel point and the designated original pixel point, an actual pixel value of the to-be-processed interpolation pixel point by using a calculation method corresponding to the positional relationship;
wherein the different positional relationships correspond to different calculation manners.
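The dispatch described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: it assumes 2x enlargement, so each interpolation pixel's region can be read off the parity of its coordinates on the enlarged image, and the region names and the parity-to-region mapping are assumptions made for the example.

```python
def position_region(ix, iy):
    """Classify a pixel (ix, iy) on the 2x-enlarged image by coordinate parity.

    Assumed mapping (illustrative only): even/even coordinates coincide with
    an original pixel ('first' region, handled by the first calculation mode),
    odd/odd is the diagonal position ('second' region), and the remaining
    horizontal/vertical positions form the 'third' region.
    """
    px, py = ix % 2, iy % 2
    if (px, py) == (0, 0):
        return 'first'    # coincides with a designated original pixel
    if (px, py) == (1, 1):
        return 'second'   # diagonal (e.g. lower-right) position
    return 'third'        # the two remaining positions
```

A driver would call `position_region` for each interpolation pixel and route it to the matching calculation unit.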
In another embodiment of the invention, the calculation module 2203 comprises:
the first calculating unit is configured to calculate the actual pixel value of the interpolation pixel point to be processed in a first calculation manner when the interpolation pixel point to be processed is located in the first position region of the designated original pixel point;
the second calculating unit is configured to calculate the actual pixel value of the interpolation pixel point to be processed in a second calculation manner when the interpolation pixel point to be processed is located in the second position region of the designated original pixel point;
and the third calculating unit is configured to calculate the actual pixel value of the interpolation pixel point to be processed in a third calculation manner when the interpolation pixel point to be processed is located in the third position region of the designated original pixel point.
In another embodiment of the present invention, the first calculating unit is configured to obtain the pixel value of the original pixel point closest to the interpolation pixel point to be processed, and use that pixel value as the actual pixel value of the interpolation pixel point to be processed; or,
the first calculating unit is configured to obtain the pixel values of the two original pixel points closest to the interpolation pixel point to be processed, and perform interpolation calculation on the pixel values of the two original pixel points by using a bilinear interpolation algorithm to obtain the actual pixel value of the interpolation pixel point to be processed.
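The two variants of the first calculation mode can be sketched as below. This is an illustrative fragment, not the patent's code: the neighbor search is omitted, and the two-point blend weights each pixel by the other pixel's distance (linear interpolation along the segment between the two nearest original pixels).

```python
def first_mode_value(neighbors):
    """neighbors: list of (distance, value) pairs for nearby original pixels.

    Returns (nearest_copy, two_point_blend), the two variants the first
    calculating unit may use.
    """
    neighbors = sorted(neighbors)     # closest neighbor first
    nearest = neighbors[0][1]         # variant 1: copy the nearest value
    (d1, v1), (d2, v2) = neighbors[0], neighbors[1]
    # Variant 2: blend the two nearest values, each weighted by the
    # other's distance, so the closer pixel contributes more.
    blended = (d2 * v1 + d1 * v2) / (d1 + d2) if d1 + d2 else v1
    return nearest, blended
```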
In another embodiment of the present invention, the second calculating unit is configured to obtain a first edge direction and a second edge direction of the interpolation pixel point to be processed; select a plurality of original pixel points on the enlarged image with the interpolation pixel point to be processed as the center; determine the intensity values of the interpolation pixel point to be processed in the first edge direction and the second edge direction according to the plurality of original pixel points; determine, according to the intensity values in the first edge direction and the second edge direction, whether the interpolation pixel point to be processed has a first strong edge direction; when the interpolation pixel point to be processed has a first strong edge direction, obtain the original pixel points located in the first strong edge direction from the plurality of original pixel points, and perform interpolation calculation on the original pixel points in the first strong edge direction to obtain the actual pixel value of the interpolation pixel point to be processed; when the interpolation pixel point to be processed does not have a first strong edge direction, obtain the original pixel points in the main direction of the first edge direction and the main direction of the second edge direction from the plurality of original pixel points, perform interpolation calculation on the original pixel points in each of the two main directions respectively to obtain the interpolation calculation results in the main direction of the first edge direction and the main direction of the second edge direction, and perform weighting calculation on the two interpolation calculation results to obtain the actual pixel value of the interpolation pixel point to be processed.
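The branching logic of the second calculation mode can be sketched as below. This is a simplified illustration under stated assumptions: plain averaging stands in for the directional interpolation, the ratio test against `t` replaces the patent's threshold comparison, and the inverse-gradient weight form `1 / (1 + g) ** k` as well as the values of `t` and `k` are assumptions, not taken from the patent.

```python
def second_mode_value(vals45, vals135, g45, g135, t=1.15, k=5.0):
    """Sketch of the second calculation mode for one interpolation pixel.

    vals45 / vals135: values of original pixels lying along the 45- and
    135-degree directions; g45 / g135: intensity values in those directions.
    """
    avg45 = sum(vals45) / len(vals45)     # directional result, 45 degrees
    avg135 = sum(vals135) / len(vals135)  # directional result, 135 degrees
    r = (1 + g45) / (1 + g135)
    if r > t:       # much stronger variation at 45 deg: edge runs at 135 deg
        return avg135
    if r < 1 / t:   # much stronger variation at 135 deg: edge runs at 45 deg
        return avg45
    # No strong edge: blend both directional results, down-weighting the
    # direction with the larger intensity (gradient) value.
    w45 = 1.0 / (1 + g45) ** k
    w135 = 1.0 / (1 + g135) ** k
    return (w45 * avg45 + w135 * avg135) / (w45 + w135)
```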
In another embodiment of the present invention, the second calculating unit is configured to obtain color values of two adjacent original pixel points in the first edge direction and the second edge direction, respectively; obtaining a difference value between color values of two adjacent original pixel points in a first edge direction and a second edge direction; acquiring an absolute value of a difference value between color values of two adjacent original pixel points in a first edge direction and a second edge direction; and taking the sum of the absolute values of the differences between the color values of two adjacent original pixels in the first edge direction as the intensity value in the first edge direction, and taking the sum of the absolute values of the differences between the color values of two adjacent original pixels in the second edge direction as the intensity value in the second edge direction.
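The per-direction intensity value described above is simply a sum of absolute color differences over adjacent pixel pairs lying along that direction; a minimal sketch:

```python
def directional_intensity(pairs):
    """pairs: (color_a, color_b) tuples for adjacent pixels along one
    edge direction. Returns the direction's intensity value: the sum of
    absolute differences between each pair of adjacent color values."""
    return sum(abs(a - b) for a, b in pairs)
```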
In another embodiment of the present invention, the second calculation unit is configured to obtain a ratio of intensity values in the first edge direction and the second edge direction; and judging whether the interpolation pixel point to be processed has a first strong edge direction or not according to the ratio of the intensity values in the first edge direction and the second edge direction.
In another embodiment of the present invention, the ratio of the intensity values in the first edge direction and the second edge direction obtained by the second calculating unit is:
R1=(1+G1)/(1+G2);
wherein R1 is the ratio of the intensity values in the first edge direction and the second edge direction, G1 is the intensity value in the first edge direction, and G2 is the intensity value in the second edge direction.
In another embodiment of the present invention, the second calculating unit is configured to determine that a first strong edge direction exists in the to-be-processed interpolation pixel when a ratio of intensity values in the first edge direction and the second edge direction is not equal to a first preset threshold; and when the ratio of the intensity values in the first edge direction and the second edge direction is equal to a first preset threshold value, determining that the interpolation pixel point to be processed does not have the first strong edge direction.
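Taken literally, the test above compares R1 = (1+G1)/(1+G2) against the preset threshold for inequality. A direct sketch follows (the default threshold of 1.0 is an assumption; a practical implementation would likely use a tolerance band rather than exact floating-point equality):

```python
def has_strong_edge(g1, g2, threshold=1.0):
    """Return (strong_edge_exists, r1), with r1 = (1 + g1) / (1 + g2).

    Per the text: a first strong edge direction exists when the ratio is
    not equal to the preset threshold, and does not exist when it is equal.
    """
    r1 = (1 + g1) / (1 + g2)
    return r1 != threshold, r1
```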
In another embodiment of the present invention, the second calculating unit performs weighting operation on the interpolation calculation results in the main direction of the first edge direction and the main direction of the second edge direction, and the obtained actual pixel value of the to-be-processed interpolation pixel point is:
P1=(w1*value1+w2*value2)/(w1+w2);
[Formula image BDA0000867972030000451: definitions of the weights w1 and w2 in terms of the intensity values and the specified parameter k]
wherein P1 is the actual pixel value of the interpolation pixel point to be processed, w1 is the weight value corresponding to the interpolation calculation result in the main direction of the first edge direction, value1 is the interpolation calculation result in the main direction of the first edge direction, w2 is the weight value corresponding to the interpolation calculation result in the main direction of the second edge direction, value2 is the interpolation calculation result in the main direction of the second edge direction, and k is a specified parameter.
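The weighting operation P1 = (w1*value1 + w2*value2)/(w1 + w2) is a normalized weighted mean; a minimal sketch follows. Note that the patent defines the weights via a formula image not reproduced in this text, so the inverse-gradient form mentioned in the docstring is an assumption, not the patent's formula.

```python
def blend(value1, value2, w1, w2):
    """Normalized weighted mean: P1 = (w1*value1 + w2*value2) / (w1 + w2).

    w1 and w2 are the per-direction weights; a common (assumed) choice is
    w = 1 / (1 + G) ** k for directional intensity G and parameter k.
    """
    return (w1 * value1 + w2 * value2) / (w1 + w2)
```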
In another embodiment of the present invention, the third calculating unit is configured to obtain a third edge direction and a fourth edge direction of the interpolation pixel point to be processed; select a plurality of pixel points on the enlarged image with the interpolation pixel point to be processed as the center, where the plurality of pixel points include original pixel points and interpolation pixel points; determine the intensity values of the interpolation pixel point to be processed in the third edge direction and the fourth edge direction according to the plurality of pixel points; determine, according to the intensity values in the third edge direction and the fourth edge direction, whether the interpolation pixel point to be processed has a second strong edge direction; when the interpolation pixel point to be processed has a second strong edge direction, obtain the pixel points in the second strong edge direction from the plurality of pixel points, and perform interpolation calculation on the pixel points in the second strong edge direction to obtain the actual pixel value of the interpolation pixel point to be processed; when the interpolation pixel point to be processed does not have a second strong edge direction, obtain the pixel points in the main direction of the third edge direction and the main direction of the fourth edge direction from the plurality of pixel points, perform interpolation calculation on the pixel points in each of the two main directions respectively to obtain the interpolation calculation results in the main direction of the third edge direction and the main direction of the fourth edge direction, and perform weighting calculation on the two interpolation calculation results to obtain the actual pixel value of the interpolation pixel point to be processed.
In another embodiment of the present invention, the third calculating unit is configured to obtain color values of two adjacent original pixel points and interpolated pixel points in a third edge direction and a fourth edge direction, respectively; obtaining a difference value between color values of two adjacent original pixel points and interpolation pixel points in a third edge direction and a fourth edge direction; acquiring the absolute value of the difference value between the color values of two adjacent original pixel points and interpolation pixel points in the third edge direction and the fourth edge direction; and taking the sum of the absolute values of the differences between the color values of the two adjacent original pixels and the interpolation pixel in the third edge direction as the intensity value in the third edge direction, and taking the sum of the absolute values of the differences between the color values of the two adjacent original pixels and the interpolation pixel in the fourth edge direction as the intensity value in the fourth edge direction.
In another embodiment of the present invention, the third calculation unit is configured to obtain a ratio of intensity values in a third edge direction and a fourth edge direction; and judging whether the interpolation pixel point to be processed has a second strong edge direction or not according to the ratio of the intensity values in the third edge direction and the fourth edge direction.
In another embodiment of the present invention, the ratio of the intensity values in the third edge direction and the fourth edge direction obtained by the third calculating unit is:
R2=(1+G3)/(1+G4);
wherein R2 is the ratio of the intensity values in the third edge direction and the fourth edge direction, G3 is the intensity value in the third edge direction, and G4 is the intensity value in the fourth edge direction.
In another embodiment of the present invention, the third calculating unit is configured to determine that a second strong edge direction exists in the to-be-processed interpolation pixel point when a ratio of intensity values in the third edge direction and the fourth edge direction is not equal to a second preset threshold; and when the ratio of the intensity values in the third edge direction and the fourth edge direction is equal to a second preset threshold value, determining that the interpolation pixel point to be processed does not have a second strong edge direction.
In another embodiment of the present invention, the third calculating unit is configured to, when original pixel points exist in the second strong edge direction among the plurality of pixel points, obtain the original pixel points in the second strong edge direction; and when the original pixel points do not exist in the second strong edge direction of the plurality of pixel points, obtaining the interpolation pixel points in the second strong edge direction.
In another embodiment of the present invention, the third calculating unit performs weighting operation on the interpolation calculation results in the main direction of the third edge direction and the main direction of the fourth edge direction, and the obtained actual pixel value of the interpolation pixel to be processed is:
P2=(w3*value3+w4*value4)/(w3+w4);
[Formula images BDA0000867972030000461 and BDA0000867972030000462: definitions of the weights w3 and w4 in terms of the intensity values and the specified parameter k]
wherein P2 is the actual pixel value of the interpolation pixel point to be processed, w3 is the weight value corresponding to the interpolation calculation result in the main direction of the third edge direction, value3 is the interpolation calculation result in the main direction of the third edge direction, w4 is the weight value corresponding to the interpolation calculation result in the main direction of the fourth edge direction, value4 is the interpolation calculation result in the main direction of the fourth edge direction, and k is a specified parameter.
In another embodiment of the present invention, the first position region is an upper left corner of the designated original pixel point, the second position region is a lower right corner of the designated original pixel point, and the third position region is an upper right corner or a lower left corner of the designated original pixel point.
In another embodiment of the present invention, the first edge direction is a 45 degree direction, the second edge direction is a 135 degree direction, the third edge direction is a 0 degree direction, and the fourth edge direction is a 90 degree direction.
In summary, when calculating the actual pixel value of each interpolation pixel point on the enlarged image, the device provided in the embodiment of the present invention takes into account the directionality between the interpolation pixel point and the original pixel points, and performs the calculation in a different manner according to the positional relationship between each interpolation pixel point and the designated original pixel point; the definition of the image is thereby significantly improved, and the processed image has better quality.
Referring to fig. 23, a schematic structural diagram of an image processing terminal according to an embodiment of the present invention is shown; the terminal can be used to implement the image processing method provided in the above embodiments. Specifically:
the terminal 2300 may include components such as an RF (Radio Frequency) circuit 110, a memory 120 including one or more computer-readable storage media, an input unit 130, a display unit 140, a sensor 150, an audio circuit 160, a WiFi (Wireless Fidelity) module 170, a processor 180 including one or more processing cores, and a power supply 190. Those skilled in the art will appreciate that the terminal structure shown in fig. 23 is not intended to be limiting and may include more or fewer components than shown, or some components may be combined, or a different arrangement of components. Wherein:
the RF circuit 110 may be used for receiving and transmitting signals during information transmission and reception or during a call, and in particular, receives downlink information from a base station and then sends the received downlink information to the one or more processors 180 for processing; in addition, data relating to uplink is transmitted to the base station. In general, the RF circuitry 110 includes, but is not limited to, an antenna, at least one Amplifier, a tuner, one or more oscillators, a Subscriber Identity Module (SIM) card, a transceiver, a coupler, an LNA (Low Noise Amplifier), a duplexer, and the like. In addition, the RF circuitry 110 may also communicate with networks and other devices via wireless communications. The wireless communication may use any communication standard or protocol, including but not limited to GSM (Global System for Mobile communications), GPRS (General Packet Radio Service), CDMA (Code Division Multiple Access), WCDMA (Wideband Code Division Multiple Access), LTE (Long Term Evolution), e-mail, SMS (short messaging Service), etc.
The memory 120 may be used to store software programs and modules, and the processor 180 executes various functional applications and data processing by running the software programs and modules stored in the memory 120. The memory 120 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the data storage area may store data (such as audio data, a phonebook, etc.) created according to the use of the terminal 2300. Further, the memory 120 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. Accordingly, the memory 120 may further include a memory controller to provide the processor 180 and the input unit 130 with access to the memory 120.
The input unit 130 may be used to receive input numeric or character information and generate keyboard, mouse, joystick, optical or trackball signal inputs related to user settings and function control. In particular, the input unit 130 may include a touch-sensitive surface 131 as well as other input devices 132. The touch-sensitive surface 131, also referred to as a touch display screen or a touch pad, may collect touch operations by a user on or near the touch-sensitive surface 131 (e.g., operations by a user on or near the touch-sensitive surface 131 using a finger, a stylus, or any other suitable object or attachment), and drive the corresponding connection device according to a predetermined program. Alternatively, the touch sensitive surface 131 may comprise two parts, a touch detection means and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 180, and can receive and execute commands sent by the processor 180. Additionally, the touch-sensitive surface 131 may be implemented using various types of resistive, capacitive, infrared, and surface acoustic waves. In addition to the touch-sensitive surface 131, the input unit 130 may also include other input devices 132. In particular, other input devices 132 may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and the like.
The display unit 140 may be used to display information input by or provided to a user and various graphic user interfaces of the terminal 2300, which may be configured of graphics, text, icons, video, and any combination thereof. The Display unit 140 may include a Display panel 141, and optionally, the Display panel 141 may be configured in the form of an LCD (Liquid Crystal Display), an OLED (Organic Light-Emitting Diode), or the like. Further, the touch-sensitive surface 131 may cover the display panel 141, and when a touch operation is detected on or near the touch-sensitive surface 131, the touch operation is transmitted to the processor 180 to determine the type of the touch event, and then the processor 180 provides a corresponding visual output on the display panel 141 according to the type of the touch event. Although in FIG. 23, touch-sensitive surface 131 and display panel 141 are shown as two separate components to implement input and output functions, in some embodiments, touch-sensitive surface 131 may be integrated with display panel 141 to implement input and output functions.
The terminal 2300 can also include at least one sensor 150, such as a light sensor, motion sensor, and other sensors. Specifically, the light sensor may include an ambient light sensor that adjusts the brightness of the display panel 141 according to the brightness of ambient light, and a proximity sensor that turns off the display panel 141 and/or the backlight when the terminal 2300 is moved to the ear. As one of the motion sensors, the gravity acceleration sensor can detect the magnitude of acceleration in each direction (generally, three axes), can detect the magnitude and direction of gravity when the mobile phone is stationary, and can be used for applications of recognizing the posture of the mobile phone (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), vibration recognition related functions (such as pedometer and tapping), and the like; as for other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which can be configured on the terminal 2300, detailed descriptions thereof are omitted.
The audio circuit 160, speaker 161, and microphone 162 may provide an audio interface between a user and the terminal 2300. The audio circuit 160 may convert received audio data into an electrical signal and transmit it to the speaker 161, which converts it into a sound signal for output; conversely, the microphone 162 converts a collected sound signal into an electrical signal, which the audio circuit 160 receives and converts into audio data; the audio data is then output to the processor 180 for processing and sent via the RF circuit 110 to, for example, another terminal, or output to the memory 120 for further processing. The audio circuit 160 may also include an earphone jack to allow a peripheral headset to communicate with the terminal 2300.
WiFi is a short-distance wireless transmission technology; through the WiFi module 170, the terminal 2300 can help the user send and receive e-mails, browse web pages, access streaming media, and the like, providing the user with wireless broadband Internet access. Although fig. 23 shows the WiFi module 170, it is understood that it is not an essential component of the terminal 2300 and may be omitted as needed without changing the essence of the invention.
The processor 180 is a control center of the terminal 2300, connects various parts of the entire handset using various interfaces and lines, and performs various functions of the terminal 2300 and processes data by operating or executing software programs and/or modules stored in the memory 120 and calling data stored in the memory 120, thereby monitoring the entire handset. Optionally, processor 180 may include one or more processing cores; optionally, the processor 180 may integrate an application processor and a modem processor, wherein the application processor mainly handles operating systems, user interfaces, application programs, and the like, and the modem processor mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 180.
The terminal 2300 also includes a power supply 190 (e.g., a battery) for powering the various components, and preferably, the power supply may be logically coupled to the processor 180 via a power management system to manage charging, discharging, and power consumption via the power management system. The power supply 190 may also include any component including one or more of a dc or ac power source, a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator, and the like.
Although not shown, the terminal 2300 may further include a camera, a bluetooth module, etc., which will not be described herein. In this embodiment, the display unit of the terminal 2300 is a touch screen display, and the terminal 2300 further includes a memory and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors. The one or more programs include instructions for:
acquiring the position coordinates of any interpolation pixel point to be processed on the enlarged image, wherein the interpolation pixel point to be processed is a pixel point generated when the designated original pixel point is enlarged by a preset multiple;
determining the positional relationship between the interpolation pixel point to be processed and the designated original pixel point according to the position coordinates;
calculating the actual pixel value of the interpolation pixel point to be processed by adopting a calculation manner corresponding to the positional relationship, according to the positional relationship between the interpolation pixel point to be processed and the designated original pixel point;
wherein the different positional relationships correspond to different calculation manners.
Assuming that the above is the first possible implementation manner, in a second possible implementation manner provided on the basis of the first possible implementation manner, the memory of the terminal further includes instructions for performing the following operations:
calculating the actual pixel value of the interpolation pixel point to be processed by adopting a calculation mode corresponding to the position relation according to the position relation between the interpolation pixel point to be processed and the appointed original pixel point, wherein the calculation mode comprises the following steps:
if the interpolation pixel point to be processed is located in the first position area of the appointed original pixel point, calculating the actual pixel value of the interpolation pixel point to be processed by adopting a first calculation mode;
if the interpolation pixel point to be processed is located in the second position area of the appointed original pixel point, calculating the actual pixel value of the interpolation pixel point to be processed by adopting a second calculation mode;
and if the interpolation pixel point to be processed is located in the third position area of the appointed original pixel point, calculating the actual pixel value of the interpolation pixel point to be processed by adopting a third calculation mode.
Assuming that the above is the second possible implementation manner, in a third possible implementation manner provided on the basis of the second possible implementation manner, the memory of the terminal further includes instructions for performing the following operations:
if the interpolation pixel point to be processed is located in the first position area of the appointed original pixel point, the actual pixel value of the interpolation pixel point to be processed is calculated by adopting a first calculation mode, and the method comprises the following steps:
acquiring a pixel value of an original pixel point closest to the interpolation pixel point to be processed, and taking the pixel value of the original pixel point as the actual pixel value of the interpolation pixel point to be processed; or,
and acquiring pixel values of two original pixel points which are closest to the interpolation pixel point to be processed, and performing interpolation calculation on the pixel values of the two original pixel points by adopting a bilinear interpolation algorithm to obtain an actual pixel value of the interpolation pixel point to be processed.
Assuming that the above is the third possible implementation manner, in a fourth possible implementation manner provided on the basis of the third possible implementation manner, the memory of the terminal further includes instructions for performing the following operations:
if the interpolation pixel point to be processed is located in the second position area of the appointed original pixel point, the actual pixel value of the interpolation pixel point to be processed is calculated by adopting a second calculation mode, and the method comprises the following steps:
acquiring a first edge direction and a second edge direction of an interpolation pixel point to be processed;
selecting a plurality of original pixel points on the enlarged image by taking an interpolation pixel point to be processed as a center;
determining the intensity values of the interpolation pixel point to be processed in the first edge direction and the second edge direction according to the plurality of original pixel points;
determining whether the interpolation pixel point to be processed has a first strong edge direction according to the intensity values of the interpolation pixel point to be processed in the first edge direction and the second edge direction;
if the interpolation pixel point to be processed has a first strong edge direction, acquiring an original pixel point positioned in the first strong edge direction from a plurality of original pixel points, and performing interpolation calculation on the original pixel point in the first strong edge direction to obtain an actual pixel value of the interpolation pixel point to be processed;
if the interpolation pixel point to be processed does not have the first strong edge direction, the original pixel points in the main direction of the first edge direction and the main direction of the second edge direction are obtained from the plurality of original pixel points, interpolation calculation is respectively carried out on the original pixel points in the main direction of the first edge direction and the main direction of the second edge direction, interpolation calculation results in the main direction of the first edge direction and the main direction of the second edge direction are obtained, weighting calculation is carried out on the interpolation calculation results in the main direction of the first edge direction and the main direction of the second edge direction, and the actual pixel value of the interpolation pixel point to be processed is obtained.
Assuming that the above is the fourth possible implementation manner, in a fifth possible implementation manner provided on the basis of the fourth possible implementation manner, the memory of the terminal further includes instructions for performing the following operations:
according to a plurality of original pixel points, determining the strength values of the interpolation pixel points to be processed in the first edge direction and the second edge direction, wherein the strength values comprise:
respectively obtaining color values of two adjacent original pixel points in a first edge direction and a second edge direction;
obtaining a difference value between color values of two adjacent original pixel points in a first edge direction and a second edge direction;
acquiring an absolute value of a difference value between color values of two adjacent original pixel points in a first edge direction and a second edge direction;
and taking the sum of the absolute values of the differences between the color values of two adjacent original pixels in the first edge direction as the intensity value in the first edge direction, and taking the sum of the absolute values of the differences between the color values of two adjacent original pixels in the second edge direction as the intensity value in the second edge direction.
Assuming that the above is the fifth possible implementation manner, in a sixth possible implementation manner provided on the basis of the fifth possible implementation manner, the memory of the terminal further includes instructions for performing the following operations:
according to the intensity values of the interpolation pixel points to be processed in the first edge direction and the second edge direction, whether the interpolation pixel points to be processed have the first strong edge direction is judged, and the method comprises the following steps:
obtaining the ratio of the intensity values in the first edge direction and the second edge direction;
and judging whether the interpolation pixel point to be processed has a first strong edge direction or not according to the ratio of the intensity values in the first edge direction and the second edge direction.
Assuming that the above is the sixth possible implementation manner, in a seventh possible implementation manner provided on the basis of the sixth possible implementation manner, the memory of the terminal further includes instructions for performing the following operations:
the ratio of the intensity values in the first edge direction and the second edge direction is obtained as:
R1 = (1 + G1) / (1 + G2);
wherein R1 is the ratio of the intensity values in the first edge direction and the second edge direction, G1 is the intensity value in the first edge direction, and G2 is the intensity value in the second edge direction.
Assuming that the above is the seventh possible implementation manner, in an eighth possible implementation manner provided on the basis of the seventh possible implementation manner, the memory of the terminal further includes instructions for performing the following operations:
according to the ratio of the intensity values in the first edge direction and the second edge direction, whether the interpolation pixel point to be processed has the first strong edge direction is judged, including:
if the ratio of the intensity values in the first edge direction and the second edge direction is not equal to a first preset threshold value, determining that a first strong edge direction exists in the interpolation pixel point to be processed;
and if the ratio of the intensity values in the first edge direction and the second edge direction is equal to a first preset threshold value, determining that the interpolation pixel point to be processed does not have a first strong edge direction.
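A minimal sketch of this ratio test, using the ratio formula R1 = (1 + G1)/(1 + G2) from the adjacent implementation manner; the threshold value below is illustrative, not taken from the patent:

```python
def has_strong_edge(g1, g2, threshold=1.0):
    """Decide whether a first strong edge direction exists from the two
    directional intensity values.

    g1, g2    -- intensity values in the first and second edge directions
    threshold -- first preset threshold (the value here is an assumption)
    """
    # R1 = (1 + G1) / (1 + G2); the +1 terms avoid division by zero
    r1 = (1 + g1) / (1 + g2)
    # per the eighth implementation manner, a strong edge direction exists
    # exactly when the ratio differs from the preset threshold
    return r1 != threshold
```

With threshold 1.0 this reduces to "the two directions differ in intensity": has_strong_edge(0, 900) is True, while has_strong_edge(5, 5) gives r1 = 1.0 and returns False.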
Assuming that the above is the eighth possible implementation manner, in a ninth possible implementation manner provided on the basis of the eighth possible implementation manner, the memory of the terminal further includes instructions for performing the following operations:
performing weighted operation on the interpolation calculation results in the main direction of the first edge direction and the main direction of the second edge direction, and obtaining the actual pixel value of the interpolation pixel point to be processed as follows:
P1=(w1*value1+w2*value2)/(w1+w2);
(the formulas for the weight values w1 and w2 are given as images in the original document)
wherein P1 is the actual pixel value of the interpolation pixel point to be processed, w1 is the weight value corresponding to the interpolation calculation result in the main direction of the first edge direction, value1 is the interpolation calculation result in the main direction of the first edge direction, w2 is the weight value corresponding to the interpolation calculation result in the main direction of the second edge direction, value2 is the interpolation calculation result in the main direction of the second edge direction, and k is a specified parameter.
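The weighting step can be sketched as follows. The formulas for w1 and w2 appear only as images in the source, so the weight definition below (w_i = 1/(1 + G_i)^k, which down-weights the direction with the larger intensity) is an illustrative assumption, not the patent's formula:

```python
def blend_interpolations(value1, value2, g1, g2, k=2):
    """Weighted combination of the two directional interpolation results,
    per P1 = (w1*value1 + w2*value2) / (w1 + w2).

    The weight formula is an assumption for illustration: a direction with
    larger intensity (stronger color change across it) gets a smaller weight.
    """
    w1 = 1.0 / (1.0 + g1) ** k
    w2 = 1.0 / (1.0 + g2) ** k
    return (w1 * value1 + w2 * value2) / (w1 + w2)
```

With equal intensities the weights are equal and the blend reduces to a plain average: blend_interpolations(100, 200, g1=0, g2=0) returns 150.0.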
Assuming that the above is the ninth possible implementation manner, in a tenth possible implementation manner provided on the basis of the ninth possible implementation manner, the memory of the terminal further includes instructions for performing the following operations:
and if the interpolation pixel point to be processed is located in the third position area of the appointed original pixel point, calculating the actual pixel value of the interpolation pixel point to be processed by adopting a third calculation mode, wherein the calculation mode comprises the following steps:
acquiring a third edge direction and a fourth edge direction of an interpolation pixel point to be processed;
selecting a plurality of pixel points on the enlarged image by taking an interpolation pixel point to be processed as a center, wherein the pixel points comprise an original pixel point and an interpolation pixel point;
determining the intensity values of the interpolation pixel points to be processed in the third edge direction and the fourth edge direction according to the plurality of pixel points;
judging whether the interpolation pixel point to be processed has a second strong edge direction or not according to the intensity values of the interpolation pixel point to be processed in the third edge direction and the fourth edge direction;
if the interpolation pixel point to be processed has a second strong edge direction, acquiring a pixel point in the second strong edge direction according to a plurality of pixel points, and performing interpolation calculation on the pixel point in the second strong edge direction to obtain an actual pixel value of the interpolation pixel point to be processed;
if the interpolation pixel point to be processed does not have the second strong edge direction, the pixel points in the main direction of the third edge direction and the pixel points in the main direction of the fourth edge direction are obtained according to the multiple pixel points, interpolation calculation is respectively carried out on the pixel points in the main direction of the third edge direction and the pixel points in the main direction of the fourth edge direction, the interpolation calculation results in the main direction of the third edge direction and the main direction of the fourth edge direction are obtained, weighting calculation is carried out on the interpolation calculation results in the main direction of the third edge direction and the main direction of the fourth edge direction, and the actual pixel value of the interpolation pixel point to be processed is obtained.
Assuming that the above is the tenth possible implementation manner, in an eleventh possible implementation manner provided on the basis of the tenth possible implementation manner, the memory of the terminal further includes instructions for performing the following operations:
according to a plurality of pixel points, determining the intensity values of the interpolation pixel points to be processed in the third edge direction and the fourth edge direction, including:
respectively obtaining color values of two adjacent original pixel points and interpolation pixel points in the third edge direction and the fourth edge direction;
obtaining a difference value between the color values of two adjacent original pixel points and interpolation pixel points in the third edge direction and the fourth edge direction;
acquiring an absolute value of the difference value between the color values of two adjacent original pixel points and interpolation pixel points in the third edge direction and the fourth edge direction;
and taking the sum of the absolute values of the differences between the color values of the two adjacent original pixel points and interpolation pixel points in the third edge direction as the intensity value in the third edge direction, and taking the sum of the absolute values of the differences between the color values of the two adjacent original pixel points and interpolation pixel points in the fourth edge direction as the intensity value in the fourth edge direction.
Assuming that the above is the eleventh possible implementation manner, in a twelfth possible implementation manner provided on the basis of the eleventh possible implementation manner, the memory of the terminal further includes instructions for performing the following operations:
according to the intensity values of the interpolation pixel points to be processed in the third edge direction and the fourth edge direction, whether the interpolation pixel points to be processed have the second strong edge direction or not is judged, and the method comprises the following steps:
obtaining the ratio of the intensity values in the third edge direction and the fourth edge direction;
and judging whether the interpolation pixel point to be processed has a second strong edge direction or not according to the ratio of the intensity values in the third edge direction and the fourth edge direction.
Assuming that the above is the twelfth possible implementation manner, in a thirteenth possible implementation manner provided on the basis of the twelfth possible implementation manner, the memory of the terminal further includes instructions for performing the following operations:
the ratio of the intensity values in the third edge direction and the fourth edge direction is obtained as:
R2 = (1 + G3) / (1 + G4);
wherein R2 is the ratio of the intensity values in the third edge direction and the fourth edge direction, G3 is the intensity value in the third edge direction, and G4 is the intensity value in the fourth edge direction.
Assuming that the above is the thirteenth possible implementation manner, in a fourteenth possible implementation manner provided on the basis of the thirteenth possible implementation manner, the memory of the terminal further includes instructions for performing the following operations:
according to the ratio of the intensity values in the third edge direction and the fourth edge direction, whether a second strong edge direction exists in the interpolation pixel point to be processed is determined, and the method comprises the following steps:
if the ratio of the intensity values in the third edge direction and the fourth edge direction is not equal to a second preset threshold value, determining that a second strong edge direction exists in the interpolation pixel point to be processed;
and if the ratio is equal to a second preset threshold, determining that the interpolation pixel point to be processed does not have a second strong edge direction.
Assuming that the foregoing is a fourteenth possible implementation manner, in a fifteenth possible implementation manner provided on the basis of the fourteenth possible implementation manner, the memory of the terminal further includes instructions for performing the following operations:
according to a plurality of pixel points, acquiring the pixel points in the second strong edge direction, including:
if the original pixel points exist in the second strong edge direction of the plurality of pixel points, the original pixel points in the second strong edge direction are obtained;
and if the original pixel points do not exist in the second strong edge direction of the plurality of pixel points, obtaining the interpolation pixel points in the second strong edge direction.
Assuming that the above is the fifteenth possible implementation manner, in a sixteenth possible implementation manner provided on the basis of the fifteenth possible implementation manner, the memory of the terminal further includes instructions for performing the following operations:
performing weighted operation on the interpolation calculation results in the main direction of the third edge direction and the main direction of the fourth edge direction, wherein the obtained actual pixel value of the interpolation pixel point to be processed is as follows:
P2=(w3*value3+w4*value4)/(w3+w4);
(the formulas for the weight values w3 and w4 are given as images in the original document)
wherein P2 is the actual pixel value of the interpolation pixel point to be processed, w3 is the weight value corresponding to the interpolation calculation result in the main direction of the third edge direction, value3 is the interpolation calculation result in the main direction of the third edge direction, w4 is the weight value corresponding to the interpolation calculation result in the main direction of the fourth edge direction, value4 is the interpolation calculation result in the main direction of the fourth edge direction, and k is a specified parameter.
Assuming that the above is the sixteenth possible implementation manner, in a seventeenth possible implementation manner provided on the basis of the sixteenth possible implementation manner, the memory of the terminal further includes instructions for performing the following operations:
the first position area is the upper left corner of the designated original pixel point, the second position area is the lower right corner of the designated original pixel point, and the third position area is the upper right corner or the lower left corner of the designated original pixel point.
Assuming that the above is a seventeenth possible implementation manner, in an eighteenth possible implementation manner provided on the basis of the seventeenth possible implementation manner, the memory of the terminal further includes instructions for performing the following operations:
the first edge direction is a 45 degree direction, the second edge direction is a 135 degree direction, the third edge direction is a 0 degree direction, and the fourth edge direction is a 90 degree direction.
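Putting the position regions and edge directions of the last two implementation manners together, the per-pixel dispatch can be sketched as follows; the region labels follow the seventeenth implementation manner, and the three calculation modes are stand-in callables:

```python
def dispatch_calculation(region, first_calc, second_calc, third_calc):
    """Pick the calculation mode for an interpolation pixel point from its
    position relative to the designated original pixel point."""
    if region == "upper_left":
        # first position area: nearest-pixel copy or bilinear interpolation
        return first_calc
    if region == "lower_right":
        # second position area: 45-degree / 135-degree edge analysis
        return second_calc
    if region in ("upper_right", "lower_left"):
        # third position area: 0-degree / 90-degree edge analysis
        return third_calc
    raise ValueError("unknown region: " + region)
```

For example, dispatch_calculation("lower_right", a, b, c) selects the second calculation mode b.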
According to the terminal provided by the embodiment of the invention, when the actual pixel value of each interpolation pixel point on the amplified image is calculated, the directionality between the interpolation pixel point and the original pixel point is comprehensively considered, and different calculation modes are adopted for calculation according to the position relation between each interpolation pixel point and the original pixel point, so that the definition of the image is obviously improved, and the quality of the processed image is better.
An embodiment of the present invention further provides a computer-readable storage medium, where the computer-readable storage medium may be a computer-readable storage medium contained in the memory in the foregoing embodiment; or it may be a separate computer-readable storage medium not incorporated in the terminal. The computer-readable storage medium stores one or more programs, the one or more programs being used by one or more processors to perform an image processing method, the method comprising:
acquiring the position coordinates of any interpolation pixel point to be processed on the amplified image, wherein the interpolation pixel point to be processed is a pixel point generated when the appointed original pixel point is amplified by a preset multiple;
determining the position relation between an interpolation pixel point to be processed and an appointed original pixel point according to the position coordinates;
calculating the actual pixel value of the interpolation pixel point to be processed by adopting a calculation mode corresponding to the position relation according to the position relation between the interpolation pixel point to be processed and the appointed original pixel point;
wherein the different positional relationships correspond to different calculation manners.
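One way the position relation can follow from the position coordinates, assuming a 2x magnification grid in which original pixel points land on even coordinates of the enlarged image (the preset multiple and grid layout are assumptions for this sketch, and the labels are illustrative rather than the patent's terms):

```python
def position_relation(x, y):
    """Classify an enlarged-image pixel relative to the designated original
    pixel point, assuming 2x magnification with originals at even (x, y)."""
    if x % 2 == 0 and y % 2 == 0:
        return "original"    # an original pixel point, not an interpolation point
    if x % 2 == 1 and y % 2 == 1:
        return "diagonal"    # diagonally between four original pixel points
    return "axis"            # horizontally or vertically between two originals
```

For instance, (4, 6) is an original position, (5, 7) sits diagonally between originals, and (5, 6) sits between two horizontal neighbors.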
Assuming that the above is the first possible implementation manner, in a second possible implementation manner provided on the basis of the first possible implementation manner, the memory of the terminal further includes instructions for performing the following operations:
calculating the actual pixel value of the interpolation pixel point to be processed by adopting a calculation mode corresponding to the position relation according to the position relation between the interpolation pixel point to be processed and the appointed original pixel point, wherein the calculation mode comprises the following steps:
if the interpolation pixel point to be processed is located in the first position area of the appointed original pixel point, calculating the actual pixel value of the interpolation pixel point to be processed by adopting a first calculation mode;
if the interpolation pixel point to be processed is located in the second position area of the appointed original pixel point, calculating the actual pixel value of the interpolation pixel point to be processed by adopting a second calculation mode;
and if the interpolation pixel point to be processed is located in the third position area of the appointed original pixel point, calculating the actual pixel value of the interpolation pixel point to be processed by adopting a third calculation mode.
Assuming that the above is the second possible implementation manner, in a third possible implementation manner provided on the basis of the second possible implementation manner, the memory of the terminal further includes instructions for performing the following operations:
if the interpolation pixel point to be processed is located in the first position area of the appointed original pixel point, the actual pixel value of the interpolation pixel point to be processed is calculated by adopting a first calculation mode, and the method comprises the following steps:
acquiring a pixel value of an original pixel point closest to the interpolation pixel point to be processed, and taking the pixel value of the original pixel point as the actual pixel value of the interpolation pixel point to be processed; or
and acquiring pixel values of two original pixel points which are closest to the interpolation pixel point to be processed, and performing interpolation calculation on the pixel values of the two original pixel points by adopting a bilinear interpolation algorithm to obtain an actual pixel value of the interpolation pixel point to be processed.
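A minimal sketch of the two options in the first calculation mode, assuming grayscale pixel values; note that interpolating between two pixel values is linear interpolation, the one-dimensional building block of the bilinear algorithm named above:

```python
def first_calc_nearest(p_nearest):
    """Option 1: copy the pixel value of the closest original pixel point."""
    return p_nearest

def first_calc_bilinear(p_a, p_b, t=0.5):
    """Option 2: interpolate between the two closest original pixel values;
    t is the normalized distance from p_a (0.5 = midpoint)."""
    return (1 - t) * p_a + t * p_b
```

At the midpoint between original values 100 and 200, option 2 yields 150.0, while option 1 simply repeats the nearer value.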
Assuming that the above is the third possible implementation manner, in a fourth possible implementation manner provided on the basis of the third possible implementation manner, the memory of the terminal further includes instructions for performing the following operations:
if the interpolation pixel point to be processed is located in the second position area of the appointed original pixel point, the actual pixel value of the interpolation pixel point to be processed is calculated by adopting a second calculation mode, and the method comprises the following steps:
acquiring a first edge direction and a second edge direction of an interpolation pixel point to be processed;
selecting a plurality of original pixel points on the enlarged image by taking an interpolation pixel point to be processed as a center;
determining the strength values of interpolation pixel points to be processed in a first edge direction and a second edge direction according to a plurality of original pixel points;
judging whether the interpolation pixel point to be processed has a first strong edge direction or not according to the strength values of the interpolation pixel point to be processed in the first edge direction and the second edge direction;
if the interpolation pixel point to be processed has a first strong edge direction, acquiring an original pixel point positioned in the first strong edge direction from a plurality of original pixel points, and performing interpolation calculation on the original pixel point in the first strong edge direction to obtain an actual pixel value of the interpolation pixel point to be processed;
if the interpolation pixel point to be processed does not have the first strong edge direction, the original pixel points in the main direction of the first edge direction and the main direction of the second edge direction are obtained from the plurality of original pixel points, interpolation calculation is respectively carried out on the original pixel points in the main direction of the first edge direction and the main direction of the second edge direction, interpolation calculation results in the main direction of the first edge direction and the main direction of the second edge direction are obtained, weighting calculation is carried out on the interpolation calculation results in the main direction of the first edge direction and the main direction of the second edge direction, and the actual pixel value of the interpolation pixel point to be processed is obtained.
Assuming that the above is the fourth possible implementation manner, in a fifth possible implementation manner provided on the basis of the fourth possible implementation manner, the memory of the terminal further includes instructions for performing the following operations:
according to a plurality of original pixel points, determining the strength values of the interpolation pixel points to be processed in the first edge direction and the second edge direction, wherein the strength values comprise:
respectively obtaining color values of two adjacent original pixel points in a first edge direction and a second edge direction;
obtaining a difference value between color values of two adjacent original pixel points in a first edge direction and a second edge direction;
acquiring an absolute value of a difference value between color values of two adjacent original pixel points in a first edge direction and a second edge direction;
and taking the sum of the absolute values of the differences between the color values of two adjacent original pixels in the first edge direction as the intensity value in the first edge direction, and taking the sum of the absolute values of the differences between the color values of two adjacent original pixels in the second edge direction as the intensity value in the second edge direction.
Assuming that the above is the fifth possible implementation manner, in a sixth possible implementation manner provided on the basis of the fifth possible implementation manner, the memory of the terminal further includes instructions for performing the following operations:
according to the intensity values of the interpolation pixel points to be processed in the first edge direction and the second edge direction, whether the interpolation pixel points to be processed have the first strong edge direction is judged, and the method comprises the following steps:
obtaining the ratio of the intensity values in the first edge direction and the second edge direction;
and judging whether the interpolation pixel point to be processed has a first strong edge direction or not according to the ratio of the intensity values in the first edge direction and the second edge direction.
Assuming that the above is the sixth possible implementation manner, in a seventh possible implementation manner provided on the basis of the sixth possible implementation manner, the memory of the terminal further includes instructions for performing the following operations:
the ratio of the intensity values in the first edge direction and the second edge direction is obtained as:
R1 = (1 + G1) / (1 + G2);
wherein R1 is the ratio of the intensity values in the first edge direction and the second edge direction, G1 is the intensity value in the first edge direction, and G2 is the intensity value in the second edge direction.
Assuming that the above is the seventh possible implementation manner, in an eighth possible implementation manner provided on the basis of the seventh possible implementation manner, the memory of the terminal further includes instructions for performing the following operations:
according to the ratio of the intensity values in the first edge direction and the second edge direction, whether the interpolation pixel point to be processed has the first strong edge direction is judged, including:
if the ratio of the intensity values in the first edge direction and the second edge direction is not equal to a first preset threshold value, determining that a first strong edge direction exists in the interpolation pixel point to be processed;
and if the ratio of the intensity values in the first edge direction and the second edge direction is equal to a first preset threshold value, determining that the interpolation pixel point to be processed does not have a first strong edge direction.
Assuming that the above is the eighth possible implementation manner, in a ninth possible implementation manner provided on the basis of the eighth possible implementation manner, the memory of the terminal further includes instructions for performing the following operations:
performing weighted operation on the interpolation calculation results in the main direction of the first edge direction and the main direction of the second edge direction, and obtaining the actual pixel value of the interpolation pixel point to be processed as follows:
P1=(w1*value1+w2*value2)/(w1+w2);
(the formulas for the weight values w1 and w2 are given as images in the original document)
wherein P1 is the actual pixel value of the interpolation pixel point to be processed, w1 is the weight value corresponding to the interpolation calculation result in the main direction of the first edge direction, value1 is the interpolation calculation result in the main direction of the first edge direction, w2 is the weight value corresponding to the interpolation calculation result in the main direction of the second edge direction, value2 is the interpolation calculation result in the main direction of the second edge direction, and k is a specified parameter.
Assuming that the above is the ninth possible implementation manner, in a tenth possible implementation manner provided on the basis of the ninth possible implementation manner, the memory of the terminal further includes instructions for performing the following operations:
and if the interpolation pixel point to be processed is located in the third position area of the appointed original pixel point, calculating the actual pixel value of the interpolation pixel point to be processed by adopting a third calculation mode, wherein the calculation mode comprises the following steps:
acquiring a third edge direction and a fourth edge direction of an interpolation pixel point to be processed;
selecting a plurality of pixel points on the enlarged image by taking an interpolation pixel point to be processed as a center, wherein the pixel points comprise an original pixel point and an interpolation pixel point;
determining the intensity values of the interpolation pixel points to be processed in the third edge direction and the fourth edge direction according to the plurality of pixel points;
judging whether the interpolation pixel point to be processed has a second strong edge direction or not according to the intensity values of the interpolation pixel point to be processed in the third edge direction and the fourth edge direction;
if the interpolation pixel point to be processed has a second strong edge direction, acquiring a pixel point in the second strong edge direction according to a plurality of pixel points, and performing interpolation calculation on the pixel point in the second strong edge direction to obtain an actual pixel value of the interpolation pixel point to be processed;
if the interpolation pixel point to be processed does not have the second strong edge direction, the pixel points in the main direction of the third edge direction and the pixel points in the main direction of the fourth edge direction are obtained according to the multiple pixel points, interpolation calculation is respectively carried out on the pixel points in the main direction of the third edge direction and the pixel points in the main direction of the fourth edge direction, the interpolation calculation results in the main direction of the third edge direction and the main direction of the fourth edge direction are obtained, weighting calculation is carried out on the interpolation calculation results in the main direction of the third edge direction and the main direction of the fourth edge direction, and the actual pixel value of the interpolation pixel point to be processed is obtained.
Assuming that the above is the tenth possible implementation manner, in an eleventh possible implementation manner provided on the basis of the tenth possible implementation manner, the memory of the terminal further includes instructions for performing the following operations:
according to a plurality of pixel points, determining the intensity values of the interpolation pixel points to be processed in the third edge direction and the fourth edge direction, including:
respectively obtaining color values of two adjacent original pixel points and interpolation pixel points in the third edge direction and the fourth edge direction;
obtaining a difference value between the color values of two adjacent original pixel points and interpolation pixel points in the third edge direction and the fourth edge direction;
acquiring an absolute value of the difference value between the color values of two adjacent original pixel points and interpolation pixel points in the third edge direction and the fourth edge direction;
and taking the sum of the absolute values of the differences between the color values of the two adjacent original pixel points and interpolation pixel points in the third edge direction as the intensity value in the third edge direction, and taking the sum of the absolute values of the differences between the color values of the two adjacent original pixel points and interpolation pixel points in the fourth edge direction as the intensity value in the fourth edge direction.
Assuming that the above is the eleventh possible implementation manner, in a twelfth possible implementation manner provided on the basis of the eleventh possible implementation manner, the memory of the terminal further includes instructions for performing the following operations:
according to the intensity values of the interpolation pixel points to be processed in the third edge direction and the fourth edge direction, whether the interpolation pixel points to be processed have the second strong edge direction or not is judged, and the method comprises the following steps:
obtaining the ratio of the intensity values in the third edge direction and the fourth edge direction;
and judging whether the interpolation pixel point to be processed has a second strong edge direction or not according to the ratio of the intensity values in the third edge direction and the fourth edge direction.
Assuming that the above is the twelfth possible implementation manner, in a thirteenth possible implementation manner provided on the basis of the twelfth possible implementation manner, the memory of the terminal further includes instructions for performing the following operations:
the ratio of the intensity values in the third edge direction and the fourth edge direction is obtained as:
R2 = (1 + G3) / (1 + G4);
wherein R2 is the ratio of the intensity values in the third edge direction and the fourth edge direction, G3 is the intensity value in the third edge direction, and G4 is the intensity value in the fourth edge direction.
Assuming that the above is the thirteenth possible implementation manner, in a fourteenth possible implementation manner provided on the basis of the thirteenth possible implementation manner, the memory of the terminal further includes instructions for performing the following operations:
according to the ratio of the intensity values in the third edge direction and the fourth edge direction, whether a second strong edge direction exists in the interpolation pixel point to be processed is determined, and the method comprises the following steps:
if the ratio of the intensity values in the third edge direction and the fourth edge direction is not equal to a second preset threshold value, determining that a second strong edge direction exists in the interpolation pixel point to be processed;
and if the ratio is equal to a second preset threshold, determining that the interpolation pixel point to be processed does not have a second strong edge direction.
Assuming that the foregoing is a fourteenth possible implementation manner, in a fifteenth possible implementation manner provided on the basis of the fourteenth possible implementation manner, the memory of the terminal further includes instructions for performing the following operations:
according to a plurality of pixel points, acquiring the pixel points in the second strong edge direction, including:
if the original pixel points exist in the second strong edge direction of the plurality of pixel points, the original pixel points in the second strong edge direction are obtained;
and if the original pixel points do not exist in the second strong edge direction of the plurality of pixel points, obtaining the interpolation pixel points in the second strong edge direction.
Assuming that the above is the fifteenth possible implementation manner, in a sixteenth possible implementation manner provided on the basis of the fifteenth possible implementation manner, the memory of the terminal further includes instructions for performing the following operations:
performing a weighting operation on the interpolation calculation results in the main direction of the third edge direction and the main direction of the fourth edge direction, the obtained actual pixel value of the interpolation pixel point to be processed being:
P2=(w3*value3+w4*value4)/(w3+w4);
(The formulas defining the weight values w3 and w4 appear only as images in the original publication; they express w3 and w4 in terms of the intensity values G3 and G4 and the specified parameter k.)
wherein P2 is the actual pixel value of the interpolation pixel point to be processed, w3 is the weight value corresponding to the interpolation calculation result in the main direction of the third edge direction, value3 is the interpolation calculation result in the main direction of the third edge direction, w4 is the weight value corresponding to the interpolation calculation result in the main direction of the fourth edge direction, value4 is the interpolation calculation result in the main direction of the fourth edge direction, and k is a specified parameter.
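The weighted blend can be sketched as follows; because the exact weight formulas are reproduced only as images in the original publication, the inverse-intensity weight form below (and the default value of k) is an assumption consistent with the stated roles of G3, G4 and k, not the patented formula:

```python
# Sketch of the weighted blend P2 = (w3*value3 + w4*value4) / (w3 + w4).
# The weight form 1/(1+G)^k is an assumed stand-in; it favours the
# direction with the lower intensity value (the smoother direction).

def blend(value3, value4, g3, g4, k=5.0):
    w3 = (1.0 / (1.0 + g3)) ** k  # assumed weight for the third edge direction
    w4 = (1.0 / (1.0 + g4)) ** k  # assumed weight for the fourth edge direction
    return (w3 * value3 + w4 * value4) / (w3 + w4)
```

With equal intensities the two interpolation results are averaged; as one direction's intensity grows, its contribution vanishes.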
Assuming that the above is the sixteenth possible implementation manner, in a seventeenth possible implementation manner provided on the basis of the sixteenth possible implementation manner, the memory of the terminal further includes instructions for performing the following operations:
the first position area is the upper left corner of the designated original pixel point, the second position area is the lower right corner of the designated original pixel point, and the third position area is the upper right corner or the lower left corner of the designated original pixel point.
Assuming that the above is a seventeenth possible implementation manner, in an eighteenth possible implementation manner provided on the basis of the seventeenth possible implementation manner, the memory of the terminal further includes instructions for performing the following operations:
the first edge direction is a 45 degree direction, the second edge direction is a 135 degree direction, the third edge direction is a 0 degree direction, and the fourth edge direction is a 90 degree direction.
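Assuming a 2x enlargement, the region classification above can be sketched by coordinate parity; the parity-to-region mapping is an illustrative assumption consistent with the corner descriptions, not stated by the patent:

```python
# Sketch: classify a pixel of a 2x-enlarged image relative to the
# designated original pixel. Even/even sites coincide with originals and
# fall to the first (copy/bilinear) mode; odd/odd diagonal sites use the
# 45/135 degree directions; mixed-parity axial sites use 0/90 degrees.

def position_region(x, y):
    if x % 2 == 0 and y % 2 == 0:
        return "first"   # at/nearest an original pixel
    if x % 2 == 1 and y % 2 == 1:
        return "second"  # diagonal site: first/second edge directions
    return "third"       # axial site: third/fourth edge directions
```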
When calculating the actual pixel value of each interpolation pixel point on the amplified image, the computer-readable storage medium provided by the embodiment of the invention comprehensively considers the directionality between the interpolation pixel points and the original pixel points, and adopts a different calculation mode according to the position relationship between each interpolation pixel point and the original pixel points, thereby significantly improving the definition of the image and ensuring better quality of the processed image.
The embodiment of the invention provides a graphical user interface, which is used on an image processing terminal, wherein the image processing terminal comprises a touch screen display, a memory and one or more processors for executing one or more programs; the graphical user interface includes:
acquiring the position coordinates of any interpolation pixel point to be processed on the amplified image, wherein the interpolation pixel point to be processed is a pixel point generated when the appointed original pixel point is amplified by a preset multiple;
determining the position relation between an interpolation pixel point to be processed and an appointed original pixel point according to the position coordinates;
calculating the actual pixel value of the interpolation pixel point to be processed by adopting a calculation mode corresponding to the position relation according to the position relation between the interpolation pixel point to be processed and the appointed original pixel point;
wherein the different positional relationships correspond to different calculation manners.
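The three steps above amount to a dispatch on the positional relationship; a minimal sketch, with the relation classifier and the three mode functions as hypothetical placeholders rather than the patent's concrete procedures:

```python
# Dispatch sketch: compute the actual pixel value of an interpolated
# pixel by selecting the calculation mode that matches its positional
# relationship to the designated original pixel.

def actual_pixel_value(x, y, relation, modes):
    # relation(x, y) -> one of "first", "second", "third"
    return modes[relation(x, y)](x, y)

modes = {
    "first":  lambda x, y: ("first mode", x, y),   # e.g. copy / bilinear
    "second": lambda x, y: ("second mode", x, y),  # 45/135 degree analysis
    "third":  lambda x, y: ("third mode", x, y),   # 0/90 degree analysis
}
```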
According to the graphical user interface provided by the embodiment of the invention, when the actual pixel value of each interpolation pixel point on the amplified image is calculated, the directionality between the interpolation pixel point and the original pixel point is comprehensively considered, and different calculation modes are adopted for calculation according to the position relation between each interpolation pixel point and the original pixel point, so that the definition of the image is obviously improved, and the quality of the processed image is better.
It should be noted that: in the image processing apparatus provided in the above embodiment, when processing an image, only the division of the above functional modules is taken as an example, and in practical applications, the above function allocation may be completed by different functional modules according to needs, that is, the internal structure of the image processing apparatus is divided into different functional modules to complete all or part of the above described functions. In addition, the image processing apparatus and the image processing method provided by the above embodiments belong to the same concept, and specific implementation processes thereof are described in the method embodiments in detail and are not described herein again.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (14)

1. An image processing method, characterized in that the method comprises:
acquiring the position coordinates of any interpolation pixel point to be processed on the amplified image, wherein the interpolation pixel point to be processed is a pixel point generated when an appointed original pixel point is amplified by a preset multiple;
determining the position relation between the interpolation pixel point to be processed and the appointed original pixel point according to the position coordinate;
if the interpolation pixel point to be processed is located in the first position area of the appointed original pixel point, calculating the actual pixel value of the interpolation pixel point to be processed by adopting a first calculation mode;
if the interpolation pixel point to be processed is located in the second position area of the appointed original pixel point, calculating the actual pixel value of the interpolation pixel point to be processed by adopting a second calculation mode;
if the interpolation pixel point to be processed is located in the third position area of the appointed original pixel point, calculating the actual pixel value of the interpolation pixel point to be processed by adopting a third calculation mode;
wherein the different positional relationships correspond to different calculation modes;
if the interpolation pixel point to be processed is located in the second position area of the specified original pixel point, calculating the actual pixel value of the interpolation pixel point to be processed by adopting a second calculation mode, wherein the calculation mode comprises the following steps:
acquiring a first edge direction and a second edge direction of the interpolation pixel point to be processed; selecting a plurality of original pixel points on the amplified image by taking the interpolation pixel points to be processed as the center; determining the strength values of the interpolation pixel points to be processed in the first edge direction and the second edge direction according to the original pixel points; judging whether the interpolation pixel point to be processed has a first strong edge direction or not according to the strength values of the interpolation pixel point to be processed in the first edge direction and the second edge direction;
if the interpolation pixel point to be processed has a first strong edge direction, acquiring an original pixel point positioned in the first strong edge direction from the plurality of original pixel points, and performing interpolation calculation on the original pixel point in the first strong edge direction to obtain an actual pixel value of the interpolation pixel point to be processed;
if the interpolation pixel point to be processed does not have a first strong edge direction, acquiring original pixel points positioned in the main direction of the first edge direction and the main direction of the second edge direction from a plurality of original pixel points, respectively carrying out interpolation calculation on the original pixel points in the main direction of the first edge direction and the main direction of the second edge direction, acquiring interpolation calculation results in the main direction of the first edge direction and the main direction of the second edge direction, and carrying out weighting operation on the interpolation calculation results in the main direction of the first edge direction and the main direction of the second edge direction to obtain an actual pixel value of the interpolation pixel point to be processed.
2. The method according to claim 1, wherein if the to-be-processed interpolation pixel is located in the first location area of the designated original pixel, calculating the actual pixel value of the to-be-processed interpolation pixel by using a first calculation method, includes:
acquiring a pixel value of an original pixel point closest to the interpolation pixel point to be processed, and taking the pixel value of the original pixel point as the actual pixel value of the interpolation pixel point to be processed; or
and acquiring pixel values of two original pixel points which are closest to the interpolation pixel point to be processed, and performing interpolation calculation on the pixel values of the two original pixel points by adopting a bilinear interpolation algorithm to obtain an actual pixel value of the interpolation pixel point to be processed.
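The two alternatives in this claim reduce to a copy and a linear blend; the sketch below uses the 1-D (linear) form of the bilinear case, and its names are illustrative, not from the patent:

```python
# First calculation mode, per the claim: either copy the nearest original
# pixel value, or interpolate between the two closest original pixel
# values. t is the fractional distance of the interpolated site from v0.

def first_mode_nearest(nearest_value):
    return nearest_value

def first_mode_bilinear(v0, v1, t):
    return (1.0 - t) * v0 + t * v1
```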
3. The method of claim 1, wherein determining the intensity values of the to-be-processed interpolated pixel points in the first edge direction and the second edge direction according to the plurality of original pixel points comprises:
respectively obtaining color values of two adjacent original pixel points in the first edge direction and the second edge direction;
obtaining a difference value between color values of two adjacent original pixel points in the first edge direction and the second edge direction;
acquiring an absolute value of a difference value between color values of two adjacent original pixel points in the first edge direction and the second edge direction;
and taking the sum of absolute values of differences between the color values of two adjacent original pixel points in the first edge direction as the intensity value in the first edge direction, and taking the sum of absolute values of differences between the color values of two adjacent original pixel points in the second edge direction as the intensity value in the second edge direction.
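The intensity computation in this claim can be sketched as a sum of absolute colour differences over adjacent original-pixel pairs sampled along each direction; supplying the pair lists is left to the caller here, which is an illustrative simplification:

```python
# Edge-direction intensity, per the claim: for each direction, take the
# colour values of each pair of adjacent original pixels along that
# direction, and sum the absolute differences.

def edge_intensity(adjacent_pairs):
    # adjacent_pairs: iterable of (a, b) colour values of two adjacent
    # original pixels sampled along the edge direction
    return sum(abs(a - b) for a, b in adjacent_pairs)
```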
4. The method according to claim 1, wherein if the to-be-processed interpolation pixel point is located in the third location area of the designated original pixel point, calculating the actual pixel value of the to-be-processed interpolation pixel point by using a third calculation method, including:
acquiring a third edge direction and a fourth edge direction of the interpolation pixel point to be processed;
selecting a plurality of pixel points on the amplified image by taking the interpolation pixel points to be processed as a center, wherein the pixel points comprise original pixel points and interpolation pixel points;
determining the intensity values of the interpolation pixel points to be processed in the third edge direction and the fourth edge direction according to the plurality of pixel points;
judging whether the interpolation pixel point to be processed has a second strong edge direction or not according to the intensity values of the interpolation pixel point to be processed in the third edge direction and the fourth edge direction;
if the interpolation pixel point to be processed has the second strong edge direction, acquiring the pixel point in the second strong edge direction according to the plurality of pixel points, and performing interpolation calculation on the pixel point in the second strong edge direction to obtain an actual pixel value of the interpolation pixel point to be processed;
if the interpolation pixel point to be processed does not have the second strong edge direction, acquiring pixel points in the main direction of the third edge direction and the main direction of the fourth edge direction according to the plurality of pixel points, respectively performing interpolation calculation on the pixel points in the main direction of the third edge direction and the main direction of the fourth edge direction, acquiring interpolation calculation results in the main direction of the third edge direction and the main direction of the fourth edge direction, and performing weighting operation on the interpolation calculation results in the main direction of the third edge direction and the main direction of the fourth edge direction to obtain an actual pixel value of the interpolation pixel point to be processed.
5. The method according to claim 4, wherein said obtaining the pixel point in the second strong edge direction according to the plurality of pixel points comprises:
if original pixel points exist in the second strong edge direction among the plurality of pixel points, acquiring the original pixel points in the second strong edge direction;
and if no original pixel points exist in the second strong edge direction among the plurality of pixel points, acquiring interpolation pixel points in the second strong edge direction.
6. The method according to claim 1 or 3, wherein the first position region is the upper left corner of the designated original pixel point, the second position region is the lower right corner of the designated original pixel point, and the third position region is the upper right corner or the lower left corner of the designated original pixel point;
the first edge direction is a 45 degree direction and the second edge direction is a 135 degree direction.
7. The method according to claim 4 or 5, wherein the third edge direction is a 0 degree direction and the fourth edge direction is a 90 degree direction.
8. An image processing apparatus, characterized in that the apparatus comprises:
the acquisition module is used for acquiring the position coordinates of any interpolation pixel point to be processed on the amplified image, wherein the interpolation pixel point to be processed is a pixel point generated when the appointed original pixel point is amplified by a preset multiple;
the determining module is used for determining the position relation between the interpolation pixel point to be processed and the appointed original pixel point according to the position coordinate;
the calculation module is used for calculating the actual pixel value of the interpolation pixel point to be processed by adopting a first calculation mode when the interpolation pixel point to be processed is positioned in the first position area of the appointed original pixel point;
the calculation module is further configured to calculate an actual pixel value of the interpolation pixel point to be processed in a second calculation manner when the interpolation pixel point to be processed is located in the second position region of the designated original pixel point;
the calculation module is further configured to calculate an actual pixel value of the to-be-processed interpolation pixel point by using a third calculation method when the to-be-processed interpolation pixel point is located in a third position region of the designated original pixel point;
wherein the different positional relationships correspond to different calculation modes;
when the interpolation pixel point to be processed is located in the second position region of the designated original pixel point, the calculation module is further configured to obtain a first edge direction and a second edge direction of the interpolation pixel point to be processed; selecting a plurality of original pixel points on the amplified image by taking the interpolation pixel points to be processed as the center; determining the strength values of the interpolation pixel points to be processed in the first edge direction and the second edge direction according to the original pixel points; judging whether the interpolation pixel point to be processed has a first strong edge direction or not according to the strength values of the interpolation pixel point to be processed in the first edge direction and the second edge direction; when the interpolation pixel point to be processed has a first strong edge direction, acquiring an original pixel point positioned in the first strong edge direction from the plurality of original pixel points, and performing interpolation calculation on the original pixel point in the first strong edge direction to obtain an actual pixel value of the interpolation pixel point to be processed; when the interpolation pixel point to be processed does not have a first strong edge direction, acquiring original pixel points positioned in the main direction of the first edge direction and the main direction of the second edge direction from a plurality of original pixel points, respectively performing interpolation calculation on the original pixel points in the main direction of the first edge direction and the main direction of the second edge direction, acquiring interpolation calculation results in the main direction of the first edge direction and the main direction of the second edge direction, and performing weighting operation on the interpolation calculation results in the main direction of the first edge direction and the 
main direction of the second edge direction to obtain an actual pixel value of the interpolation pixel point to be processed.
9. The apparatus according to claim 8, wherein the computing module is further configured to obtain a pixel value of an original pixel point closest to the to-be-processed interpolation pixel point, and use the pixel value of the original pixel point as the actual pixel value of the to-be-processed interpolation pixel point; or
the calculation module is further configured to obtain pixel values of two original pixel points closest to the interpolation pixel point to be processed, and perform interpolation calculation on the pixel values of the two original pixel points by using a bilinear interpolation algorithm to obtain an actual pixel value of the interpolation pixel point to be processed.
10. The apparatus of claim 8, wherein the computing module is further configured to obtain color values of two adjacent original pixel points in the first edge direction and the second edge direction, respectively; obtaining a difference value between color values of two adjacent original pixel points in the first edge direction and the second edge direction; acquiring an absolute value of a difference value between color values of two adjacent original pixel points in the first edge direction and the second edge direction; and taking the sum of absolute values of differences between the color values of two adjacent original pixel points in the first edge direction as the intensity value in the first edge direction, and taking the sum of absolute values of differences between the color values of two adjacent original pixel points in the second edge direction as the intensity value in the second edge direction.
11. The apparatus according to claim 8, wherein when the to-be-processed interpolation pixel is located in a third location area of the designated original pixel, the calculation module is further configured to obtain a third edge direction and a fourth edge direction of the to-be-processed interpolation pixel; selecting a plurality of pixel points on the amplified image by taking the interpolation pixel points to be processed as a center, wherein the pixel points comprise original pixel points and interpolation pixel points; determining the intensity values of the interpolation pixel points to be processed in the third edge direction and the fourth edge direction according to the plurality of pixel points; judging whether the interpolation pixel point to be processed has a second strong edge direction or not according to the intensity values of the interpolation pixel point to be processed in the third edge direction and the fourth edge direction; when the interpolation pixel point to be processed has the second strong edge direction, acquiring the pixel point in the second strong edge direction according to the plurality of pixel points, and performing interpolation calculation on the pixel point in the second strong edge direction to obtain an actual pixel value of the interpolation pixel point to be processed; when the interpolation pixel point to be processed does not have the second strong edge direction, acquiring pixel points in the main direction of the third edge direction and the main direction of the fourth edge direction according to the plurality of pixel points, respectively performing interpolation calculation on the pixel points in the main direction of the third edge direction and the main direction of the fourth edge direction, acquiring interpolation calculation results in the main direction of the third edge direction and the main direction of the fourth edge direction, and performing weighting operation on the interpolation calculation results 
in the main direction of the third edge direction and the main direction of the fourth edge direction to obtain an actual pixel value of the interpolation pixel point to be processed.
12. The apparatus according to claim 11, wherein the computing module is further configured to obtain an original pixel point in the second strong edge direction when the original pixel points exist in the second strong edge direction for the plurality of pixel points; and when the plurality of pixel points do not have original pixel points in the second strong edge direction, obtaining interpolation pixel points in the second strong edge direction.
13. A terminal, characterized in that it comprises a processor and a memory, in which at least one program code is stored, which is loaded and executed by the processor to implement the image processing method according to any one of claims 1 to 7.
14. A computer-readable storage medium having stored therein at least one program code, the at least one program code being loaded and executed by a processor to implement the image processing method according to any one of claims 1 to 7.
CN201510884587.1A 2015-12-04 2015-12-04 Image processing method and device Active CN105447819B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510884587.1A CN105447819B (en) 2015-12-04 2015-12-04 Image processing method and device


Publications (2)

Publication Number Publication Date
CN105447819A CN105447819A (en) 2016-03-30
CN105447819B true CN105447819B (en) 2020-01-10

Family

ID=55557953

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510884587.1A Active CN105447819B (en) 2015-12-04 2015-12-04 Image processing method and device

Country Status (1)

Country Link
CN (1) CN105447819B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106204444B (en) * 2016-06-29 2019-06-25 青岛海信移动通信技术股份有限公司 A kind of method and apparatus of image amplification
CN106373090B (en) * 2016-08-31 2019-11-15 广州视睿电子科技有限公司 Image processing method and device
CN106780383B (en) * 2016-12-13 2019-05-24 长春理工大学 The depth image enhancement method of TOF camera
CN107610052A (en) * 2017-09-22 2018-01-19 维沃移动通信有限公司 A kind of super resolution ratio reconstruction method and mobile terminal
CN108200470A (en) * 2017-12-20 2018-06-22 中国联合网络通信集团有限公司 A kind of video code rate method of adjustment and equipment
CN108401125B (en) * 2018-02-28 2020-04-21 京东方科技集团股份有限公司 Video data processing method, device and storage medium
CN111105358B (en) * 2018-10-26 2023-06-09 阿里巴巴集团控股有限公司 Image processing method and device
CN110363723B (en) * 2019-07-16 2021-06-29 安健科技(广东)有限公司 Image processing method and device for improving image boundary effect
CN113808012A (en) * 2020-06-17 2021-12-17 京东方科技集团股份有限公司 Image processing method, computer device, and computer-readable storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101272491A (en) * 2008-04-10 2008-09-24 西安电子科技大学 Image interpolation method based on video object and area guidance
CN102609900A (en) * 2012-01-16 2012-07-25 北京环星基业数码科技发展有限公司 Image interpolation algorithm based on region consistency
CN102629366A (en) * 2012-03-05 2012-08-08 许丹 Image processing method and apparatus thereof
CN103646379A (en) * 2013-11-28 2014-03-19 青岛海信信芯科技有限公司 A method and an apparatus for amplifying images


Also Published As

Publication number Publication date
CN105447819A (en) 2016-03-30

Similar Documents

Publication Publication Date Title
CN105447819B (en) Image processing method and device
CN110136183B (en) Image processing method and device and camera device
US10497097B2 (en) Image processing method and device, computer readable storage medium and electronic device
KR101859312B1 (en) Image processing method and apparatus, and computer device
CN107093418B (en) Screen display method, computer equipment and storage medium
CN109863742B (en) Image processing method and terminal device
CN106412691B (en) Video image intercepting method and device
TWI546775B (en) Image processing method and device
WO2022100048A1 (en) Image processing method and apparatus, electronic device, and readable storage medium
CN112055875B (en) Partial image frame update system and method for electronic display
WO2016173350A1 (en) Picture processing method and device
CN105989572B (en) Picture processing method and device
CN106296634B (en) A kind of method and apparatus detecting similar image
CN107895352A (en) A kind of image processing method and mobile terminal
CN112532958B (en) Image processing method, device, electronic equipment and readable storage medium
CN107734281A (en) A kind of image magnification method and mobile terminal
CN108038834A (en) A kind of method, terminal and computer-readable recording medium for reducing noise
CN106296640B (en) A kind of method and apparatus identifying blurred picture
WO2022142876A1 (en) Image processing method and apparatus, electronic device and storage medium
CN112330564B (en) Image processing method, device, electronic equipment and readable storage medium
CN108564539B (en) Method and device for displaying image
CN112905132A (en) Screen projection method and equipment
CN105187831B (en) A kind of method and apparatus that image is compressed
CN107944438B (en) Document image correction method and mobile terminal
CN116092434B (en) Dimming method, dimming device, electronic equipment and computer readable storage medium

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant