CN117726557A - Image processing method, electronic device and storage medium


Info

Publication number
CN117726557A
CN117726557A
Authority
CN
China
Prior art keywords
interpolated, pixel point, pixel, image, point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310558777.9A
Other languages
Chinese (zh)
Inventor
雷财华
向超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Honor Device Co Ltd
Original Assignee
Honor Device Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Honor Device Co Ltd filed Critical Honor Device Co Ltd
Priority to CN202310558777.9A
Publication of CN117726557A
Legal status: Pending


Landscapes

  • Image Processing (AREA)

Abstract

The application provides an image processing method, an electronic device and a storage medium, and relates to the field of images. The method comprises the following steps: determining the edge direction of each pixel to be interpolated according to the gradient direction of each pixel to be interpolated corresponding to the original image; determining the gray value of each pixel to be interpolated; and interpolating, along the edge direction of each pixel to be interpolated, based on the gray value of that pixel, to obtain a target image. In this method, the edge direction and the gray value of each pixel to be interpolated are determined according to its gradient direction with respect to the original image, so the edge characteristics of the image are taken into account. Interpolating along the edge direction of each pixel to be interpolated based on its gray value allows the processed target image to retain more image detail and to reflect the true contour information of the image, which improves the quality of the processed image and keeps it clear and lossless.

Description

Image processing method, electronic device and storage medium
Technical Field
The present disclosure relates to the field of images, and in particular, to an image processing method, an electronic device, and a storage medium.
Background
With the emergence and wide application of digital images, people's requirements on image quality have also increased, and improving image resolution in different application scenarios has become one of the hot topics in the field of image processing.
Currently, when an image is scaled, a conventional image processing method, such as bilinear interpolation, is generally used. However, an image processed with such a conventional method easily becomes blurred, which seriously affects the quality of the image.
Disclosure of Invention
The image processing method, the electronic device and the storage medium provided by the application take the edge characteristics of the image into account while processing the image, so that the processed target image retains more image detail and reflects the true contour information of the image, which improves the quality of the processed image and ensures that it is clear and lossless.
In a first aspect, the present application provides an image processing method, including: determining the edge direction of each pixel to be interpolated according to the gradient direction of each pixel to be interpolated corresponding to the original image, and determining the gray value of each pixel to be interpolated; and interpolating, along the edge direction of each pixel to be interpolated, based on the gray value of that pixel, to obtain a target image corresponding to the original image.
Wherein the edge direction is perpendicular to the gradient direction.
The original image may be an RGB image, a YUV image, or a gray scale image.
The gradient direction of a pixel to be interpolated represents the direction in which its gray value changes. For a pixel to be interpolated, the gray value changes fastest along the gradient direction and slowest along the edge direction. Therefore, if interpolation is performed along the edge direction, the interpolated image has higher continuity and a smaller interpolation loss, so more image detail is preserved and the final target image is clear, lossless and of high quality.
It should be understood that, when interpolation is performed along the edge direction of each pixel to be interpolated based on its gray value, different image processing requirements correspond to different interpolation modes.
Optionally, when the image needs to be enlarged, up-sampling interpolation is performed along the edge direction of each pixel to be interpolated based on its gray value, so as to obtain a target image corresponding to the original image.
Optionally, when the image needs to be reduced, down-sampling interpolation is performed along the edge direction of each pixel to be interpolated based on its gray value, so as to obtain a target image corresponding to the original image.
According to the image processing method provided by the first aspect, the edge direction and the gray value of each pixel to be interpolated are determined according to the gradient direction of each pixel to be interpolated corresponding to the original image. Because the gradient direction is perpendicular to the edge direction, the gray value changes fastest along the gradient direction and slowest along the edge direction. On this basis, interpolating along the edge direction of each pixel to be interpolated based on its gray value improves the continuity of the gray-value change of the interpolated image, reduces the interpolation loss, and therefore preserves more image detail in the processed target image.
Because the edge characteristics of the image are taken into account during processing, the processed target image reflects the true contour information of the image, which improves the quality of the processed image and ensures that it is clear and lossless. Compared with the related art, the method effectively avoids striping (or jagged) artifacts in the processed image.
In a possible implementation manner, the image processing method provided by the application further includes:
And determining the gradient direction of each pixel point to be interpolated corresponding to the original image.
In the implementation manner, the gradient direction of each pixel point to be interpolated corresponding to the original image is determined, so that the edge direction can be quickly and accurately determined according to the gradient direction.
In a possible implementation manner, determining a gradient direction of each pixel point to be interpolated corresponding to the original image includes: determining a gray gradient component of each pixel point to be interpolated; vector synthesis is carried out on a plurality of gray gradient components of each pixel point to be interpolated, and gray gradient of each pixel point to be interpolated is obtained.
The gray gradient component indicates a gray gradient of the pixel to be interpolated in a first direction, wherein the first direction is a direction in which adjacent pixels of the pixel to be interpolated point to the pixel to be interpolated, and the gray gradient comprises a gradient direction.
In this implementation, the gray gradient of a pixel to be interpolated is obtained by vector synthesis of its multiple gray-gradient components. Because the adjacent pixels are distributed in different directions, vector synthesis of the multiple gray-gradient components takes into account the gray-change characteristics of the pixel to be interpolated in different directions, so the determined gradient direction of the pixel is more accurate, and the edge direction of the pixel can in turn be determined accurately from that gradient direction.
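As a rough illustration of this vector-synthesis step, the following Python sketch (the function name, the 4-neighbour layout and the use of unit direction vectors are illustrative assumptions, not taken from this application) sums the gray-gradient components computed toward each adjacent pixel into a single gray gradient:

```python
import numpy as np

def synthesize_gradient(init_gray, neighbor_grays, neighbor_offsets):
    """Vector-sum the gray-gradient components of a pixel to be interpolated.

    init_gray        : initial gray value of the pixel to be interpolated
    neighbor_grays   : gray values of its adjacent pixels
    neighbor_offsets : (dx, dy) offsets from each adjacent pixel to the pixel
                       to be interpolated (each offset defines a "first direction")
    Returns the synthesized gray gradient as a vector (gx, gy).
    """
    grad = np.zeros(2)
    for gray, (dx, dy) in zip(neighbor_grays, neighbor_offsets):
        direction = np.array([dx, dy], dtype=float)
        direction /= np.linalg.norm(direction)       # unit vector of the first direction
        component = (init_gray - gray) * direction   # gray change along that direction
        grad += component                            # vector synthesis
    return grad

# Example: four diagonal neighbours of a pixel inserted between grid points.
g = synthesize_gradient(
    init_gray=120.0,
    neighbor_grays=[100.0, 110.0, 130.0, 140.0],
    neighbor_offsets=[(0.5, 0.5), (-0.5, 0.5), (0.5, -0.5), (-0.5, -0.5)],
)
print(g)  # synthesized gray gradient (gx, gy)
```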
In a possible implementation manner, determining a gradient direction of each pixel point to be interpolated corresponding to the original image includes: determining a weight value between each pixel point to be interpolated and each adjacent pixel point, determining a gray gradient component of each pixel point to be interpolated according to each weight value, and carrying out vector synthesis on a plurality of gray gradient components of each pixel point to be interpolated to obtain the gray gradient of each pixel point to be interpolated.
In this implementation, when the gray gradient of a pixel to be interpolated is determined, not only are the gray-change characteristics of its adjacent pixels in different directions considered, but the gray values of the adjacent pixels are also adjusted by the weight values corresponding to the different adjacent pixels, so the determined gray-gradient components are more accurate, and the gradient direction synthesized from those components is also more accurate.
In a possible implementation manner, determining a gray gradient component of each pixel point to be interpolated includes:
for each pixel point to be interpolated, determining a plurality of adjacent pixel points adjacent to the pixel point to be interpolated in the original image and a gray value of each adjacent pixel point; and calculating a plurality of gray gradient components of the pixel point to be interpolated based on the initial gray value of the pixel point to be interpolated and the gray values of a plurality of adjacent pixel points.
In this implementation, the gray-gradient components of a pixel to be interpolated are determined from the multiple adjacent pixels of that pixel and the gray value of each adjacent pixel. Because this process considers the gray-change characteristics of the adjacent pixels in different directions, the determined gray-gradient components are more accurate, and the gradient direction of the pixel to be interpolated can be accurately synthesized from them.
In a possible implementation manner, determining a gray value of each pixel point to be interpolated includes: determining an edge direction angle of each pixel point to be interpolated; and calculating the gray value of each pixel point to be interpolated according to the gray value of each adjacent pixel point, the edge direction of each pixel point to be interpolated and the edge direction angle of each pixel point to be interpolated.
The edge direction angle is an included angle formed by an x-axis and a straight line where the edge direction of the pixel point to be interpolated is located, and the x-axis direction is used as a standard direction.
Optionally, any included angle formed by the x-axis and the straight line along the edge direction of the pixel to be interpolated may be used as the edge direction angle, as long as the slope of that straight line can be determined from it.
Optionally, the determining the edge direction angle of each pixel to be interpolated may be determining a gradient direction angle of each pixel to be interpolated, and determining the edge direction angle of each pixel to be interpolated according to the gradient direction angle of each pixel to be interpolated.
The gradient direction angle is the included angle between the gradient direction of the pixel to be interpolated and the horizontal direction.
In this implementation, the gray value of each pixel to be interpolated is calculated accurately from the gray value of each adjacent pixel, the edge direction of the pixel to be interpolated and its edge direction angle. Therefore, when interpolation is performed based on this gray value, the interpolated image has higher continuity and a smaller interpolation loss, and more image detail is preserved, so that the final target image is clear, lossless and of high quality.
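A minimal sketch of how the edge direction angle could be derived from the synthesized gradient, assuming the angle is measured against the x-axis and the edge direction is taken perpendicular to the gradient direction (the wrapping convention is an illustrative choice):

```python
import numpy as np

def edge_direction_angle(gx, gy):
    """Derive the edge direction angle from a synthesized gradient (gx, gy).

    The gradient direction angle is measured against the x-axis; since the
    edge direction is perpendicular to the gradient direction, the edge
    direction angle is the gradient direction angle rotated by 90 degrees.
    """
    gradient_angle = np.arctan2(gy, gx)        # gradient direction angle
    edge_angle = gradient_angle + np.pi / 2.0  # edge direction is perpendicular
    # Wrap into (-pi/2, pi/2]; only the slope of the edge line matters.
    if edge_angle > np.pi / 2.0:
        edge_angle -= np.pi
    return edge_angle

print(np.degrees(edge_direction_angle(1.0, 1.0)))  # gradient at 45 deg -> edge at -45 deg
```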
In a possible implementation manner, calculating the gray value of each pixel to be interpolated according to the gray value of each adjacent pixel, the edge direction of each pixel to be interpolated, and the edge direction angle of each pixel to be interpolated includes:
for each pixel point to be interpolated, determining a first intersection point and a second intersection point according to the edge direction of the pixel point to be interpolated and a plurality of adjacent pixel points; determining a first distance and a second distance according to the edge direction angle of the pixel point to be interpolated; calculating gray values of the first intersection point and the second intersection point based on the gray value of each adjacent pixel point; and calculating the gray value of the pixel point to be interpolated according to the first distance, the second distance, the gray value of the first intersection point and the gray value of the second intersection point.
The first intersection point, the second intersection point and the pixel point to be interpolated are positioned on a straight line in the edge direction, wherein the first distance represents the distance between the pixel point to be interpolated and the first intersection point, and the second distance represents the distance between the pixel point to be interpolated and the second intersection point.
Optionally, the gray value of the first intersection point is calculated by a first linear interpolation polynomial based on the gray value of each neighboring pixel point.
Alternatively, the gray value of the second intersection point is calculated by a second linear interpolation polynomial based on the gray value of each neighboring pixel point.
In this implementation, the gray value of each pixel to be interpolated is calculated accurately from the gray value of each adjacent pixel, the edge direction of the pixel to be interpolated and its edge direction angle. Therefore, when interpolation is performed based on this gray value, the interpolated image has higher continuity and a smaller interpolation loss, and more image detail is preserved, so that the final target image is clear, lossless and of high quality.
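The following sketch illustrates one plausible reading of this step, assuming the pixel to be interpolated lies inside a 2x2 cell of adjacent pixels and that the first and second intersection points are where the line along the edge direction crosses the two rows of the cell; the first-degree interpolation polynomials and the distance weighting are illustrative assumptions rather than the claimed formulas:

```python
import numpy as np

def interpolate_along_edge(cell, fx, fy, edge_angle):
    """Estimate the gray value of a pixel to be interpolated inside a 2x2 cell.

    cell       : 2x2 array of adjacent gray values, cell[row][col]
    (fx, fy)   : position of the pixel inside the cell, both in [0, 1)
    edge_angle : edge direction angle of the pixel (radians, w.r.t. the x-axis)

    The line along the edge direction through (fx, fy) is intersected with the
    two rows of the cell (y = 0 and y = 1); the gray value at each intersection
    comes from a one-dimensional linear interpolation along its row, and the
    final value is a distance-weighted blend of the two intersections.
    """
    slope = np.tan(edge_angle)
    if abs(slope) < 1e-6:                      # edge nearly horizontal: plain bilinear blend
        top = (1 - fx) * cell[0][0] + fx * cell[0][1]
        bot = (1 - fx) * cell[1][0] + fx * cell[1][1]
        return (1 - fy) * top + fy * bot

    # x coordinates where the edge line crosses the rows y = 0 and y = 1
    x0 = np.clip(fx - fy / slope, 0.0, 1.0)         # first intersection (top row)
    x1 = np.clip(fx + (1 - fy) / slope, 0.0, 1.0)   # second intersection (bottom row)

    # Gray value of each intersection by linear interpolation along its row
    g0 = (1 - x0) * cell[0][0] + x0 * cell[0][1]
    g1 = (1 - x1) * cell[1][0] + x1 * cell[1][1]

    # First and second distances: pixel to first and second intersection
    d0 = np.hypot(fx - x0, fy)
    d1 = np.hypot(fx - x1, 1 - fy)
    if d0 + d1 == 0:
        return g0
    return (d1 * g0 + d0 * g1) / (d0 + d1)    # closer intersection gets more weight

cell = np.array([[10.0, 20.0], [30.0, 40.0]])
print(interpolate_along_edge(cell, 0.5, 0.5, np.radians(45.0)))  # ~25.0
```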
In a possible implementation manner, the image processing method provided by the application further includes:
For each pixel to be interpolated: determining the current gray value of the pixel as its initial gray value; calculating a plurality of gray-gradient components of the pixel based on the initial gray value and the gray values of a plurality of adjacent pixels; determining the gray gradient of the pixel based on these gray-gradient components; determining the edge direction of the pixel according to the gray gradient; determining the edge direction angle of the pixel; and calculating the gray value of the pixel according to the gray value of each adjacent pixel, the edge direction of the pixel and the edge direction angle of the pixel. These steps are repeated until the edge direction angle converges. In this case, interpolating along the edge direction of each pixel to be interpolated based on its gray value to obtain the target image corresponding to the original image comprises: interpolating, along the edge direction of each pixel to be interpolated, according to the gray value of that pixel obtained when the edge direction angle converges, so as to obtain the target image.
In this implementation, the continuous iteration makes the determined gradient direction, edge direction and gray value of each pixel to be interpolated more accurate, so interpolation based on the iterated edge direction and gray value preserves more detail of the interpolated image and is more accurate, which further improves the quality of the target image.
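A compact sketch of this iterative refinement, reusing the illustrative helpers from the earlier sketches (synthesize_gradient, edge_direction_angle, interpolate_along_edge); the tolerance and the iteration cap are assumptions:

```python
def refine_pixel(init_gray, neighbor_grays, neighbor_offsets, cell, fx, fy,
                 tol=1e-3, max_iter=20):
    """Iteratively refine the gray value of one pixel to be interpolated.

    Each pass re-estimates the gray gradient from the current gray value,
    derives the edge direction angle, and recomputes the gray value along that
    edge direction.  Iteration stops once the edge direction angle converges
    (or after max_iter passes).  Relies on the helper sketches shown earlier:
    synthesize_gradient, edge_direction_angle and interpolate_along_edge.
    """
    gray = init_gray
    prev_angle = None
    for _ in range(max_iter):
        gx, gy = synthesize_gradient(gray, neighbor_grays, neighbor_offsets)
        angle = edge_direction_angle(gx, gy)
        gray = interpolate_along_edge(cell, fx, fy, angle)
        if prev_angle is not None and abs(angle - prev_angle) < tol:
            break                      # edge direction angle has converged
        prev_angle = angle
    return gray
```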
In a possible implementation manner, the image processing method provided in the application further includes, before determining an edge direction of each pixel to be interpolated according to a gradient direction of each pixel to be interpolated corresponding to an original image:
detecting an operation for zooming;
in response to the operation, determining a scale of the original image; and determining each pixel point to be interpolated corresponding to the original image according to the scaling and the interpolation algorithm.
The operation for zooming may be an enlargement operation or a reduction operation.
If the operation for zooming is generated by a user touching the screen, the zoom scale may be determined by detecting the position of the user's finger touch. If the operation for zooming is electronic device triggered, the zoom scale may be determined by the screen resolution of the electronic device.
In this implementation, each pixel to be interpolated corresponding to the original image, together with its initial gray value, is determined by a conventional interpolation algorithm. This provides a basis for subsequently determining, quickly and accurately, the gradient direction, the edge direction and the gray value of each pixel to be interpolated, so that interpolation based on that edge direction and gray value yields a high-quality image.
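For illustration, the sketch below enumerates the pixels to be interpolated for a given scaling, assuming the common back-mapping convention in which each target pixel is mapped to a (generally fractional) position in the original image; the exact mapping used by a conventional interpolation algorithm may differ (for example, centre alignment):

```python
def pixels_to_interpolate(src_h, src_w, scale):
    """List the source-space positions of the pixels to be interpolated.

    For a zoom factor `scale`, each pixel (i, j) of the target image is mapped
    back to (i / scale, j / scale) in the original image; positions that do not
    fall exactly on the original grid are the pixels to be interpolated.
    """
    dst_h, dst_w = int(round(src_h * scale)), int(round(src_w * scale))
    positions = []
    for i in range(dst_h):
        for j in range(dst_w):
            y, x = i / scale, j / scale
            if y != int(y) or x != int(x):     # off-grid -> needs interpolation
                positions.append((y, x))
    return positions

# Example: doubling a 4x4 image leaves 48 of the 64 target pixels off-grid.
print(len(pixels_to_interpolate(4, 4, 2.0)))   # 48
```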
In a second aspect, the present application provides an electronic device, the electronic device comprising: one or more processors; one or more memories; and a module in which a plurality of application programs are installed. The memory stores one or more programs that, when executed by the processor, cause the electronic device to perform the method of the first aspect and any possible implementation thereof.
In a third aspect, the present application provides a chip comprising a processor. The processor is configured to read and execute a computer program stored in the memory to perform the method of the first aspect and any possible implementation thereof.
Optionally, the chip further comprises a memory, and the memory is connected with the processor through a circuit or a wire.
Optionally, the chip further comprises a communication interface.
In a fourth aspect, the present application provides a computer readable storage medium having stored therein a computer program which, when executed by a processor, causes the processor to perform the method of the first aspect and any possible implementation thereof.
In a fifth aspect, the present application provides a computer program product comprising: computer program code which, when run on an electronic device, causes the electronic device to perform the method of the first aspect and any possible implementation thereof.
The technical effects obtained by the second, third, fourth and fifth aspects are similar to the technical effects obtained by the corresponding technical means in the first aspect, and are not described in detail herein.
Drawings
FIG. 1 is a schematic view of an enlarged image scene shown in an exemplary embodiment of the present application;
FIG. 2 is a schematic diagram of another magnified image scene shown in an exemplary embodiment of the present application;
FIG. 3 is a schematic diagram of a hardware architecture of an electronic device according to an exemplary embodiment of the present application;
FIG. 4 is a schematic flow chart of an image processing method according to an embodiment of the present application;
FIG. 5 is an interpolation diagram of an enlarged image according to an embodiment of the present application;
FIG. 6 is a schematic diagram of a bilinear interpolation algorithm provided by an embodiment of the present application;
FIG. 7 is a schematic flow chart of an image processing method according to an embodiment of the present application;
FIG. 8 is a flowchart of another image processing method according to an embodiment of the present application;
FIG. 9 is a schematic diagram of a gray gradient component provided by an embodiment of the present application;
FIG. 10 is a schematic diagram of vector synthesis according to an embodiment of the present application;
FIG. 11 is a schematic flowchart of calculating a gray value of a pixel to be interpolated according to an embodiment of the present application;
FIG. 12 is a schematic view of an edge direction angle provided in an embodiment of the present application;
FIG. 13 is a schematic illustration of an intersection point provided in an embodiment of the present application;
FIG. 14 is another schematic view of an intersection point provided by an embodiment of the present application;
FIG. 15 is a schematic view of yet another intersection point provided by an embodiment of the present application;
FIG. 16 is a schematic illustration of yet another intersection point provided by an embodiment of the present application;
FIG. 17 is a schematic illustration of yet another intersection point provided by an embodiment of the present application;
FIG. 18 is a schematic illustration of yet another intersection point provided by an embodiment of the present application;
FIG. 19 is a flowchart of an image processing method according to an embodiment of the present application;
FIG. 20 is a schematic view of edge direction angle convergence provided by an embodiment of the present application;
FIG. 21 is a schematic view of a scene of an enlarged image using the image processing method provided herein;
FIG. 22 is a schematic structural diagram of a chip according to an embodiment of the present application.
Detailed Description
The technical solutions in the present application will be described below with reference to the accompanying drawings.
In the description of the embodiments of the present application, unless otherwise indicated, "/" means or, for example, a/B may represent a or B; "and/or" herein is merely an association relationship describing an association object, and means that three relationships may exist, for example, a and/or B may mean: a exists alone, A and B exist together, and B exists alone. In addition, in the description of the embodiments of the present application, "plurality" means two or more than two.
The terms "first" and "second" are used below for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include one or more such feature. In the description of the present embodiment, unless otherwise specified, the meaning of "plurality" is two or more.
Reference in the specification to "one embodiment" or "some embodiments" or the like means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," and the like in the specification are not necessarily all referring to the same embodiment, but mean "one or more but not all embodiments" unless expressly specified otherwise. The terms "comprising," "including," "having," and variations thereof mean "including but not limited to," unless expressly specified otherwise.
It should be noted that, the image processing method provided in the embodiment of the present application may be applicable to any electronic device having an image processing function.
In some embodiments of the present application, the electronic device may be a mobile phone, a tablet computer, a wearable device, a television, a vehicle-mounted device, an augmented reality (augmented reality, AR)/Virtual Reality (VR) device, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a personal digital assistant (personal digital assistant, PDA), or the like, may be a single-lens reflex camera, a card machine, or other devices or apparatuses capable of performing image processing, and the embodiments of the present application do not limit any specific types of electronic devices.
In order to better understand the image processing method provided in the embodiments of the present application, some terms related in the embodiments of the present application are explained first below to facilitate understanding by those skilled in the art.
1. Bilinear interpolation
Mathematically, bilinear interpolation is an extension of linear interpolation to an interpolation function of two variables; its core idea is to perform linear interpolation once in each of the two directions.
In image processing, bilinear interpolation is one of the image interpolation algorithms: the value of the pixel to be calculated is obtained as a weighted combination of the values of the 4 pixels nearest to it.
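For reference, a minimal NumPy sketch of conventional bilinear interpolation as described here (the function name and the boundary clamping are illustrative choices):

```python
import numpy as np

def bilinear_sample(img, y, x):
    """Bilinearly interpolate the gray value at a fractional position (y, x).

    The four nearest pixels are combined with weights given by the distance of
    the sample point to each of them: first along x, then along y.
    """
    h, w = img.shape
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    y1, x1 = min(y0 + 1, h - 1), min(x0 + 1, w - 1)
    dy, dx = y - y0, x - x0

    top = (1 - dx) * img[y0, x0] + dx * img[y0, x1]   # linear interpolation along x, top row
    bot = (1 - dx) * img[y1, x0] + dx * img[y1, x1]   # linear interpolation along x, bottom row
    return (1 - dy) * top + dy * bot                  # linear interpolation along y

img = np.array([[10.0, 20.0], [30.0, 40.0]])
print(bilinear_sample(img, 0.5, 0.5))   # 25.0
```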
2. Gray scale
Gray scale uses black as the reference color and displays an image in black at different levels of saturation. Each gray object has a luminance value from 0% (black) to 100% (white). Images generated with black-and-white or grayscale scanners are typically displayed in grayscale.
3. Gray scale gradient
The image is regarded as a two-dimensional discrete function, and the gray gradient is in fact the derivative of this function; because the function is discrete, differences are used in place of derivatives to obtain the gray gradient of the image.
4. Gradient direction
In mathematics, the gradient is a vector: at a given point, the directional derivative of the function takes its maximum along the gradient direction, i.e. the function changes fastest, with the greatest rate of change, along that direction at that point. In an image, the gradient represents the magnitude and direction of the change of the gray value.
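A small sketch of this definition, assuming simple forward differences stand in for the derivatives; it returns the gradient magnitude and the gradient direction of a gray image:

```python
import numpy as np

def gray_gradient(img):
    """Approximate the gray gradient of an image with forward differences.

    Returns the gradient magnitude and the gradient direction (radians); the
    direction points where the gray value increases fastest.
    """
    img = img.astype(float)
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, :-1] = img[:, 1:] - img[:, :-1]   # difference along x replaces d/dx
    gy[:-1, :] = img[1:, :] - img[:-1, :]   # difference along y replaces d/dy
    return np.hypot(gx, gy), np.arctan2(gy, gx)

img = np.array([[0, 0, 0], [0, 100, 0], [0, 0, 0]])
mag, ang = gray_gradient(img)
print(mag[1, 0], np.degrees(ang[1, 0]))   # 100.0 0.0 (steepest increase points right)
```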
5. Edge direction
Edges in an image are the locations where the gray scale intensity changes most sharply. The gradient direction is perpendicular to the edge direction.
6. Gray scale image
A gray scale image is an image with only one sampling color per pixel. Such images are typically displayed in gray scale from darkest black to brightest white. Gray scale images are different from black and white images, and in the field of computer images, black and white images only have two colors, and gray scale images have a plurality of levels of color depth between black and white.
7. RGB (Red, green, blue) image
RGB refers to a color model related to the structure of the human visual system. According to the structure of the human eye, all colors are regarded as different combinations of red, green and blue.
An RGB image refers to an image displayed in an RGB color mode.
8. YUV image
YUV refers to a color coding method, where Y represents luminance (luminance or luma), and U and V represent chrominance (chroma). The above RGB color modes focus on the color sensing of human eyes, and the YUV color modes focus on the sensitivity of vision to brightness, and the RGB color modes and the YUV color modes can be mutually converted.
YUV images refer to images displayed in YUV color mode. If an image has only a Y component and no UV component, the image represents a black and white image.
The foregoing is a simplified description of the terminology involved in the embodiments of the present application, and is not described in detail below.
With the development of digital images, people use the images more frequently, and meanwhile, the requirements on the image quality are higher.
For example, scenes that require zooming in on or out of an image are often encountered during the use of electronic devices (e.g., mobile phones), and it is desirable that the zoomed image remain of high quality. However, at present, when an image is scaled, a conventional image processing method is generally adopted, such as nearest-neighbor interpolation, bilinear interpolation, bicubic interpolation, or polyphase interpolation. These interpolation methods do not consider the edge characteristics of the image during processing. Taking the pixel to be solved as the center, its characteristics (such as the change of pixel gray level, texture, and the like) are usually different in different directions; however, the conventional interpolation methods process the image under the isotropy principle and ignore the fact that the characteristics of the pixel to be solved differ from direction to direction. As a result, the processed image is blurred and its details are unclear, which seriously affects the quality of the image.
In some possible application scenarios, when a user previews an image in the gallery application of a mobile phone, the image may be zoomed in or out. When the mobile phone responds to the user's operation of enlarging or reducing the image, the image is usually processed with a conventional image processing method, so the enlarged or reduced image shows striping (or jagged) artifacts, its details are unclear, the true contour information of the image cannot be reflected, and the quality of the enlarged or reduced image is seriously affected.
An application scenario of magnifying a certain image in the gallery application will be described below with reference to the accompanying drawings.
Referring to fig. 1, fig. 1 is a schematic diagram of an enlarged image scene according to an exemplary embodiment of the present application.
In the embodiments of the present application, a mobile phone is taken as an example of the electronic device. Fig. 1 (a) shows the main interface of the mobile phone. For example, the user clicks the gallery icon 101 in the main interface, and the display of the mobile phone jumps from the main interface shown in fig. 1 (a) to the gallery display interface shown in fig. 1 (b). The gallery display interface displays a plurality of thumbnails. For example, the user clicks an image 102 among the plurality of thumbnails shown in fig. 1 (b), and the display jumps from the gallery display interface shown in fig. 1 (b) to the image display interface shown in fig. 1 (c). It should be understood that, when viewing an image in the gallery, if the image is a non-full-screen proportion image, the non-image content area is displayed as a white background area or a black background area in the image display interface, such as the white background area 103 shown in fig. 1 (c). The image 102 in the thumbnail is displayed at its original size on the image display interface, as the image 104 shown in fig. 1 (c).
A non-full-screen proportion image is an image whose display proportion on the display interface does not fill the screen.
As shown in fig. 1 (c), when the partial image 105 in the image 104 is desired to be enlarged, the user can simultaneously touch the area where the partial image 105 is located with two fingers and slide outward, i.e., enlarge the partial image 105. As shown in fig. 1 (d), the image 106 is an enlarged image of the partial image 105.
When the mobile phone responds to the user's enlargement of the partial image 105, if the partial image 105 is processed with a conventional image processing method, the enlarged image 106 shows striping (or jagged) artifacts, so the details of the enlarged image 106 are unclear, the true contour information of the image cannot be reflected, and the quality of the enlarged image is seriously affected.
For ease of understanding, referring to fig. 2, fig. 2 is a schematic diagram of another enlarged image scene shown in an exemplary embodiment of the present application.
As shown in fig. 2, assuming that a certain image in the thumbnail is displayed in original image, it is displayed as an image 201 shown in fig. 2 (a), when the user wants to perform the zoom-in operation on the partial image 202 in the image 201, the user can touch the area where the partial image 202 is located at the same time with two fingers and slide outward, that is, zoom-in the partial image 202. As shown in fig. 2 (b), the image 203 is an enlarged image of the partial image 202.
When the partial image 202 is enlarged with a conventional image processing method, the edge characteristics of the partial image 202 are not considered, so the resulting enlarged image 203 shows striping (or jagged) artifacts, its details are unclear, the true contour information of the image cannot be reflected, and the quality of the enlarged image is seriously affected.
The above application scenarios show that when an image is enlarged with a conventional image processing method, the resulting enlarged image shows striping (or jagged) artifacts, so the details of the enlarged image are unclear, the true contour information of the image cannot be reflected, and the quality of the enlarged image is seriously affected. It should be understood that when an image is reduced with a conventional image processing method, the details of the reduced image are likewise unclear, the true contour information of the image cannot be reflected, and the quality of the reduced image is seriously affected.
This is because the edge characteristics of the image are not considered in the process of scaling the image using conventional image processing methods (such as nearest neighbor interpolation, bilinear interpolation, bicubic interpolation, and polyphase interpolation).
In view of this, the embodiments of the present application provide an image processing method. In the process of processing an image, the method determines the edge direction and the gray value of each pixel to be interpolated according to the gradient direction of each pixel to be interpolated corresponding to the original image. Because the gradient direction is perpendicular to the edge direction, the gray value changes fastest along the gradient direction and slowest along the edge direction. On this basis, interpolating along the edge direction of each pixel to be interpolated based on its gray value improves the continuity of the gray-value change of the interpolated image, reduces the interpolation loss, and therefore preserves more image detail in the processed target image.
Because the edge characteristics of the image are taken into account during processing, the processed target image reflects the true contour information of the image, which improves the quality of the processed image and ensures that it is clear and lossless. Compared with the related art, the method effectively avoids striping (or jagged) artifacts in the processed image.
The hardware structure of the electronic device according to the embodiment of the present application will be briefly described below with reference to the accompanying drawings.
In some embodiments of the present application, the electronic device may be a mobile phone, a tablet computer, a wearable device, a television, a vehicle-mounted device, an augmented reality (augmented reality, AR)/Virtual Reality (VR) device, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a personal digital assistant (personal digital assistant, PDA), or the like, may be a single-lens reflex camera, a card machine, or other devices or apparatuses capable of performing image processing, and the embodiments of the present application do not limit any specific types of electronic devices.
Referring to fig. 3, fig. 3 is a schematic diagram of a hardware structure of an electronic device according to an exemplary embodiment of the present application. As shown in fig. 3, the electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (universal serial bus, USB) interface 130, a charge management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, keys 190, a motor 191, an indicator 192, a camera 193, a display 194, a user identification module (subscriber identification module, SIM) card interface 195, and the like. The sensor module 180 may include a pressure sensor 180A, a gyro sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
It is to be understood that the structure illustrated in the embodiments of the present application does not constitute a specific limitation on the electronic device 100. In other embodiments of the present application, electronic device 100 may include more or fewer components than those shown in FIG. 3, or electronic device 100 may include a combination of some of the components shown in FIG. 3, or electronic device 100 may include sub-components of some of the components shown in FIG. 3. The components shown in fig. 3 may be implemented in hardware, software, or a combination of software and hardware.
The processor 110 may include one or more processing units, such as: the processor 110 may include an application processor (application processor, AP), a modem processor, a graphics processor (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), a controller, a video codec, a digital signal processor (digital signal processor, DSP), a baseband processor, and/or a neural network processor (neural-network processing unit, NPU), etc. Wherein the different processing units may be separate devices or may be integrated in one or more processors.
The controller may be a neural hub and a command center of the electronic device 100, among others. The controller can generate operation control signals according to the instruction operation codes and the time sequence signals to finish the control of instruction fetching and instruction execution.
A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that the processor 110 has just used or recycled. If the processor 110 needs to reuse the instruction or data, it can be called directly from the memory. Repeated accesses are avoided and the latency of the processor 110 is reduced, thereby improving the efficiency of the system.
In the embodiment of the present application, the processor 110 may perform the steps of determining an edge direction of each pixel to be interpolated according to a gradient direction of each pixel to be interpolated, determining a gray value of each pixel to be interpolated, and interpolating based on the gray value of each pixel to be interpolated in the edge direction of each pixel to be interpolated to obtain the target image. For example, the processor 110 may execute software code of the image processing method provided in the embodiments of the present application, so as to perform scaling processing on the image.
In an embodiment of the present application, the processor 110 may perform the steps of determining a scaling of the gray image in response to a scaling operation of a user, and determining each pixel to be interpolated corresponding to the gray image according to the scaling and an interpolation algorithm.
In some embodiments, the processor 110 may include one or more interfaces. The interfaces may include an integrated circuit (inter-integrated circuit, I2C) interface, an integrated circuit built-in audio (inter-integrated circuit sound, I2S) interface, a pulse code modulation (pulse code modulation, PCM) interface, a universal asynchronous receiver transmitter (universal asynchronous receiver/transmitter, UART) interface, a mobile industry processor (mobile industry processor interface, MIPI) interface, a general-purpose input/output (GPIO) interface, a subscriber identity module (subscriber identity module, SIM) interface, and/or a universal serial bus (universal serial bus, USB) interface, among others.
It should be understood that the connection relationship between the modules illustrated in this embodiment is only illustrative, and does not limit the structure of the electronic device 100. In other embodiments, the electronic device 100 may also employ different interfaces in the above embodiments, or a combination of interfaces.
The charge management module 140 is configured to receive a charge input from a charger. The charging management module 140 may also supply power to the electronic device 100 through the power management module 141 while charging the battery 142.
The power management module 141 is used for connecting the battery 142, the charge management module 140 and the processor 110. The power management module 141 receives input from the battery 142 and/or the charge management module 140 and provides power to the processor 110, the internal memory 121, the external memory, the display 194, the camera 193, the wireless communication module 160, and the like. In other embodiments, the power management module 141 may also be provided in the processor 110. In other embodiments, the power management module 141 and the charge management module 140 may be disposed in the same device.
The connection relationship between the modules shown in fig. 3 is merely illustrative, and does not limit the connection relationship between the modules of the electronic device 100. Alternatively, the modules of the electronic device 100 may also use a combination of the various connection manners in the foregoing embodiments.
The wireless communication function of the electronic device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, the modem processor, the baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the electronic device 100 may be used to cover a single or multiple communication bands. Different antennas may also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed into a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 150 may provide a solution for wireless communication including 2G/3G/4G/5G, etc., applied to the electronic device 100. The mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (low noise amplifier, LNA), etc. The mobile communication module 150 may receive electromagnetic waves from the antenna 1, perform processes such as filtering, amplifying, and the like on the received electromagnetic waves, and transmit the processed electromagnetic waves to the modem processor for demodulation. The mobile communication module 150 can amplify the signal modulated by the modem processor, and convert the signal into electromagnetic waves through the antenna 1 to radiate. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the processor 110. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be provided in the same device as at least some of the modules of the processor 110.
The modem processor may include a modulator and a demodulator. The modulator is used for modulating the low-frequency baseband signal to be transmitted into a medium-high frequency signal. The demodulator is used for demodulating the received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then transmits the demodulated low frequency baseband signal to the baseband processor for processing. The low frequency baseband signal is processed by the baseband processor and then transferred to the application processor. The application processor outputs sound signals through an audio device (not limited to the speaker 170A, the receiver 170B, etc.), or displays images or video through the display screen 194. In some embodiments, the modem processor may be a stand-alone device. In other embodiments, the modem processor may be provided in the same device as the mobile communication module 150 or other functional module, independent of the processor 110.
The wireless communication module 160 may provide solutions for wireless communication including wireless local area network (wireless local area networks, WLAN) (e.g., wireless fidelity (wireless fidelity, wi-Fi) network), bluetooth (BT), global navigation satellite system (global navigation satellite system, GNSS), frequency modulation (frequency modulation, FM), near field wireless communication technology (near field communication, NFC), infrared technology (IR), etc., as applied to the electronic device 100. The wireless communication module 160 may be one or more devices that integrate at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via the antenna 2, modulates the electromagnetic wave signals, filters the electromagnetic wave signals, and transmits the processed signals to the processor 110. The wireless communication module 160 may also receive a signal to be transmitted from the processor 110, frequency modulate it, amplify it, and convert it to electromagnetic waves for radiation via the antenna 2.
In some embodiments, antenna 1 and mobile communication module 150 of electronic device 100 are coupled, and antenna 2 and wireless communication module 160 are coupled, such that electronic device 100 may communicate with a network and other devices through wireless communication techniques. Wireless communication techniques may include global system for mobile communications (global system for mobile communications, GSM), general packet radio service (general packet radio service, GPRS), code division multiple access (code division multiple access, CDMA), wideband code division multiple access (wideband code division multiple access, WCDMA), time division code division multiple access (time-division code division multiple access, TD-SCDMA), long term evolution (long term evolution, LTE), BT, GNSS, WLAN, NFC, FM, and/or IR techniques, among others. The GNSS may include a global satellite positioning system (global positioning system, GPS), a global navigation satellite system (global navigation satellite system, GLONASS), a beidou satellite navigation system (beidou navigation satellite system, BDS), a quasi zenith satellite system (quasi-zenith satellite system, QZSS) and/or a satellite based augmentation system (satellite based augmentation systems, SBAS). It is understood that in embodiments of the present application, a hardware module in a positioning or navigation system may be referred to as a positioning sensor.
The electronic device 100 may implement display functions through a GPU, a display screen 194, and an application processor. The GPU is a microprocessor for image processing, and is connected to the display 194 and the application processor. GPUs can also be used to perform mathematical and pose calculations, for graphics rendering, and the like. Processor 110 may include one or more GPUs, the execution of which may generate or change display information.
The display 194 may be used to display images or video and may also display a series of graphical user interfaces (graphical user interface, GUIs), all of which are home screens for the electronic device 100. Generally, the size of the display 194 of the electronic device 100 is fixed and only limited controls can be displayed in the display 194 of the electronic device 100. A control is a GUI element that is a software component contained within an application program that controls all data processed by the application program and interactive operations on that data, and a user can interact with the control by direct manipulation (direct manipulation) to read or edit information about the application program. In general, controls may include visual interface elements such as icons, buttons, menus, tabs, text boxes, dialog boxes, status bars, navigation bars, widgets (widgets), and the like.
In the present embodiment, the display 194 may be used to display various images before, during, and after processing.
The display 194 includes a display panel. The display panel may employ a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini LED, a Micro LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), or the like. In some embodiments, the electronic device 100 may include 1 or N display screens 194, where N may be a positive integer greater than 1.
The display screen 194 in the embodiments of the present application may be a touch screen. The touch sensor 180K may be integrated in the display 194. The touch sensor 180K may also be referred to as a "touch panel". That is, the display screen 194 may include a display panel and a touch panel, and the touch sensor 180K and the display screen 194 together form a touch screen, also referred to as a "touch-controlled screen". The touch sensor 180K is used to detect a touch operation acting on or near it. After the touch sensor 180K detects a touch operation, the operation may be transferred by a kernel-layer driver (e.g., the TP driver) to an upper layer to determine the touch event type. Visual output related to the touch operation may be provided through the display 194. In other embodiments, the touch sensor 180K may also be disposed on the surface of the electronic device 100 at a position different from that of the display 194.
In the embodiments of the present application, the touch sensor 180K detects a touch operation by the user. For example, when the user views an image in the gallery application, two fingers simultaneously touch the display 194 and slide outward or pinch inward. At this time, the touch sensor 180K detects the user's touch operation, which is transferred by a kernel-layer driver (e.g., the TP driver) to an upper layer to determine the touch event type, such as an image enlargement event or an image reduction event. In response to the user's touch operation, the processor 110 finally provides visual output related to the touch operation through the display screen 194, for example the display screen 194 displays the enlarged or reduced image.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to enable expansion of the memory capabilities of the electronic device 100. The external memory card communicates with the processor 110 through an external memory interface 120 to implement data storage functions. For example, files such as music, video, etc. are stored in an external memory card.
The internal memory 121 may be used to store computer-executable program code that includes instructions. The processor 110 executes various functional applications of the electronic device 100 and data processing by executing instructions stored in the internal memory 121. The internal memory 121 may include a storage program area and a storage data area. The storage program area may store an operating system (such as an operating system in a software system shown in fig. 2), an APP (such as a sound playing function, an image playing function, etc.) required for at least one function, and the like. The storage data area may store data created during use of the electronic device 100 (e.g., audio data, phonebook, etc.), and so on.
In addition, the internal memory 121 may include a high-speed random access memory, such as a memory in the hardware system shown in fig. 2; the internal memory 121 may also include non-volatile memory, such as at least one disk storage device, flash memory device, universal flash memory (Universal Flash Storage, UFS), etc.
The pressure sensor 180A is used to sense a pressure signal, and may convert the pressure signal into an electrical signal. In some embodiments, the pressure sensor 180A may be disposed on the display screen 194. The pressure sensor 180A is of various types, such as a resistive pressure sensor, an inductive pressure sensor, a capacitive pressure sensor, and the like. A capacitive pressure sensor may comprise at least two parallel plates with conductive material. The capacitance between the electrodes changes when a force is applied to the pressure sensor 180A. The electronic device 100 determines the strength of the pressure from the change in capacitance. When a touch operation is applied to the display screen 194, the electronic device 100 detects the touch operation intensity according to the pressure sensor 180A. The electronic device 100 may also calculate the location of the touch based on the detection signal of the pressure sensor 180A. In some embodiments, touch operations that act on the same touch location, but at different touch operation strengths, may correspond to different operation instructions.
The acceleration sensor 180E may detect the magnitude of acceleration of the electronic device 100 in various directions (typically, x-axis, y-axis, and z-axis). The magnitude and direction of gravity may be detected when the electronic device 100 is stationary. The acceleration sensor 180E may also be used to recognize the gesture of the electronic device 100 as an input parameter for applications such as landscape switching and pedometer.
The ambient light sensor 180L is used to sense ambient light level. The electronic device 100 may adaptively adjust the brightness of the display 194 based on the perceived ambient light level. The ambient light sensor 180L may also be used to automatically adjust white balance when taking a photograph. Ambient light sensor 180L may also cooperate with proximity light sensor 180G to detect whether electronic device 100 is in a pocket to prevent false touches.
The fingerprint sensor 180H is used to collect a fingerprint. The electronic device 100 may utilize the collected fingerprint feature to perform functions such as unlocking, accessing an application lock, taking a photograph, and receiving an incoming call.
The keys 190 include a power key, a volume key, etc. The keys 190 may be mechanical keys or touch keys. The electronic device 100 may receive key inputs and generate key signal inputs related to user settings and function control of the electronic device 100.
The motor 191 may generate a vibration cue. The motor 191 may be used for incoming call vibration alerting as well as for touch vibration feedback.
The indicator 192 may be an indicator light and may be used to indicate the charging state, a change in battery level, a message, a missed call, a notification, etc. The SIM card interface 195 is used to connect a SIM card. A SIM card may be inserted into the SIM card interface 195 or removed from the SIM card interface 195 to make contact with or be separated from the electronic device 100. The electronic device 100 may support 1 or N SIM card interfaces, N being a positive integer greater than 1. The SIM card interface 195 may support Nano SIM cards, Micro SIM cards, and the like.
In addition, various operating systems run on top of the above components, such as the Android system, the iOS operating system, the Symbian operating system, the BlackBerry operating system, the Linux operating system, the Windows operating system, etc. This is merely illustrative and is not limiting. Different applications may be installed and run on these operating systems, such as any application that can display images and/or process images.
The image processing method provided in the embodiment of the present application may be implemented in the electronic device 100 having the above-described hardware structure.
The hardware structure and software environment of the electronic device according to the embodiments of the present application are briefly described above. In the following embodiments of the present application, the above electronic device is taken as an example, and the image processing method provided by the embodiments of the present application is described in detail with reference to the accompanying drawings and application scenarios.
Before introducing the image processing method, it should be noted that the above scenario of enlarging or reducing an image when previewing the image in the gallery application is only an illustration of an application scenario and does not limit the application scenarios of the present application. The image processing method provided by the embodiment of the present application can be applied to, but is not limited to, the following scenarios:
scaling an image when previewing images in various applications (for example, scaling an image in the gallery application, scaling an image when viewing it in a social application, scaling an avatar when switching the avatar in an application, and the like), scaling the video picture in a video call, scaling the video picture in a video conference application, scaling the video picture in long- and short-video applications, scaling the video picture in live-streaming applications, scaling the video picture in online-class video applications, scaling the image in an intelligent camera-movement (smart framing) application scenario, scaling the video picture when recording video with the video recording function of the system camera, scaling the photo when taking photos with the photographing function of the system camera, scaling images or videos in video surveillance, scaling images or videos in a smart peephole (cat-eye) device, and the like.
An application scenario in which an image is enlarged or reduced when the image is previewed in a gallery application will be described below as an example.
Referring to fig. 4, fig. 4 is a flowchart of an image processing method according to an embodiment of the present application. The method comprises the following steps:
S101, determining the scaling ratio of the original image.
An operation for scaling is detected, or a scaling instruction for the original image is received, and the scaling ratio corresponding to the original image is determined. In the embodiment of the present application, the original image may be an RGB image, a YUV image, a gray-scale image, or the like, depending on the actual situation; this is not limited here.
It should be noted that, when the original image is an RGB image, the subsequent scaling process is performed on the RGB image, that is, the scaling process is performed on the images of the R channel, the G channel, and the B channel, and then the processed images of the R channel, the G channel, and the B channel are combined, so as to obtain the processed RGB image. Similarly, when the original image is a YUV image, the subsequent scaling processing is performed on the YUV image, namely the scaling processing is performed on the images of the Y channel, the U channel and the V channel respectively, and then the processed images of the Y channel, the U channel and the V channel are combined to obtain the processed YUV image. When the original image is a gray image, the gray image belongs to a single-channel image, and the gray image can be directly scaled later to obtain the processed gray image.
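As an illustration of the per-channel processing described above, the following minimal Python sketch (the helper names are hypothetical, and the single-channel routine is only a placeholder for the scaling described in the later steps) splits a multi-channel image, scales each channel separately, and merges the results:

```python
import numpy as np

def scale_channel(channel: np.ndarray, scale: float) -> np.ndarray:
    # Stand-in for the single-channel scaling described in the following steps;
    # nearest-neighbour sampling is used here only to keep the sketch runnable.
    h, w = channel.shape
    new_h, new_w = int(round(h * scale)), int(round(w * scale))
    rows = np.clip((np.arange(new_h) / scale).astype(int), 0, h - 1)
    cols = np.clip((np.arange(new_w) / scale).astype(int), 0, w - 1)
    return channel[rows][:, cols]

def scale_multichannel(image: np.ndarray, scale: float) -> np.ndarray:
    # Scale the R/G/B (or Y/U/V) channels separately, then merge the results.
    channels = [scale_channel(image[:, :, c], scale) for c in range(image.shape[2])]
    return np.stack(channels, axis=-1)
```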
The scale may be any multiple of the resolution of the original image. For example, the scale may be 0.2 times, 0.5 times, 1.5 times, 2 times, 2.5 times, etc. the resolution of the original image.
The scaling instruction may be an image processing instruction, which is used to perform any scaling on the original image. The zoom instruction may include an zoom-in instruction, a zoom-out instruction.
In one possible implementation, the zoom instruction may be triggered by the user, at which point the scale may be determined by detecting the position of the user's finger touch. For example, when a user views an image in a gallery application, two fingers simultaneously touch an area of the image displayed in the display screen and slide outward, triggering a zoom-in instruction. For another example, when a user views an image in a gallery application, two fingers simultaneously touch an area of the image displayed in the display screen and pinch inward, triggering a zoom-out instruction. For another example, when editing an image, the user clicks a zoom-in option (or a zoom-out option) in a display interface of the electronic device, triggering a zoom-in instruction (or a zoom-out instruction).
In another possible implementation, the scaling instruction may also be triggered automatically by the electronic device, in which case the scaling ratio may be determined by the screen resolution of the electronic device. For example, when the resolution of the original image to be displayed is different from the screen resolution and the original image is opened, the electronic device automatically triggers a zoom-in instruction or a zoom-out instruction so that the resolution of the original image to be displayed is adapted to the screen resolution. For example, if the resolution of the original image to be displayed is lower than the screen resolution, the electronic device needs to perform enlargement processing on the original image, i.e., trigger a zoom-in instruction, to convert the original image into an image whose resolution is equal to the screen resolution. Similarly, if the resolution of the original image to be displayed is higher than the screen resolution, the electronic device needs to perform reduction processing on the original image, i.e., trigger a zoom-out instruction, to convert the original image into an image whose resolution is equal to the screen resolution.
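A minimal sketch of this automatic adaptation, under the assumption that a single uniform factor is chosen so that the image just fits the screen (the function name and fitting policy are illustrative, not taken from the embodiment), might be:

```python
def auto_scale_factor(image_w: int, image_h: int, screen_w: int, screen_h: int) -> float:
    # A factor greater than 1 triggers an enlargement instruction,
    # a factor smaller than 1 triggers a reduction instruction.
    return min(screen_w / image_w, screen_h / image_h)

# Example: a 960x540 image shown on a 1920x1080 screen is enlarged by a factor of 2.
assert auto_scale_factor(960, 540, 1920, 1080) == 2.0
```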
It should be understood that the electronic device may display images or videos with different resolutions according to the actual requirements of the user, which is only illustrative and not limited in this respect. Likewise, the triggering condition of the scaling instruction is not limited in the embodiment of the present application.
S102, determining each pixel point to be interpolated corresponding to the original image according to the scaling and the interpolation algorithm.
The interpolation algorithm may include nearest neighbor interpolation, bilinear interpolation, bicubic interpolation, multi-phase interpolation, and the like. In the embodiment of the application, the processing of the original image as the gray image by using the bilinear interpolation method will be described as an example.
Illustratively, the position in the original image corresponding to each pixel point in the zoomed image can be determined according to the scaling instruction. Assuming that the scaling instruction is an enlargement instruction and the scaling ratio is 2, the position in the original image corresponding to each pixel point in the enlarged image is determined according to the scaling instruction.
The original image (assuming a resolution of a×b) is up-sampled by a factor of 2 by bilinear interpolation (also called interpolation or enlargement processing) to obtain an enlarged image (with a resolution of 2a×2b). Up-sampling the original image by a factor of 2 by bilinear interpolation means that a plurality of pixel points to be interpolated are inserted into the original image, where the pixel value of each pixel point to be interpolated can be calculated by substituting the pixel values of a plurality of original pixel points into an interpolation function.
It should be noted that, determining each pixel point to be interpolated corresponding to the original image is equivalent to determining the gray value and the coordinates of each pixel point to be interpolated.
For ease of understanding, please refer to fig. 5, fig. 5 is a schematic diagram illustrating interpolation of an enlarged image according to an embodiment of the present application.
The grayscale image shown in fig. 5 (a) is an unscaled grayscale image, and the grayscale image shown in fig. 5 (b) is an enlarged image, that is, a grayscale image obtained by enlarging the grayscale image shown in fig. 5 (a) by 2 times. As can be seen from fig. 5, each pixel point in the grayscale image shown in (a) in fig. 5 corresponds to each pixel point in the enlarged image shown in (b) in fig. 5, that is, the black pixel point in the grayscale image shown in (a) in fig. 5 corresponds to the black pixel point in the enlarged image shown in (b) in fig. 5 one by one. The gray pixel point in the enlarged image shown in fig. 5 (b) is the newly inserted pixel point to be interpolated. The pixel value of each pixel to be interpolated is calculated by substituting the pixel values of a plurality of original pixels (i.e., black pixels in the gray-scale image shown in fig. 5 (a)) into the interpolation function.
How to determine each pixel point to be interpolated corresponding to the original image is described below.
For ease of understanding, referring to fig. 6, fig. 6 is a schematic diagram of a bilinear interpolation algorithm according to an embodiment of the present application.
As shown in fig. 6, for each pixel point to be interpolated (in this example, the pixel point to be interpolated is the point O), the four adjacent (or nearest) pixel points of the pixel point to be interpolated in the unscaled gray-scale image are determined according to the scaling ratio, namely P_A, P_B, P_C and P_D. The coordinates of the four adjacent pixel points are P_A(x, y), P_B(x+1, y), P_C(x, y+1) and P_D(x+1, y+1), respectively.
As shown in fig. 6, the four adjacent pixel points P_A, P_B, P_C and P_D enclose a square. A horizontal line drawn through the point O intersects the square at two intersection points, namely the intersection point P_row_1 and the intersection point P_row_2. Similarly, a vertical line drawn through the point O intersects the square at two intersection points, namely the intersection point P_col_1 and the intersection point P_col_2.
As shown in fig. 6, the distance between P_B and the intersection point P_row_2 is q, the distance between the intersection point P_row_2 and P_D is 1-q, the distance between P_C and the intersection point P_col_2 is p, and the distance between the intersection point P_col_2 and P_D is 1-p.
The pixel value of each adjacent pixel point is obtained, and then the pixel value of the pixel point to be interpolated (the O point shown in fig. 6) can be calculated according to the respective pixel values of the four adjacent pixel points, the distances between each adjacent pixel point and each intersection point, and an interpolation function, wherein the interpolation function is as follows:
P(x, y) = P_col_1 + (P_col_2 - P_col_1)×q = (1-p)(1-q)P([x], [y]) + p(1-q)P([x]+1, [y]) + (1-p)qP([x], [y]+1) + pqP([x]+1, [y]+1)
In the interpolation function, P(x, y) represents the pixel value of the pixel point to be interpolated, P_col_1 represents the pixel value of the intersection point P_col_1, P_col_2 represents the pixel value of the intersection point P_col_2, q represents the distance between P_B and the intersection point P_row_2, 1-q represents the distance between the intersection point P_row_2 and P_D, p represents the distance between P_C and the intersection point P_col_2, 1-p represents the distance between the intersection point P_col_2 and P_D, P([x], [y]) represents the pixel value of P_A, P([x]+1, [y]) represents the pixel value of P_B, P([x], [y]+1) represents the pixel value of P_C, and P([x]+1, [y]+1) represents the pixel value of P_D.
It should be understood that for a gray image, the pixel value of its pixel point is the gray value. Therefore, the pixel value of the pixel point to be interpolated, which is obtained through interpolation function calculation, is the gray value of the pixel point to be interpolated. It should be noted that, in the embodiment of the present application, the gray value of the pixel to be interpolated, which is determined by a conventional interpolation algorithm, is referred to as an initial gray value.
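For reference, a minimal Python sketch of the above interpolation function (computing the initial gray value of one pixel point to be interpolated from its four adjacent original pixel points, with p and q defined as in fig. 6) might look as follows:

```python
import numpy as np

def initial_gray_value(img: np.ndarray, x: float, y: float) -> float:
    # img is the original single-channel (gray-scale) image; (x, y) is the position of
    # the pixel to be interpolated mapped back into the original image (column, row).
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = min(x0 + 1, img.shape[1] - 1), min(y0 + 1, img.shape[0] - 1)
    p, q = x - x0, y - y0                               # horizontal / vertical offsets
    P_A, P_B = float(img[y0, x0]), float(img[y0, x1])   # upper-left, upper-right
    P_C, P_D = float(img[y1, x0]), float(img[y1, x1])   # lower-left, lower-right
    return ((1 - p) * (1 - q) * P_A + p * (1 - q) * P_B
            + (1 - p) * q * P_C + p * q * P_D)
```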
In this implementation, each pixel point to be interpolated corresponding to the original image and its initial gray value are determined by a conventional interpolation algorithm, which provides a basis for subsequently determining, quickly and accurately, the gradient direction, the edge direction and the gray value of each pixel point to be interpolated, so that interpolation can then be performed based on the gradient direction and the gray value of the pixel point to be interpolated to obtain a high-quality image.
It should be noted that, during interpolation, the bilinear interpolation algorithm does not consider the edge characteristics of the image and interpolates only based on the positions of the pixel points. The obtained image therefore exhibits streak (or jaggy) artifacts, so that the details of the enlarged image are unclear, the true contour information of the image cannot be reflected, and the quality of the enlarged image is seriously affected.
In view of this, the image processing method provided in the embodiment of the present application further adds S201 to S203 after the above step S102 to improve the image quality. Referring to fig. 7, fig. 7 is a flowchart of an image processing method according to an embodiment of the present application. As shown in fig. 7, the image processing method includes S201 to S203.
S201: and determining the edge direction of each pixel point to be interpolated according to the gradient direction of each pixel point to be interpolated corresponding to the original image.
Illustratively, the original image corresponds to a plurality of pixels to be interpolated, which are determined through steps S101 and S102 described above.
For each pixel point to be interpolated, the edge direction of the pixel point to be interpolated is determined according to the gradient direction of the pixel point to be interpolated. The edge direction of the pixel point to be interpolated is perpendicular to the gradient direction of the pixel point to be interpolated.
The gradient direction of the pixel point to be interpolated represents the direction in which the gray value of the pixel point to be interpolated changes. For the pixel point to be interpolated, the gray value changes fastest along the gradient direction and slowest along the edge direction. Therefore, if interpolation is performed in the edge direction, the continuity of the interpolated image is higher and the interpolation loss is smaller, so that more image details are retained and the finally obtained target image is clear, lossless and of high quality.
S202: and determining the gray value of each pixel point to be interpolated.
Note that, the gray value of each pixel to be interpolated is also calculated in step S102, but the gray value of each pixel to be interpolated in step S102 is not the same as the gray value of each pixel to be interpolated determined in step S202.
In step S102, the gray value of each pixel to be interpolated is calculated by a conventional interpolation algorithm, and in step S102, the gray value of the pixel to be interpolated is an initial gray value, and the initial gray value is not used for final interpolation. The gray value of the pixel to be interpolated determined in step S202 is calculated based on the initial gray value of each pixel to be interpolated, the edge direction of each pixel to be interpolated, and the like, and is the gray value finally used for interpolation.
S203: and interpolating on the basis of the gray value of the pixel point to be interpolated in the edge direction of each pixel point to be interpolated to obtain a target image corresponding to the original image.
It should be understood that, in the edge direction of each pixel to be interpolated, when interpolation is performed based on the gray value of the pixel to be interpolated, different scaling instructions correspond to different interpolation modes.
For example, when the scaling instruction is an amplifying instruction, in the edge direction of each pixel to be interpolated, up-interpolation is performed based on the gray value of the pixel to be interpolated, so as to obtain a target image corresponding to the original image.
It can be colloquially understood that when the scaling instruction is an enlarging instruction, a plurality of pixel points to be interpolated are inserted in the original image. The method for inserting the plurality of pixel points to be interpolated is that the pixel points to be interpolated are inserted based on the gray value of the pixel points to be interpolated in the edge direction of each pixel point to be interpolated. And inserting a plurality of pixel points to be interpolated, and obtaining an image which is the target image.
For another example, when the scaling instruction is a shrinking instruction, in the edge direction of each pixel to be interpolated, performing downward interpolation based on the gray value of the pixel to be interpolated, so as to obtain a target image corresponding to the original image.
It can be understood that, when the scaling instruction is a shrinking instruction, the image is reconstructed according to the pixel points to be interpolated, and the composed image is the target image. And inserting the pixel points to be interpolated into a new image based on the gray values of the pixel points to be interpolated in the edge direction of each pixel point to be interpolated, so as to obtain a target image. That is, when the zoom command is a zoom command, the target image is composed of a plurality of pixels to be interpolated, each pixel to be interpolated is in the corresponding edge direction, and the gray value of each pixel to be interpolated is the gray value determined in step S202.
In the image processing method provided by the embodiment of the present application, in the process of processing an image, the edge direction and the gray value of each pixel point to be interpolated are determined according to the gradient direction of each pixel point to be interpolated corresponding to the original image. Since the gradient direction and the edge direction are perpendicular to each other, the gray value changes fastest along the gradient direction and slowest along the corresponding edge direction. Based on this, interpolation is performed in the edge direction of each pixel point to be interpolated based on the gray value of the pixel point to be interpolated, so that the continuity of the gray-value change of the interpolated image can be improved and the interpolation loss is smaller, and more image details of the processed target image are therefore retained.
Because the edge characteristics of the image are considered, the image processing method provided by the present application combines the edge characteristics of the image in the process of processing the image, so that the processed target image can reflect the true contour information of the image, the quality of the processed image is improved, and the processed image is guaranteed to be clear and lossless. Compared with the related art, the method effectively avoids streak (or jaggy) artifacts in the processed image.
Referring to fig. 8, fig. 8 is a flowchart of another image processing method according to an embodiment of the present application. As shown in fig. 8, the image processing method may include S301 to S304.
S301: and determining the gradient direction of each pixel point to be interpolated corresponding to the original image.
S302: and determining the edge direction of each pixel point to be interpolated according to the gradient direction of each pixel point to be interpolated corresponding to the original image.
S303: and determining the gray value of each pixel point to be interpolated.
S304: and interpolating on the basis of the gray value of the pixel point to be interpolated in the edge direction of each pixel point to be interpolated to obtain a target image corresponding to the original image.
It should be noted that, steps S302 to S304 in the present embodiment are consistent with steps S201 to S203 in the embodiment corresponding to fig. 7, and reference may be made to the description of steps S201 to S203 in the embodiment corresponding to fig. 7, and the description of steps S302 to S304 is omitted here. Step S301 is described in detail below.
The gradient direction of each pixel point to be interpolated corresponding to the original image is determined, so that the edge direction can be determined quickly and accurately from the gradient direction. For the pixel point to be interpolated, the gray value changes fastest along the gradient direction and slowest along the edge direction perpendicular to the gradient direction. Therefore, interpolating based on the edge direction determined from the gradient direction yields higher continuity in the interpolated image and smaller interpolation loss, so that more image details are retained, streak (or jaggy) artifacts in the processed image are effectively avoided, and the finally obtained target image is clear, lossless and of high quality.
In one possible implementation manner, determining a gradient direction of each pixel point to be interpolated corresponding to the original image includes: and determining the gray gradient component of each pixel to be interpolated, and carrying out vector synthesis on the gray gradient components of each pixel to be interpolated to obtain the gray gradient of each pixel to be interpolated.
The gray gradient component indicates the gray gradient of the pixel to be interpolated in a first direction, wherein the first direction is the direction in which the adjacent pixel points of the pixel to be interpolated point to the pixel to be interpolated.
And mapping the position of the pixel to be interpolated into an original image aiming at each pixel to be interpolated, wherein the pixel adjacent to the pixel to be interpolated in the original image is the adjacent pixel of the pixel to be interpolated.
For example, in different implementations, the number of adjacent pixels of each pixel to be interpolated is not limited, and may be set and adjusted by a user according to actual situations. And the number of gray gradient components of each pixel to be interpolated corresponds to the number of adjacent pixel points of each pixel to be interpolated.
For example, when the number of adjacent pixels of the pixel to be interpolated is 4, the number of gray gradient components of the pixel to be interpolated is 4. For another example, when the number of adjacent pixels of the pixel to be interpolated is 6, the number of gray gradient components of the pixel to be interpolated is 6. For another example, when the number of adjacent pixels of the pixel to be interpolated is 8, the number of gray gradient components of the pixel to be interpolated is 8. This is merely an exemplary illustration and is not limiting in this regard.
Illustratively, a connection line is drawn between the pixel point to be interpolated and each adjacent pixel point, and the direction along this connection line from the adjacent pixel point to the pixel point to be interpolated is the first direction corresponding to that adjacent pixel point. The gray gradient of the pixel point to be interpolated in the first direction is the gray gradient component of the pixel point to be interpolated in the first direction. For each pixel point to be interpolated, a plurality of gray gradient components corresponding to the pixel point to be interpolated are obtained in this manner, and vector synthesis is performed on the gray gradient components to obtain the gray gradient of the pixel point to be interpolated.
In the implementation mode, the gray gradient of the pixel point to be interpolated is obtained by vector synthesis through the plurality of gray gradient components of the pixel point to be interpolated, and because the adjacent pixel points are distributed in different directions, the gray change characteristics of the pixel point to be interpolated in different directions can be considered through the mode of vector synthesis through the plurality of gray gradient components, the determined gradient direction of the pixel point to be interpolated is more accurate, and the edge direction of the pixel point to be interpolated can be accurately determined based on the gradient direction.
For example, the gray gradient component of each pixel to be interpolated may be determined by determining, for each pixel to be interpolated, a plurality of adjacent pixels adjacent to the pixel to be interpolated in the original image, and a gray value of each adjacent pixel; and calculating a plurality of gray gradient components of the pixel point to be interpolated based on the initial gray value of the pixel point to be interpolated and the gray values of a plurality of adjacent pixel points.
In the embodiment of the present application, the number of adjacent pixel points is 4 as an example.
For ease of understanding, please refer to fig. 9, fig. 9 is a schematic diagram of a gray gradient component according to an embodiment of the present application.
Assuming that the O point is any pixel point to be interpolated corresponding to the original image determined above, as shown in fig. 9, the pixel point to be interpolated is mapped to the original image, and the four adjacent (or nearest) pixel points adjacent to the pixel point to be interpolated in the original image are A, B, C and D, respectively. The coordinates of the four adjacent pixel points are A(x, y), B(x+1, y), C(x, y+1) and D(x+1, y+1), respectively. It should be appreciated that the four adjacent pixel points A, B, C and D are pixel points in the original image. The four adjacent pixel points A, B, C and D enclose a square, and the pixel point to be interpolated falls inside this square.
As shown in fig. 9, a connection line is drawn between the pixel point to be interpolated (O point) and each adjacent pixel point, and the direction along this connection line from the adjacent pixel point to the pixel point to be interpolated is the first direction corresponding to that adjacent pixel point. The gray gradient of the pixel point to be interpolated in the first direction is the gray gradient component of the pixel point to be interpolated in the first direction.
As shown in fig. 9, connection lines are respectively drawn between the pixel point to be interpolated (O point) and the four adjacent pixel points A, B, C and D. G_A represents the gray gradient component of the pixel point to be interpolated (O point) in the AO direction, G_B represents the gray gradient component of the pixel point to be interpolated (O point) in the BO direction, G_C represents the gray gradient component of the pixel point to be interpolated (O point) in the CO direction, and G_D represents the gray gradient component of the pixel point to be interpolated (O point) in the DO direction.
And calculating a plurality of gray gradient components of the pixel to be interpolated based on the initial gray value of the pixel to be interpolated, the gray value of each adjacent pixel and a first preset formula. Since the adjacent pixels are pixels in the original image, the gray value of each adjacent pixel can be directly obtained, and the initial gray value of the pixel to be interpolated can be determined in step S102.
The first preset formula is as follows:

G_i = (dP/dr)·e_i = ((P_o - P_i)/r_i)·e_i,  (1)

In the above formula (1), i represents each adjacent pixel point and takes the value A, B, C or D; G_i represents the gray gradient component of the pixel point to be interpolated (O point) in the first direction corresponding to the different adjacent pixel points; dP/dr represents the derivative of the gray value with respect to distance; e_i represents a unit-length vector whose direction points from the adjacent pixel point to the pixel point to be interpolated (O point); P_o represents the gray value of the pixel point to be interpolated (O point); P_i represents the gray value of each adjacent pixel point; and r_i represents the distance between the i-th adjacent pixel point and the pixel point to be interpolated (O point).
For example, substituting i = A into the above formula (1), the gray gradient component of the pixel point to be interpolated (O point) in the AO direction can be obtained as G_A = ((P_o - P_A)/r_A)·e_A.

For another example, substituting i = B into the above formula (1), the gray gradient component of the pixel point to be interpolated (O point) in the BO direction can be obtained as G_B = ((P_o - P_B)/r_B)·e_B.

For another example, substituting i = C into the above formula (1), the gray gradient component of the pixel point to be interpolated (O point) in the CO direction can be obtained as G_C = ((P_o - P_C)/r_C)·e_C.

For another example, substituting i = D into the above formula (1), the gray gradient component of the pixel point to be interpolated (O point) in the DO direction can be obtained as G_D = ((P_o - P_D)/r_D)·e_D.
In this implementation, the gray gradient components of the pixel point to be interpolated are determined according to the plurality of adjacent pixel points adjacent to the pixel point to be interpolated and the gray value of each adjacent pixel point. Because the gray-change characteristics of the adjacent pixel points of the pixel point to be interpolated in different directions are considered in this process, the determined gray gradient components are more accurate, and the gradient direction of the pixel point to be interpolated can be accurately synthesized from the gray gradient components.
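The following Python sketch illustrates formula (1) for the four adjacent pixel points of fig. 9, using the coordinate convention that appears later in fig. 13 (the pixel point to be interpolated at the origin, the upper-left adjacent pixel point at (-p, -q)); since the text of formula (1) above is itself a reconstruction, the sketch should be read as illustrative only:

```python
import numpy as np

def gradient_components(P_o, P_A, P_B, P_C, P_D, p, q):
    # Positions of the adjacent pixel points relative to the pixel point to be
    # interpolated (O at the origin), following the fig. 13 convention.
    neighbours = {"A": (P_A, np.array([-p, -q])),
                  "B": (P_B, np.array([1 - p, -q])),
                  "C": (P_C, np.array([-p, 1 - q])),
                  "D": (P_D, np.array([1 - p, 1 - q]))}
    components = {}
    for name, (P_i, pos_i) in neighbours.items():
        r_i = np.linalg.norm(pos_i)     # distance between adjacent pixel point i and O
        e_i = -pos_i / r_i              # unit vector pointing from i towards O
        components[name] = ((P_o - P_i) / r_i) * e_i   # gray gradient component G_i
    return components
```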
After a plurality of gray gradient components of the pixel to be interpolated are calculated, vector synthesis is carried out on the gray gradient components of the pixel to be interpolated, so that the gray gradient of the pixel to be interpolated is obtained, and the gray gradient comprises a gradient direction.
For ease of understanding, please refer to fig. 10, fig. 10 is a schematic diagram of vector synthesis according to an embodiment of the present application.
Vector synthesis is performed on G_A, G_B, G_C and G_D to obtain the gray gradient G of the pixel point to be interpolated (O point), as shown in fig. 10.
As shown in fig. 10, a coordinate system is established with a pixel point to be interpolated (O point) as a coordinate origin, and an angle formed by an x-axis and a line in which a gradient direction of the pixel point to be interpolated (O point) is located is α, where α is a gradient direction angle of the pixel point to be interpolated (O point).
The specific vector synthesis process may be implemented by a second preset formula, which is as follows:

G_x = Σ_i G_xi, G_y = Σ_i G_yi, α = arctan(G_y/G_x),  (2)

In the above formula (2), i takes the value A, B, C or D, G_xi represents the component of G_i on the x-axis, G_yi represents the component of G_i on the y-axis, and α represents the gradient direction angle of the pixel point to be interpolated (O point).
In the implementation mode, the gradient direction of the pixel point to be interpolated is obtained after vector synthesis is carried out according to a plurality of gray gradient components of the pixel point to be interpolated, and gray gradient components of the pixel point to be interpolated in different directions are considered in the process, so that the gradient direction obtained by vector synthesis according to the plurality of gray gradient components is more accurate.
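Continuing the sketch above, the vector synthesis of formula (2) amounts to summing the x- and y-components of the four gray gradient components and taking the arctangent; atan2 is used here so that the quadrant is preserved, which is an assumption about how the angle is resolved rather than something stated in the embodiment:

```python
import numpy as np

def gradient_direction_angle(components: dict) -> float:
    # components maps an adjacent pixel point name to its 2-D gray gradient component G_i.
    G_x = sum(G_i[0] for G_i in components.values())   # sum of the x-axis components G_xi
    G_y = sum(G_i[1] for G_i in components.values())   # sum of the y-axis components G_yi
    return float(np.arctan2(G_y, G_x))                  # gradient direction angle alpha
```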
In another possible implementation manner, determining a gradient direction of each pixel point to be interpolated corresponding to the original image includes: determining a weight value between each pixel point to be interpolated and each adjacent pixel point, determining a gray gradient component of each pixel point to be interpolated according to each weight value, and carrying out vector synthesis on a plurality of gray gradient components of each pixel point to be interpolated to obtain the gray gradient of each pixel point to be interpolated.
Illustratively, the weight value between each pixel to be interpolated and the respective adjacent pixel is determined by the distance between each pixel to be interpolated and the respective adjacent pixel. The closer the distance between the pixel point to be interpolated and the adjacent pixel point is, the larger the weight value corresponding to the adjacent pixel point is, and the farther the distance between the pixel point to be interpolated and the adjacent pixel point is, the smaller the weight value corresponding to the adjacent pixel point is.
In this embodiment of the present application, different distance values may be preset to correspond to different weight values. For example, when the distance value between the pixel point to be interpolated and an adjacent pixel point is within a first distance range, the weight value corresponding to the adjacent pixel point is set to W_1; when the distance value between the pixel point to be interpolated and the adjacent pixel point is within a second distance range, the weight value corresponding to the adjacent pixel point is set to W_2. For another example, when the distance value between the pixel point to be interpolated and an adjacent pixel point is smaller than or equal to a preset distance threshold, the weight value corresponding to the adjacent pixel point is set to W_1; when the distance value between the pixel point to be interpolated and the adjacent pixel point is greater than the preset distance threshold, the weight value corresponding to the adjacent pixel point is set to W_2. This is merely illustrative and is not limiting.
For each pixel to be interpolated, the coordinates of the pixel to be interpolated may be obtained when the pixel to be interpolated is determined in S102. Meanwhile, as the adjacent pixel points are the pixel points in the original image, the coordinates of each adjacent pixel point can be directly obtained. According to the coordinates of the pixel points to be interpolated and the coordinates of each adjacent pixel point, the distance value between the pixel points to be interpolated and each adjacent pixel point can be calculated. And determining the weight value corresponding to each adjacent pixel point by combining the preset corresponding relation between different distance values and different weight values.
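A minimal sketch of such a preset distance-to-weight correspondence (the threshold and the weight values W_1 and W_2 below are placeholders chosen only for illustration) might be:

```python
def neighbour_weight(distance: float, threshold: float = 0.5,
                     W_1: float = 1.0, W_2: float = 0.5) -> float:
    # A closer adjacent pixel point gets the larger weight W_1,
    # a farther one gets the smaller weight W_2.
    return W_1 if distance <= threshold else W_2
```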
A plurality of gray gradient components of the pixel point to be interpolated are calculated based on the initial gray value of the pixel point to be interpolated, the weight value between the pixel point to be interpolated and each adjacent pixel point, the gray value of each adjacent pixel point, and a third preset formula. Since the adjacent pixel points are pixel points in the original image, the gray value of each adjacent pixel point can be directly obtained, and the initial gray value of the pixel point to be interpolated can be determined in step S102.
The third preset formula is as follows:

G_i = W_i·((P_o - P_i)/r_i)·e_i,  (3)

In the above formula (3), i represents each adjacent pixel point and takes the value A, B, C or D; G_i represents the gray gradient component of the pixel point to be interpolated (O point) in the first direction corresponding to the different adjacent pixel points; (P_o - P_i)/r_i represents the derivative of the gray value with respect to distance; e_i represents a unit-length vector whose direction points from the adjacent pixel point to the pixel point to be interpolated (O point); P_o represents the gray value of the pixel point to be interpolated (O point); P_i represents the gray value of each adjacent pixel point; W_i represents the weight value corresponding to each adjacent pixel point; and r_i represents the distance between the i-th adjacent pixel point and the pixel point to be interpolated (O point).
For example, substituting i = A into the above formula (3), the gray gradient component of the pixel point to be interpolated (O point) in the AO direction can be obtained as G_A = W_A·((P_o - P_A)/r_A)·e_A.

For another example, substituting i = B into the above formula (3), the gray gradient component of the pixel point to be interpolated (O point) in the BO direction can be obtained as G_B = W_B·((P_o - P_B)/r_B)·e_B.

For another example, substituting i = C into the above formula (3), the gray gradient component of the pixel point to be interpolated (O point) in the CO direction can be obtained as G_C = W_C·((P_o - P_C)/r_C)·e_C.

For another example, substituting i = D into the above formula (3), the gray gradient component of the pixel point to be interpolated (O point) in the DO direction can be obtained as G_D = W_D·((P_o - P_D)/r_D)·e_D.
After the plurality of gray gradient components of the pixel point to be interpolated are calculated, vector synthesis is carried out on the gray gradient components of the pixel point to be interpolated to obtain the gray gradient of the pixel point to be interpolated. The specific vector synthesis process may be implemented by the second preset formula (formula (2)) and is not repeated here.
In the implementation manner, when the gray gradient of the pixel point to be interpolated is determined, not only the gray change characteristics of the adjacent pixel points of the pixel point to be interpolated in different directions are considered, but also the gray values of the adjacent pixel points are adjusted based on the weight values corresponding to the different adjacent pixel points, so that the determined gray gradient component is more accurate, and the gradient direction synthesized based on the gray gradient component is also more accurate.
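Combining the two previous sketches, the weighted gray gradient component of formula (3) simply multiplies each difference quotient by the weight of the corresponding adjacent pixel point; again, the reconstruction of formula (3) above is an assumption, so the sketch is illustrative only:

```python
import numpy as np

def weighted_gradient_components(P_o, neighbours, weight_fn):
    # neighbours maps a name to (gray value P_i, position of i relative to O);
    # weight_fn maps a distance to the preset weight value W_i.
    components = {}
    for name, (P_i, pos_i) in neighbours.items():
        r_i = np.linalg.norm(pos_i)     # distance between adjacent pixel point i and O
        e_i = -pos_i / r_i              # unit vector pointing from i towards O
        W_i = weight_fn(r_i)            # weight value determined by the distance
        components[name] = W_i * ((P_o - P_i) / r_i) * e_i
    return components
```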
Referring to fig. 11, fig. 11 is a flowchart illustrating a process of calculating a gray value of a pixel to be interpolated according to an embodiment of the present application. As shown in fig. 11, S202 described above may include S2021 and S2022.
S2021: and determining the edge direction angle of each pixel point to be interpolated.
The edge direction angle is the included angle formed by the x-axis and the straight line in which the edge direction of the pixel point to be interpolated lies, with the x-axis direction taken as the reference direction.
In one possible implementation manner, the edge direction angle is used to determine the slope of the straight line in which the edge direction lies. Therefore, any included angle formed by the x-axis and the straight line in which the edge direction of the pixel point to be interpolated lies may be used as the edge direction angle of the pixel point to be interpolated, as long as the slope of that straight line can be determined from it; this depends on the actual situation and is not limited here.
In another possible implementation manner, the edge direction angle of each pixel point to be interpolated may be determined by determining a gradient direction angle of each pixel point to be interpolated, and determining the edge direction angle of each pixel point to be interpolated according to the gradient direction angle of each pixel point to be interpolated.
The gradient direction angle is the included angle between the gradient direction of the pixel point to be interpolated and the horizontal direction.
For ease of understanding, referring to fig. 12, fig. 12 is a schematic view of an edge direction angle according to an embodiment of the present application.
As shown in fig. 12, a line perpendicular to the gradient direction of the pixel to be interpolated (O-point) is made by passing through the pixel to be interpolated (O-point), and the direction in which the line is located is the edge direction of the pixel to be interpolated (O-point). The straight line intersects the square enclosed by the A, B, C, D four adjacent pixels at point M, N.
As shown in fig. 12, an angle formed by the x-axis and a straight line in which the gradient direction of the pixel to be interpolated (O-point) is located is α, where α is the gradient direction angle of the pixel to be interpolated (O-point).
As shown in fig. 12, the x-axis forms a reflex angle θ with a straight line in which the edge direction of the pixel to be interpolated (O-point) is located, the θ being the edge direction angle of the pixel to be interpolated (O-point).
It can be understood that the straight line in which the gradient direction of the pixel point to be interpolated (O point) lies is perpendicular to the straight line in which the edge direction of the pixel point to be interpolated (O point) lies, that is, the included angle formed by the straight line OM and the straight line in which the gradient direction of the pixel point to be interpolated (O point) lies is π/2. The edge direction angle θ can therefore be determined from the gradient direction angle α and π/2. For example, θ = α + π/2 (an angle differing from this by an integer multiple of π, such as the reflex angle shown in fig. 12, corresponds to the same straight line and the same slope).
In this implementation manner, the edge direction angle of each pixel point to be interpolated is determined according to the gradient direction angle of each pixel point to be interpolated, so that the gray value of the pixel point to be interpolated can be accurately calculated according to the edge direction angle. When interpolation is then performed based on this gray value, the continuity of the interpolated image is higher and the interpolation loss is smaller, so that more image details are retained and the finally obtained target image is clear, lossless and of high quality.
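Under the perpendicularity relation described above, the edge direction angle and the slope of the edge line can be sketched as follows (the offset of π/2 reflects the reconstruction in the preceding paragraphs and is an assumption):

```python
import numpy as np

def edge_direction_angle(alpha: float) -> float:
    # The edge direction is perpendicular to the gradient direction, so the edge
    # direction angle differs from the gradient direction angle alpha by pi/2.
    return alpha + np.pi / 2

def edge_slope(theta: float) -> float:
    # Slope k of the straight line in which the edge direction lies (expression (4)).
    return float(np.tan(theta))
```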
S2022: and calculating the gray value of each pixel point to be interpolated according to the gray value of each adjacent pixel point, the edge direction of each pixel point to be interpolated and the edge direction angle of each pixel point to be interpolated.
It should be understood that the adjacent pixel points are pixel points in the original image, and thus the gray value of each adjacent pixel point can be directly obtained.
For each pixel to be interpolated, an intersection point generated by intersecting a square (a square surrounded by four adjacent pixels of the pixel to be interpolated) with a straight line in which an edge direction of the pixel to be interpolated is located is determined. And determining the distance between the pixel point to be interpolated and the intersection point. The gray value of the intersection is calculated based on the gray value of each adjacent pixel. And calculating the gray value of the pixel point to be interpolated according to the distance between the pixel point to be interpolated and the intersection point and the gray value of the intersection point.
In this implementation manner, the gray value of each pixel point to be interpolated is accurately calculated according to the gray value of each adjacent pixel point, the edge direction of each pixel point to be interpolated and the edge direction angle of each pixel point to be interpolated. When interpolation is performed based on this gray value, the continuity of the interpolated image is higher and the interpolation loss is smaller, so that more image details are retained and the finally obtained target image is clear, lossless and of high quality.
In the embodiment of the present application, the number of adjacent pixel points is 4 as an example. Accordingly, the plurality of neighboring pixels may include a first neighboring pixel, a second neighboring pixel, a third neighboring pixel, and a fourth neighboring pixel.
As an example of the present application, the specific implementation of S2022 described above may include S20221 to S20226:
S20221: And determining a first intersection point and a second intersection point according to the edge direction of the pixel point to be interpolated and a plurality of adjacent pixel points for each pixel point to be interpolated.
The first intersection point, the second intersection point and the pixel point to be interpolated are collinear, or the first intersection point, the second intersection point and the pixel point to be interpolated are on a straight line where the edge direction is located.
In one possible implementation manner, for each pixel point to be interpolated, a first intersection point is determined according to an edge direction of the pixel point to be interpolated, the first adjacent pixel point and the third adjacent pixel point. The first intersection point and the pixel point to be interpolated are positioned on a straight line where the edge direction is positioned.
For ease of understanding, please refer to fig. 13, fig. 13 is a schematic diagram of an intersection point provided in an embodiment of the present application.
As shown in fig. 13, a denotes a first adjacent pixel point, B denotes a second adjacent pixel point, C denotes a third adjacent pixel point, and D denotes a fourth adjacent pixel point. The straight line of the edge direction of the pixel point (O point) to be interpolated is intersected with the straight lines of the first adjacent pixel point and the third adjacent pixel point to form a first intersection point. I.e. MN intersects the AC, forming a first intersection point M.
And determining a second intersection point according to the edge direction of the pixel point to be interpolated, the second adjacent pixel point and the fourth adjacent pixel point. The second intersection point and the pixel point to be interpolated are positioned on a straight line where the edge direction is positioned.
As shown in fig. 13, a straight line in which the edge direction of the pixel point to be interpolated (O point) is located intersects a straight line in which the second adjacent pixel point and the fourth adjacent pixel point are located, forming a second intersection point. I.e. MN intersects BD, forming a second intersection point N.
In another possible implementation manner, for each pixel point to be interpolated, a first intersection point is determined according to an edge direction of the pixel point to be interpolated, a first adjacent pixel point and a second adjacent pixel point. The first intersection point and the pixel point to be interpolated are positioned on a straight line where the edge direction is positioned.
For ease of understanding, referring to fig. 14, fig. 14 is another schematic view of intersection point provided in the embodiment of the present application.
As shown in fig. 14, a denotes a first adjacent pixel point, B denotes a second adjacent pixel point, C denotes a third adjacent pixel point, and D denotes a fourth adjacent pixel point. The straight line of the edge direction of the pixel point (O point) to be interpolated is intersected with the straight lines of the first adjacent pixel point and the second adjacent pixel point to form a first intersection point. I.e. MN intersects AB, forming a first intersection point M.
As shown in fig. 14, a straight line in which the edge direction of the pixel point to be interpolated (O point) is located intersects a straight line in which the second adjacent pixel point and the fourth adjacent pixel point are located, forming a second intersection point. I.e. MN intersects BD, forming a second intersection point N.
In yet another possible implementation manner, for each pixel point to be interpolated, the first intersection point is determined according to the edge direction of the pixel point to be interpolated, the first adjacent pixel point and the third adjacent pixel point. The first intersection point and the pixel point to be interpolated are positioned on a straight line where the edge direction is positioned.
For ease of understanding, referring to fig. 15, fig. 15 is a schematic view of another intersection point provided in the embodiment of the present application.
As shown in fig. 15, a denotes a first adjacent pixel point, B denotes a second adjacent pixel point, C denotes a third adjacent pixel point, and D denotes a fourth adjacent pixel point. The straight line of the edge direction of the pixel point (O point) to be interpolated is intersected with the straight lines of the first adjacent pixel point and the third adjacent pixel point to form a first intersection point. I.e. MN intersects the AC, forming a first intersection point M.
As shown in fig. 15, a straight line in which the edge direction of the pixel point to be interpolated (O point) is located intersects a straight line in which the third adjacent pixel point and the fourth adjacent pixel point are located, so as to form a second intersection point. I.e. MN intersects the CD, forming a second intersection point N.
In another possible implementation manner, for each pixel point to be interpolated, the first intersection point is determined according to the edge direction of the pixel point to be interpolated, the first adjacent pixel point and the third adjacent pixel point. The first intersection point and the pixel point to be interpolated are positioned on a straight line where the edge direction is positioned.
For ease of understanding, referring to fig. 16, fig. 16 is a schematic view of another intersection point provided in the embodiment of the present application.
As shown in fig. 16, a denotes a first adjacent pixel point, B denotes a second adjacent pixel point, C denotes a third adjacent pixel point, and D denotes a fourth adjacent pixel point. The straight line of the edge direction of the pixel point (O point) to be interpolated is intersected with the straight lines of the first adjacent pixel point and the third adjacent pixel point to form a first intersection point. I.e. MN intersects the AC, forming a first intersection point M.
As shown in fig. 16, a straight line in which the edge direction of the pixel point to be interpolated (O point) is located intersects a straight line in which the first adjacent pixel point and the second adjacent pixel point are located, forming a second intersection point. I.e. MN intersects AB, forming a second intersection point N.
In yet another possible implementation manner, for each pixel point to be interpolated, the first intersection point is determined according to the edge direction of the pixel point to be interpolated, the third adjacent pixel point and the fourth adjacent pixel point. The first intersection point and the pixel point to be interpolated are positioned on a straight line where the edge direction is positioned.
For ease of understanding, referring to fig. 17, fig. 17 is a schematic view of another intersection point provided in the embodiment of the present application.
As shown in fig. 17, a denotes a first adjacent pixel point, B denotes a second adjacent pixel point, C denotes a third adjacent pixel point, and D denotes a fourth adjacent pixel point. The straight line where the edge direction of the pixel point (O point) to be interpolated is intersected with the straight line where the third adjacent pixel point and the fourth adjacent pixel point are located, so that a first intersection point is formed. I.e. MN intersects the CD, forming a first intersection point M.
As shown in fig. 17, a straight line in which the edge direction of the pixel point to be interpolated (O point) is located intersects a straight line in which the second adjacent pixel point and the fourth adjacent pixel point are located, forming a second intersection point. I.e. MN intersects BD, forming a second intersection point N.
In another possible implementation manner, for each pixel point to be interpolated, a first intersection point is determined according to an edge direction of the pixel point to be interpolated, a first adjacent pixel point and a second adjacent pixel point. The first intersection point and the pixel point to be interpolated are positioned on a straight line where the edge direction is positioned.
For ease of understanding, referring to fig. 18, fig. 18 is a schematic view of another intersection point provided in the embodiment of the present application.
As shown in fig. 18, a denotes a first adjacent pixel point, B denotes a second adjacent pixel point, C denotes a third adjacent pixel point, and D denotes a fourth adjacent pixel point. The straight line of the edge direction of the pixel point (O point) to be interpolated is intersected with the straight lines of the first adjacent pixel point and the second adjacent pixel point to form a first intersection point. I.e. MN intersects AB, forming a first intersection point M.
As shown in fig. 18, a straight line in which the edge direction of the pixel point to be interpolated (O point) is located intersects a straight line in which the third adjacent pixel point and the fourth adjacent pixel point are located, forming a second intersection point. I.e. MN intersects the CD, forming a second intersection point N.
It should be understood that the manner in which the intersection points are formed differs in different implementation scenarios; the foregoing is merely illustrative and the present application is not limited thereto.
S20222: and determining a first distance and a second distance according to the edge direction angle of the pixel point to be interpolated.
The first distance represents the distance between the pixel point to be interpolated and the first intersection point, and the second distance represents the distance between the pixel point to be interpolated and the second intersection point.
As shown in fig. 13, it is assumed that a coordinate system is established with a pixel to be interpolated (O-point) as a coordinate origin, a horizontal distance between the pixel to be interpolated (O-point) and an adjacent pixel at the upper left corner (i.e., a first adjacent pixel) is p, and a vertical distance between the pixel to be interpolated (O-point) and an adjacent pixel at the upper left corner (i.e., a first adjacent pixel) is q.
Assuming that the vertical distance between the first adjacent pixel point and the third adjacent pixel point is 1, the horizontal distance between the first adjacent pixel point and the second adjacent pixel point is 1, the vertical distance between the second adjacent pixel point and the fourth adjacent pixel point is 1, and the horizontal distance between the third adjacent pixel point and the fourth adjacent pixel point is 1.
Based on the distances set as described above, the coordinates of the first adjacent pixel point may be determined as A(-p, -q), the coordinates of the second adjacent pixel point as B(1-p, -q), the coordinates of the third adjacent pixel point as C(-p, 1-q), and the coordinates of the fourth adjacent pixel point as D(1-p, 1-q).
Assuming that the slope of the straight line in which the edge direction of the pixel point to be interpolated (O point) is located is k, the value of k is calculated by the following expression (4):

k = tanθ, (4)

In the above expression (4), k represents the slope of the straight line in which the edge direction of the pixel point to be interpolated (O point) is located, and θ is the edge direction angle of the pixel point to be interpolated (O point).
According to mathematical knowledge, the coordinates of the first intersection point M are (-p, -p·tanθ), and the coordinates of the second intersection point N are (1-p, (1-p)·tanθ).
As shown in fig. 13, a first distance between a pixel to be interpolated (O point) and a first intersection point M is denoted by r1, and a second distance between the pixel to be interpolated (O point) and a second intersection point N is denoted by r 2.
The values of r1 and r2 are calculated by the following expression (5):

r1 = p / cosθ, r2 = (1-p) / cosθ, (5)

In the above expression (5), r1 represents the first distance between the pixel point to be interpolated (O point) and the first intersection point M, and r2 represents the second distance between the pixel point to be interpolated (O point) and the second intersection point N.
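For a rough illustration of S20221 and S20222 (not part of the patent text), the following minimal Python sketch computes the intersection coordinates and the distances r1 and r2 from p and θ for the geometry of fig. 17; the function name and the assumption that θ lies in (-90°, 90°) are illustrative choices.

```python
import math

def intersections_and_distances(p, theta):
    """Sketch of S20221-S20222 (fig. 17 geometry): the edge line through the
    pixel point to be interpolated O (taken as the origin) has slope tan(theta)
    and meets the vertical line x = -p (through A and C) at M and the vertical
    line x = 1 - p (through B and D) at N. Assumes -pi/2 < theta < pi/2."""
    k = math.tan(theta)               # expression (4): slope of the edge line
    M = (-p, -p * k)                  # first intersection point
    N = (1.0 - p, (1.0 - p) * k)      # second intersection point
    r1 = math.hypot(*M)               # expression (5): r1 = p / cos(theta)
    r2 = math.hypot(*N)               # expression (5): r2 = (1 - p) / cos(theta)
    return M, N, r1, r2
```

For example, with p = 0.3 and θ = 30°, the sketch gives r1 ≈ 0.346 and r2 ≈ 0.808.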
S20223: the gray value of the first intersection point and the gray value of the second intersection point are calculated based on the gray value of each adjacent pixel point.
For example, the gray value of the first intersection is determined according to the gray value of the first adjacent pixel and the gray value of the third adjacent pixel.
Illustratively, a gray value of the first intersection point may be obtained by linearly interpolating between the gray value of the first adjacent pixel point and the gray value of the third adjacent pixel point.
For example, the gray value of the first intersection point may be calculated by a first linear interpolation polynomial as follows:
P_M = P_A + (P_C - P_A)·q1, (6)

In the above formula (6), P_M represents the gray value of the first intersection point M, P_A represents the gray value of the first adjacent pixel point A, P_C represents the gray value of the third adjacent pixel point C, and q1 represents the distance between the first intersection point M and the first adjacent pixel point A.
Wherein q1 can be calculated from the coordinates of the first intersection point M and the coordinates of the first adjacent pixel point A. For example, q1 = q - p·tanθ.
And determining the gray value of the second intersection point according to the gray value of the second adjacent pixel point and the gray value of the fourth adjacent pixel point.
Illustratively, a gray value of the second intersection point may be obtained by linearly interpolating between the gray value of the second adjacent pixel point and the gray value of the fourth adjacent pixel point.
For example, the gray value of the second intersection point may be calculated by a second linear interpolation polynomial, which is as follows:
P_N = P_B + (P_D - P_B)·q2, (7)

In the above formula (7), P_N represents the gray value of the second intersection point N, P_B represents the gray value of the second adjacent pixel point B, P_D represents the gray value of the fourth adjacent pixel point D, and q2 represents the distance between the second intersection point N and the second adjacent pixel point B.
Wherein q2 can be calculated from the coordinates of the second intersection point N and the coordinates of the second adjacent pixel point B. For example, q2 = q + (1-p)·tanθ.
S20224: and calculating the gray value of the pixel point to be interpolated according to the first distance, the second distance, the gray value of the first intersection point and the gray value of the second intersection point.
Illustratively, linear interpolation is performed between the gray value of the first intersection point and the gray value of the second intersection point, so that the gray value of the pixel point to be interpolated can be obtained.
For example, the gray value of the pixel point to be interpolated may be calculated by a third linear interpolation polynomial, which is as follows:

P_O = (r2·P_M + r1·P_N) / (r1 + r2), (8)

In the above formula (8), P_O represents the gray value of the pixel point to be interpolated (O point), r2 represents the second distance between the pixel point to be interpolated (O point) and the second intersection point N, P_M represents the gray value of the first intersection point M, r1 represents the first distance between the pixel point to be interpolated (O point) and the first intersection point M, and P_N represents the gray value of the second intersection point N.
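As a minimal sketch of S20223 and S20224 (not part of the patent text; the function name and signature are illustrative, and θ is assumed to lie in (-90°, 90°)), formulas (5) to (8) for the fig. 17 geometry can be combined as follows:

```python
import math

def edge_directed_gray_value(P_A, P_B, P_C, P_D, p, q, theta):
    """Sketch of S20223-S20224 (fig. 17 geometry): gray value of the pixel
    point to be interpolated O from the gray values of its four adjacent
    pixel points A, B, C, D, the offsets p and q, and the edge direction
    angle theta."""
    t = math.tan(theta)
    q1 = q - p * t                    # distance from M to A along line AC
    q2 = q + (1.0 - p) * t            # distance from N to B along line BD
    P_M = P_A + (P_C - P_A) * q1      # formula (6): gray value of M
    P_N = P_B + (P_D - P_B) * q2      # formula (7): gray value of N
    r1 = p / math.cos(theta)          # expression (5): distance O-M
    r2 = (1.0 - p) / math.cos(theta)  # expression (5): distance O-N
    # formula (8): linear interpolation between M and N, the nearer
    # intersection point receiving the larger weight
    return (r2 * P_M + r1 * P_N) / (r1 + r2)
```

Note that for θ = 0 the result reduces to ordinary bilinear interpolation of the four adjacent pixel points.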
Then, in the edge direction of each pixel point to be interpolated, interpolation is performed based on the calculated gray value of the pixel point to be interpolated, so as to obtain the target image.
In this implementation manner, the gray value of each pixel point to be interpolated is accurately calculated according to the gray value of each adjacent pixel point, the edge direction of the pixel point to be interpolated, and the edge direction angle of the pixel point to be interpolated. Therefore, when interpolation is carried out based on this gray value, the continuity of the interpolated image is higher, the interpolation loss is smaller, and more image details are retained, so that the finally obtained target image is clear, lossless, and of high quality.
It should be noted that the initial gray value of the pixel point to be interpolated (i.e., the value determined in step S102) is obtained by a conventional interpolation algorithm. In order to obtain more accurate gradient directions, edge directions, and gray values of the pixel points to be interpolated, the gray values of the pixel points to be interpolated may be calculated iteratively until the edge direction angle θ converges. Interpolation is then performed according to the gray value of the pixel point to be interpolated obtained when the edge direction angle θ converges, so as to obtain the target image. Through continuous iteration, the determined gradient direction, edge direction, and gray value of the pixel point to be interpolated become more accurate, so that when interpolation is carried out based on the iterated edge direction and gray value, the details retained in the interpolated image are more accurate and the quality of the target image is further improved.
In view of this, the image processing method provided in the embodiment of the present application may further include S401 to S407. It should be noted that S401 to S407 may be performed after the above step S202.
Referring to fig. 19, fig. 19 is a flowchart of an image processing method according to an embodiment of the present application. As shown in fig. 19, the image processing method includes S401 to S407.
S401: and determining the gray value of the pixel to be interpolated as the initial gray value of the pixel to be interpolated for each pixel to be interpolated.
Taking one round of the execution process as an example, the gray value of the pixel point to be interpolated determined in the above step S202 is taken as the initial gray value of the pixel point to be interpolated. The process of calculating the gray value of the pixel point to be interpolated is then executed again based on this initial gray value, so as to calculate a new gray value of the pixel point to be interpolated.
S402: a plurality of gray gradient components of the pixel to be interpolated are calculated based on the initial gray value and gray values of a plurality of neighboring pixels.
And calculating a plurality of gray gradient components of the pixel point to be interpolated based on the redetermined initial gray value of the pixel point to be interpolated and the gray values of a plurality of adjacent pixel points. For the specific calculation process, reference is made to the description in step S301, and this is not repeated here.
S403: and determining the gray gradient of the pixel point to be interpolated based on the gray gradient components.
The vector synthesis is performed on a plurality of gray gradient components of the pixel to be interpolated to obtain gray gradients of the pixel to be interpolated. For specific implementation, reference may be made to the description in step S301, which is not repeated here.
S404: and determining the edge direction of the pixel point to be interpolated according to the gray gradient.
Illustratively, the gradation gradient determined in step S403 includes a gradient direction, which is an updated gradient direction. And determining a new edge direction of the pixel point to be interpolated based on the updated gradient direction. The new edge direction is perpendicular to the updated gradient direction.
S405: and determining the edge direction angle of the pixel point to be interpolated.
Illustratively, a gradient direction angle of the pixel to be interpolated is determined, and an edge direction angle of the pixel to be interpolated is determined according to the gradient direction angle of the pixel to be interpolated. For specific implementation, reference is made to the description in step S2021, which is not repeated here.
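As a rough illustration of S403 to S405 (not part of the patent text), the following Python sketch synthesizes the gradient from a horizontal component gx and a vertical component gy, a two-component simplification of the multi-component synthesis described above (the names are assumptions), and obtains the edge direction angle from the perpendicularity between the edge direction and the gradient direction:

```python
import math

def edge_direction_angle(gx, gy):
    """Sketch of S403-S405: synthesize the gray gradient from a horizontal
    component gx and a vertical component gy, take its direction angle, and
    rotate by 90 degrees to obtain the edge direction angle, since the edge
    direction is perpendicular to the gradient direction."""
    gradient_angle = math.atan2(gy, gx)          # gradient direction angle
    edge_angle = gradient_angle + math.pi / 2.0  # edge direction angle
    magnitude = math.hypot(gx, gy)               # gradient magnitude
    return magnitude, gradient_angle, edge_angle
```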
S406: and calculating the gray value of the pixel to be interpolated according to the gray value of each adjacent pixel, the edge direction of the pixel to be interpolated and the edge direction angle of the pixel to be interpolated.
Specifically, the implementation process may refer to the description in the above step S2022. It should be noted that, unlike in step S2022, the edge direction and the edge direction angle of the pixel point to be interpolated in step S406 are determined according to the redetermined initial gray value of the pixel point to be interpolated, that is, the edge direction and the edge direction angle used in step S406 are the updated ones.
S407: repeating the steps, when the edge direction angle converges, interpolating on the edge direction of each pixel point to be interpolated based on the gray value of the pixel point to be interpolated to obtain a target image, including: and interpolating according to the gray value of the pixel point to be interpolated, which is obtained when the edge direction angle converges, in the edge direction of each pixel point to be interpolated, so as to obtain a target image.
Illustratively, edge direction angle convergence refers to the edge direction angle tending toward some preset angle.
In one possible implementation manner, if the edge direction angle does not converge, the steps S401 to S406 are repeatedly performed, that is, the gray value of the pixel to be interpolated calculated in the step S406 is continuously determined as the new initial gray value of the pixel to be interpolated. And updating the gradient direction, the edge direction, the gray value of the pixel to be interpolated and the like based on the new initial gray value of the pixel to be interpolated which is determined again until the edge direction angle converges.
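As a minimal sketch of the S401 to S407 loop (not part of the patent text), the iteration can be organized as follows; the helper callables compute_edge_angle (standing in for S402 to S405) and compute_gray_value (standing in for S406), the convergence threshold eps, and the iteration cap max_iters are all assumptions.

```python
def iterate_gray_value(initial_gray, neighbour_grays,
                       compute_edge_angle, compute_gray_value,
                       eps=1e-3, max_iters=20):
    """Sketch of S401-S407: repeatedly update the edge direction angle and the
    gray value of the pixel point to be interpolated until the edge direction
    angle converges (changes by less than eps between iterations)."""
    gray = initial_gray
    prev_angle = None
    angle = None
    for _ in range(max_iters):
        angle = compute_edge_angle(gray, neighbour_grays)        # S402-S405 (assumed helper)
        gray = compute_gray_value(gray, neighbour_grays, angle)  # S406 (assumed helper)
        if prev_angle is not None and abs(angle - prev_angle) < eps:
            break                                                # S407: converged
        prev_angle = angle
    return gray, angle
```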
In another possible implementation, the edge direction angle converges. For ease of understanding, referring to fig. 20, fig. 20 is a schematic view of edge direction angle convergence according to an embodiment of the present application.
As shown in fig. 20, the horizontal axis represents the number of iterations, the vertical axis represents the angle of the edge direction angle, the dotted line represents the preset angle corresponding to the convergence of the edge direction angle, and the curve represents the actual angle of the edge direction angle changing at different stages.
When the edge direction angle converges, the gray value of the pixel point to be interpolated obtained at convergence is the final gray value, that is, the gray value ultimately used for interpolation. Interpolation is then performed in the edge direction of each pixel point to be interpolated according to the gray value of the pixel point to be interpolated obtained when the edge direction angle converges, so as to obtain the target image.
For example, when the scaling instruction is an amplifying instruction, in the edge direction of each pixel to be interpolated, up-interpolation is performed according to the gray value of the pixel to be interpolated obtained when the edge direction angle converges, so as to obtain the target image.
For another example, when the scaling instruction is a reduction instruction, in the edge direction of each pixel point to be interpolated, down interpolation is performed according to the gray value of the pixel point to be interpolated obtained when the edge direction angle converges, so as to obtain the target image.
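For context (not part of the patent text), both the enlargement and the reduction cases need the positions of the pixel points to be interpolated on the original grid. A common convention, shown in the illustrative sketch below, maps each target pixel back to source coordinates and takes the fractional parts as the offsets p and q used above; the mapping convention and all names are assumptions.

```python
def target_to_source(x_t, y_t, scale):
    """Illustrative mapping from a target pixel to the original grid, usable
    for enlargement (scale > 1) and reduction (scale < 1)."""
    x_s = x_t / scale
    y_s = y_t / scale
    ix, iy = int(x_s), int(y_s)    # integer part: position of the upper-left neighbour A
    p, q = x_s - ix, y_s - iy      # fractional offsets of the pixel point to be interpolated
    return ix, iy, p, q
```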
In the implementation manner, the determined gradient direction, the edge direction and the gray value of the pixel point to be interpolated are more accurate due to continuous iteration, so that the detail reserved by the image after interpolation can be more accurate when the interpolation is carried out based on the edge direction and the gray value after the iteration, and the quality of the target image is further improved.
For ease of understanding, please refer to fig. 21, fig. 21 is a schematic view of a scene of an enlarged image using the image processing method provided in the present application.
As shown in fig. 21, assume that a certain image in the thumbnail list is displayed as the original image, that is, the image 301 shown in fig. 21 (a). When the user wants to zoom in on the partial image 302 in the image 301, the user can touch the area where the partial image 302 is located with two fingers at the same time and slide them outward, so that the partial image 302 is enlarged. As shown in fig. 21 (b), the image 303 is the enlarged image of the partial image 302.
Because the image processing method provided by the application takes the edge characteristics of the image into account during processing, the processed target image can reflect the real contour information of the image, the quality of the processed image is improved, and the processed image is ensured to be clear and lossless. Compared with the related art, the method effectively avoids jagged (sawtooth) artifacts in the processed image.
Examples of the image processing method provided in the embodiments of the present application are described above in detail. It will be appreciated that the electronic device, in order to achieve the above-described functions, includes corresponding hardware and/or software modules that perform the respective functions. Those of skill in the art will readily appreciate that the elements and algorithm steps of the examples described in connection with the embodiments disclosed herein may be implemented as hardware or combinations of hardware and computer software. Whether a function is implemented as hardware or computer software driven hardware depends upon the particular application and design constraints imposed on the solution. Those skilled in the art may implement the described functionality using different approaches for each particular application in conjunction with the embodiments, but such implementation is not to be considered as outside the scope of this application.
The embodiment of the present application may divide the electronic device into functional modules according to the above method example. For example, each functional module may be divided corresponding to each function, such as a first determining unit, a second determining unit, an interpolation unit, and the like; or two or more functions may be integrated into one module. The integrated module may be implemented in the form of hardware or in the form of a software functional module. It should be noted that the division of the modules in the embodiment of the present application is schematic and is merely a logical function division; other division manners may be used in actual implementation.

It should be noted that all relevant contents of the steps related to the above method embodiment may be referred to the functional description of the corresponding functional module, which is not repeated herein.
The electronic device provided in this embodiment is configured to execute the above-described image processing method, and therefore the same effects as those of the above-described implementation method can be achieved.
In the case where an integrated unit is employed, the electronic device may further include a processing module, a storage module, and a communication module. The processing module may be used to control and manage the actions of the electronic device. The storage module may be used to support the electronic device in storing program code, data, and the like. The communication module may be used to support communication between the electronic device and other devices.
The processing module may be a processor or a controller, which may implement or execute the various exemplary logic blocks, modules, and circuits described in connection with this disclosure. The processor may also be a combination that implements computing functions, for example, a combination of one or more microprocessors, or a combination of a digital signal processor (digital signal processor, DSP) and a microprocessor, and the like. The storage module may be a memory. The communication module may be a radio frequency circuit, a Bluetooth chip, a WiFi chip, or another device that interacts with other electronic devices.
In one embodiment, when the processing module is a processor and the storage module is a memory, the electronic device according to this embodiment may be a device having the structure shown in fig. 3.
The present application also provides a computer-readable storage medium in which a computer program is stored, which when executed by a processor, causes the processor to execute the image processing method of any one of the above embodiments.
The present application also provides a computer program product which, when run on a computer, causes the computer to perform the above-mentioned related steps to implement the image processing method in the above-mentioned embodiments.
The embodiment of the application also provides a chip. Referring to fig. 22, fig. 22 is a schematic structural diagram of a chip according to an embodiment of the present application. The chip shown in fig. 22 may be a general-purpose processor or a special-purpose processor. The chip includes a processor 410. Wherein the processor 410 is configured to perform the image processing method of any of the above embodiments.
Optionally, the chip further includes a transceiver 420, and the transceiver 420 is configured to operate under the control of the processor 410 to support the communication device in executing the foregoing technical solution.
Optionally, the chip shown in fig. 22 may further include: a storage medium 430.
It should be noted that the chip shown in fig. 22 may be implemented using the following circuits or devices: one or more field programmable gate arrays (field programmable gate array, FPGA), programmable logic devices (programmable logic device, PLD), controllers, state machines, gate logic, discrete hardware components, any other suitable circuit or combination of circuits capable of performing the various functions described throughout this application.
The electronic device, the computer readable storage medium, the computer program product or the chip provided in this embodiment are used to execute the corresponding method provided above, so that the beneficial effects thereof can be referred to the beneficial effects in the corresponding method provided above, and will not be described herein.
It will be appreciated by those skilled in the art that, for convenience and brevity of description, only the above-described division of the functional modules is illustrated, and in practical application, the above-described functional allocation may be performed by different functional modules according to needs, i.e. the internal structure of the apparatus is divided into different functional modules to perform all or part of the functions described above.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of modules or units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another apparatus, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and the parts shown as units may be one physical unit or a plurality of physical units, may be located in one place, or may be distributed in a plurality of different places. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a readable storage medium. Based on such understanding, the technical solution of the embodiments of the present application may be essentially or a part contributing to the prior art or all or part of the technical solution may be embodied in the form of a software product stored in a storage medium, including several instructions to cause a device (may be a single-chip microcomputer, a chip or the like) or a processor (processor) to perform all or part of the steps of the methods of the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read Only Memory (ROM), a random access memory (random access memory, RAM), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
The foregoing is merely specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily think about changes or substitutions within the technical scope of the present application, and the changes and substitutions are intended to be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (12)

1. An image processing method, comprising:
determining the edge direction of each pixel point to be interpolated according to the gradient direction of each pixel point to be interpolated corresponding to the original image, wherein the edge direction is perpendicular to the gradient direction;
determining the gray value of each pixel point to be interpolated;
and interpolating on the edge direction of each pixel point to be interpolated based on the gray value of the pixel point to be interpolated to obtain a target image corresponding to the original image.
2. The image processing method according to claim 1, wherein the method further comprises:
and determining the gradient direction of each pixel point to be interpolated corresponding to the original image.
3. The image processing method according to claim 2, wherein the determining a gradient direction of each pixel to be interpolated corresponding to the original image includes:
determining a gray gradient component of each pixel point to be interpolated, wherein the gray gradient component indicates the gray gradient of the pixel point to be interpolated in a first direction, and the first direction is the direction in which the adjacent pixel points of the pixel point to be interpolated point to the pixel point to be interpolated;
vector synthesis is carried out on a plurality of gray gradient components of each pixel point to be interpolated, so that gray gradient of each pixel point to be interpolated is obtained, and the gray gradient comprises gradient directions.
4. The image processing method according to claim 3, wherein the determining the gray gradient component of each pixel to be interpolated includes:
determining a plurality of adjacent pixel points adjacent to the pixel points to be interpolated in the original image and gray values of each adjacent pixel point aiming at each pixel point to be interpolated;
and calculating a plurality of gray gradient components of the pixel point to be interpolated based on the initial gray value of the pixel point to be interpolated and the gray values of a plurality of adjacent pixel points.
5. The image processing method according to claim 4, wherein the determining the gray value of each pixel to be interpolated includes:
determining an edge direction angle of each pixel point to be interpolated;
and calculating the gray value of each pixel point to be interpolated according to the gray value of each adjacent pixel point, the edge direction of each pixel point to be interpolated and the edge direction angle of each pixel point to be interpolated.
6. The image processing method according to claim 5, wherein the calculating the gray value of each pixel to be interpolated according to the gray value of each adjacent pixel, the edge direction of each pixel to be interpolated, and the edge direction angle of each pixel to be interpolated comprises:
For each pixel point to be interpolated, determining a first intersection point and a second intersection point according to the edge direction of the pixel point to be interpolated and the plurality of adjacent pixel points, wherein the first intersection point, the second intersection point and the pixel point to be interpolated are collinear;
determining a first distance and a second distance according to the edge direction angle of the pixel point to be interpolated, wherein the first distance represents the distance between the pixel point to be interpolated and the first intersection point, and the second distance represents the distance between the pixel point to be interpolated and the second intersection point;
calculating the gray values of the first intersection point and the second intersection point based on the gray value of each adjacent pixel point;
and calculating the gray value of the pixel point to be interpolated according to the first distance, the second distance, the gray value of the first intersection point and the gray value of the second intersection point.
7. The image processing method according to claim 5, wherein the determining the edge direction angle of each pixel to be interpolated includes:
determining a gradient direction angle of each pixel point to be interpolated, wherein the gradient direction angle is an included angle between the pixel point to be interpolated and the gradient direction in the horizontal direction;
And determining the edge direction angle of each pixel point to be interpolated according to the gradient direction angle of each pixel point to be interpolated.
8. The image processing method according to any one of claims 1 to 7, characterized in that the method further comprises:
for each pixel point to be interpolated, determining the gray value of the pixel point to be interpolated as the initial gray value of the pixel point to be interpolated;
calculating a plurality of gray gradient components of the pixel point to be interpolated based on the initial gray value and gray values of a plurality of adjacent pixel points;
determining the gray gradient of the pixel point to be interpolated based on the gray gradient components;
determining the edge direction of the pixel point to be interpolated according to the gray gradient;
determining an edge direction angle of the pixel point to be interpolated;
calculating the gray value of each pixel point to be interpolated according to the gray value of each adjacent pixel point, the edge direction of the pixel point to be interpolated and the edge direction angle of the pixel point to be interpolated;
repeating the steps, when the edge direction angle converges, interpolating on the edge direction of each pixel point to be interpolated based on the gray value of the pixel point to be interpolated to obtain a target image corresponding to the original image, wherein the steps comprise:
And interpolating according to the gray value of the pixel point to be interpolated, which is obtained when the edge direction angle converges, in the edge direction of each pixel point to be interpolated, so as to obtain the target image.
9. The image processing method according to any one of claims 1 to 8, further comprising, before determining an edge direction of each pixel to be interpolated from a gradient direction of each pixel to be interpolated corresponding to the original image:
detecting an operation for zooming;
determining a scale of the original image in response to the operation;
and determining each pixel point to be interpolated corresponding to the original image according to the scaling and the interpolation algorithm.
10. An electronic device, comprising: one or more processors; one or more memories; the memory stores one or more programs that, when executed by the processor, cause the electronic device to perform the method of any of claims 1-9.
11. A chip, comprising: a processor for calling and running a computer program from a memory, causing an electronic device on which the chip is mounted to perform the method of any one of claims 1 to 9.
12. A computer readable storage medium, characterized in that the computer readable storage medium has stored therein a computer program which, when executed by a processor, causes the processor to perform the method of any of claims 1 to 9.



