CN115965533A - Image processing method and device, storage medium and terminal equipment - Google Patents


Publication number
CN115965533A
CN115965533A (application number CN202310002723.4A)
Authority
CN
China
Prior art keywords
interpolation, difference, result, orthogonal direction, adjusted
Legal status
Pending
Application number
CN202310002723.4A
Other languages
Chinese (zh)
Inventor
袁汝俊
沈珈立
罗小伟
Current Assignee
Spreadtrum Communications Shanghai Co Ltd
Original Assignee
Spreadtrum Communications Shanghai Co Ltd
Application filed by Spreadtrum Communications Shanghai Co Ltd
Priority to CN202310002723.4A
Publication of CN115965533A

Abstract

The application provides an image processing method and apparatus, a storage medium, and a terminal device. The image processing method includes: acquiring an input image; for each pixel to be interpolated in the input image, performing interpolation using neighboring pixels in a first orthogonal direction and a second orthogonal direction respectively to obtain interpolation results, and weighting the interpolation results using the adjusted interpolation weights corresponding to each orthogonal direction to obtain a final interpolation result, wherein the difference between the adjusted interpolation weights corresponding to each orthogonal direction is smaller than the difference between the initial interpolation weights corresponding to that direction; and obtaining an output image according to the final interpolation result and the input image. With this technical scheme, more detail can be preserved while the image is enlarged, improving image quality.

Description

Image processing method and device, storage medium and terminal equipment
Technical Field
The present application relates to the field of computer technologies, and in particular, to an image processing method and apparatus, a storage medium, and a terminal device.
Background
Conventional resolution conversion methods (such as the nearest-neighbor algorithm and the bicubic interpolation algorithm) can simply adjust the resolution of an image.
However, prior-art image processing algorithms suffer from loss of detail, jagged edges, and ringing (overshoot) at sharp edges.
Disclosure of Invention
The application provides an image processing method and apparatus, a storage medium, and a terminal device, which can retain more detail and improve image quality while enlarging an image.
In order to achieve the above purpose, the present application provides the following technical solutions:
in a first aspect, an image processing method is provided, including: acquiring an input image; for each pixel to be interpolated in the input image, performing interpolation using neighboring pixels in a first orthogonal direction and a second orthogonal direction respectively to obtain interpolation results, and weighting the interpolation results using the adjusted interpolation weights corresponding to each orthogonal direction to obtain a final interpolation result, wherein the difference between the adjusted interpolation weights corresponding to each orthogonal direction is smaller than the difference between the initial interpolation weights corresponding to that direction; and obtaining an output image according to the final interpolation result and the input image.
Optionally, the first orthogonal direction comprises two diagonal directions and the second orthogonal direction comprises the horizontal and vertical directions, and the performing interpolation using the neighboring pixels in the first orthogonal direction and the second orthogonal direction respectively to obtain interpolation results, and the weighting of the interpolation results using the adjusted interpolation weights corresponding to each orthogonal direction, include: performing interpolation using the neighborhood pixels in the two diagonal directions to obtain a first initial interpolation result, and weighting the first initial interpolation result using the adjusted interpolation weights corresponding to the two diagonal directions to obtain a first interpolation result, wherein the difference between the adjusted interpolation weights corresponding to the two diagonal directions is smaller than the difference between the initial interpolation weights corresponding to the two diagonal directions; obtaining an intermediate image according to the first interpolation result and the input image; and, for each pixel to be interpolated in the intermediate image, performing interpolation using the neighborhood pixels in the horizontal and vertical directions to obtain a second initial interpolation result, and weighting the second initial interpolation result using the adjusted interpolation weights corresponding to the horizontal and vertical directions to obtain a final interpolation result, wherein the difference between the adjusted interpolation weights corresponding to the horizontal and vertical directions is smaller than the difference between the initial interpolation weights corresponding to the horizontal and vertical directions.
Optionally, the adjusted interpolation weight corresponding to each orthogonal direction is calculated as follows: calculating the initial interpolation weights corresponding to the two sub-directions of each orthogonal direction and normalizing them to obtain normalized weights; calculating the difference between the normalized weights corresponding to the two sub-directions; adjusting the difference so that the adjusted difference is smaller than the difference before adjustment; and calculating the adjusted interpolation weights corresponding to the two sub-directions of each orthogonal direction using the adjusted difference.
Optionally, the adjusting the difference value includes: and adjusting the difference value by adopting a difference value mapping curve, wherein the value of the dependent variable is smaller than the value of the independent variable in the difference value mapping curve.
Optionally, before obtaining an output image according to the final interpolation result and the input image, the method further includes: adjusting the final interpolation result so that it falls within a target value range, or so that the deviation of the adjusted final interpolation result from the target range is smaller than that of the final interpolation result before adjustment.
Optionally, the upper limit of the target value range is the maximum pixel value of the neighborhood pixels, and the lower limit is the minimum pixel value of the neighborhood pixels; or the upper limit is the sum of the maximum pixel value of the neighborhood pixels and a first adjustment value, and the lower limit is the sum of the minimum pixel value of the neighborhood pixels and a second adjustment value.
Optionally, the obtaining an output image according to the final interpolation result and the input image includes: and rearranging the pixels according to the position of each pixel to be interpolated and the position of each pixel in the input image to obtain the output image.
In a second aspect, the present application further discloses an image processing apparatus, comprising: an acquisition module configured to acquire an input image; an interpolation module configured to, for each pixel to be interpolated in the input image, perform interpolation using neighboring pixels in a first orthogonal direction and a second orthogonal direction respectively to obtain interpolation results, and weight the interpolation results using the adjusted interpolation weights corresponding to each orthogonal direction to obtain a final interpolation result, wherein the difference between the adjusted interpolation weights corresponding to each orthogonal direction is smaller than the difference between the initial interpolation weights; and an output module configured to obtain an output image according to the final interpolation result and the input image.
In a third aspect, a computer-readable storage medium is provided, on which a computer program is stored, the computer program being executable by a processor for performing a method as provided in the first aspect.
In a fourth aspect, a computer program product is provided, comprising a computer program executable by a processor for performing the method provided in the first aspect.
In a fifth aspect, a communication system is provided, which includes the terminal device and the network device.
In a sixth aspect, an embodiment of the present application further provides a chip (or a data transmission apparatus), where the chip stores a computer program, and when the computer program is executed by the chip, the steps of the method are implemented.
In a seventh aspect, an embodiment of the present application further provides a system chip applied in a terminal, where the system chip includes at least one processor and an interface circuit, the interface circuit and the at least one processor are interconnected by a line, and the at least one processor is configured to execute instructions to perform the method provided in the first aspect.
Compared with the prior art, the technical scheme of the embodiment of the application has the following beneficial effects:
in the technical scheme of the application, for each pixel to be interpolated in an input image, interpolation is performed using neighboring pixels in a first orthogonal direction and a second orthogonal direction respectively to obtain interpolation results, and the interpolation results are weighted using the adjusted interpolation weights corresponding to each orthogonal direction to obtain a final interpolation result, wherein the difference between the adjusted interpolation weights corresponding to each orthogonal direction is smaller than the difference between the initial interpolation weights; and an output image is obtained according to the final interpolation result and the input image. By adjusting the interpolation weights used in directional interpolation, detail areas where the weights in the two orthogonal directions differ little do not suffer obvious detail loss after interpolation, which improves the quality of the interpolated image. In addition, the strength of the processing effect in detail areas can be adjusted flexibly, improving the flexibility of image processing.
Further, the final interpolation result is adjusted so that it falls within the target value range, or so that its deviation from the target range is smaller than that of the result before adjustment. This clipping scheme, which uses the pixel values of the neighborhood pixels of the point to be interpolated as reference, effectively ensures that the adjusted final interpolation result is not obviously over-bright or over-dark, so the ringing introduced by the interpolation process can be effectively controlled and the image quality further improved.
Drawings
Fig. 1 is a flowchart of an image processing method provided in an embodiment of the present application;
FIG. 2 is a flow chart of another image processing method provided by the embodiment of the application;
fig. 3 is a schematic diagram of a specific application scenario provided in an embodiment of the present application;
FIG. 4 is a schematic diagram of a mapping curve provided by an embodiment of the present application;
fig. 5 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application;
fig. 6 is a schematic diagram of a hardware structure of an image processing apparatus according to an embodiment of the present application.
Detailed Description
As described in the background, prior-art image processing algorithms suffer from loss of detail, jagged edges, and ringing (overshoot) at sharp edges.
In the technical scheme of the application, the interpolation weights used in directional interpolation are adjusted, so that detail areas where the weights in the two orthogonal directions differ little do not suffer obvious detail loss after interpolation, which improves the quality of the interpolated image. In addition, the processing effect in detail areas can be adjusted flexibly, improving the flexibility of image processing.
Furthermore, the clipping scheme that uses the pixel values of the neighborhood pixels of the point to be interpolated as reference effectively ensures that the adjusted final interpolation result is not obviously over-bright or over-dark, so the ringing introduced by the interpolation process can be effectively controlled and the image quality further improved.
In order to make the aforementioned objects, features and advantages of the present application more comprehensible, embodiments accompanying the present application are described in detail below.
The image processing method in the embodiment of the application can be used in image processing chips related to image acquisition, image post-processing, image display and the like, and can be particularly used in equipment such as a camera, an image processing platform, display equipment and the like.
Referring to fig. 1, the method provided by the present application includes:
step 101: acquiring an input image;
step 102: for each pixel to be interpolated in the input image, performing interpolation by using adjacent pixels in a first orthogonal direction and a second orthogonal direction respectively to obtain an interpolation result, and weighting the interpolation result by using an adjusted interpolation weight corresponding to each orthogonal direction respectively to obtain a final interpolation result, wherein the difference of the adjusted interpolation weights corresponding to each orthogonal direction is smaller than the difference of the initial interpolation weights corresponding to each orthogonal direction;
step 103: and obtaining an output image according to the final interpolation result and the input image.
It should be noted that the sequence numbers of the steps in this embodiment do not represent a limitation on the execution sequence of the steps.
It will be appreciated that in a specific implementation, the image processing method may be implemented in the form of a software program running on a processor integrated within a chip or chip module. The method may also be implemented by combining software and hardware, and the present application is not limited thereto.
In this embodiment, the input image is an image to be processed, that is, an image that needs to be amplified. The output image is an enlarged image, and the resolution of the output image is higher than that of the input image. Alternatively, the input image may be referred to as a Low-resolution (LR) image, and the output image may be referred to as a High-resolution (HR) image.
In order to enlarge the input image, it is necessary to insert pixels (i.e., pixels to be interpolated) into the input image and determine pixel values of the pixels to be interpolated. In a specific implementation of step 102, for each pixel to be interpolated, interpolation may be performed in a first orthogonal direction and a second orthogonal direction, respectively.
In a specific implementation, the first orthogonal directions are two diagonal directions, namely, 45 ° direction and 135 ° direction. The second orthogonal direction is the horizontal direction and the vertical direction, i.e., the 0 ° and 90 ° directions.
The specific interpolation process is shown in fig. 2.
In step 201, a first initial interpolation result is obtained by interpolating neighboring pixels in two diagonal directions, and the first initial interpolation result is weighted by using adjusted interpolation weights corresponding to the two diagonal directions, so as to obtain the first interpolation result, where a difference between the adjusted interpolation weights corresponding to the two diagonal directions is smaller than a difference between the initial interpolation weights corresponding to the two diagonal directions.
In step 202, an intermediate image is obtained from the first interpolation result and the input image.
In step 203, for each pixel to be interpolated in the intermediate image, performing interpolation using neighboring pixels in the horizontal direction and the vertical direction to obtain a second initial interpolation result, and weighting the second initial interpolation result using the adjusted interpolation weights corresponding to the horizontal direction and the vertical direction to obtain a final interpolation result, where a difference between the adjusted interpolation weights corresponding to the horizontal direction and the vertical direction is smaller than a difference between the initial interpolation weights corresponding to the horizontal direction and the vertical direction.
Referring to FIG. 3a, the interpolation process in the first orthogonal direction is described by taking the pixel to be interpolated (2i+1, 2j+1), i.e., the black pixel point in FIG. 3a, as an example.
Specifically, the input image IMGLR is image-interpolated using the neighborhood pixels of the pixel to be interpolated (2i+1, 2j+1) in the 45° and 135° directions, such as the four neighborhood pixels (2i, 2j), (2i+2, 2j), (2i, 2j+2), and (2i+2, 2j+2) shown in FIG. 3a, to obtain the first initial interpolation results I45 and I135 in the two directions. Specifically, I45 may be obtained by applying an existing interpolation algorithm to the neighborhood pixels in the 45° direction, and correspondingly I135 by applying an existing interpolation algorithm to the neighborhood pixels in the 135° direction.
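As a concrete illustration of this step, the sketch below computes the two first initial interpolation results with a plain two-point average along each diagonal. The patent only says "an existing interpolation algorithm" is used, so the averaging is an assumption for illustration; the coordinates follow the neighborhood layout described above.

```python
def diag_initial_interp(img, i, j):
    """First initial interpolation results for the pixel to be
    interpolated at (2i+1, 2j+1), using its four diagonal neighbors.
    A two-point average stands in for the unspecified 'existing
    interpolation algorithm' (assumption)."""
    # 45-degree diagonal passes through (2i+2, 2j) and (2i, 2j+2)
    i45 = (img[2 * i + 2][2 * j] + img[2 * i][2 * j + 2]) / 2.0
    # 135-degree diagonal passes through (2i, 2j) and (2i+2, 2j+2)
    i135 = (img[2 * i][2 * j] + img[2 * i + 2][2 * j + 2]) / 2.0
    return i45, i135
```

Any other two-point kernel (e.g., a cubic one using a wider neighborhood) would slot in the same way without changing the rest of the pipeline.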
It should be noted that specific implementations of the interpolation algorithm used to obtain the first initial interpolation result can be found in the prior art, and the application is not limited thereto.
The texture direction of the pixel to be interpolated is calculated using the neighborhood pixels, and the initial interpolation weights W45 and W135 corresponding to the first orthogonal direction are calculated respectively. Since W45 and W135 are computed independently, their sum may exceed 1, so the initial interpolation weights are normalized. The normalized interpolation weights W_cal_45 and W_cal_135 are:

W_cal_45 = W45 / (W45 + W135)

W_cal_135 = W135 / (W45 + W135)
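The normalization of the two independently computed directional weights W45 and W135 can be sketched directly (a real implementation would also guard against a zero sum, which is omitted here):

```python
def normalize_weights(w45, w135):
    """Normalize two independently computed directional interpolation
    weights so that the normalized pair sums to 1."""
    total = w45 + w135
    return w45 / total, w135 / total
```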
In step 201, the normalized interpolation weights corresponding to the two diagonal directions may be adjusted to obtain the adjusted interpolation weights. The adjusted interpolation weights W_adj_45 and W_adj_135 corresponding to the two diagonal directions sum to 1.

In particular, a mapping curve may be used to adjust the difference diff_cal_45-135 between the normalized interpolation weights W_cal_45 and W_cal_135. Referring to FIG. 4, which shows the difference mapping curve used for this adjustment: in the curve, the value of the dependent variable is smaller than the value of the independent variable. That is, the adjusted difference diff_adj_45-135 obtained through the curve is smaller than the original difference diff_cal_45-135:

diff_adj_45-135 = sign(diff_cal_45-135) × mapping(abs(diff_cal_45-135))
Further, since the adjusted weights sum to 1 and their difference equals the adjusted difference, the adjusted interpolation weights W_adj_45 and W_adj_135 are given by:

W_adj_45 = (1 + diff_adj_45-135) / 2

W_adj_135 = (1 - diff_adj_45-135) / 2

W_adj_45 + W_adj_135 = 1
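Putting the difference mapping and the weight reconstruction together, a minimal sketch follows. The quadratic curve mapping(d) = d·d on [0, 1] is an assumed example; the patent only requires that the curve's output not exceed its input, which the quadratic satisfies.

```python
import math

def adjust_weights(w_cal_a, w_cal_b, mapping=lambda d: d * d):
    """Shrink the normalized-weight difference through a mapping curve,
    then rebuild adjusted weights that still sum to 1.
    mapping(d) = d*d on [0, 1] is an assumed curve with mapping(d) <= d."""
    diff_cal = w_cal_a - w_cal_b                       # lies in [-1, 1]
    diff_adj = math.copysign(mapping(abs(diff_cal)), diff_cal)
    return (1.0 + diff_adj) / 2.0, (1.0 - diff_adj) / 2.0
```

Note how a small normalized difference (weakly directional texture) is shrunk further, while a difference near 1 (strong directional texture) is kept almost intact.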
In step 201, the first initial interpolation results I45 and I135 are combined with the adjusted interpolation weights W_adj_45 and W_adj_135 in a weighted sum to obtain the pixel value of the pixel to be interpolated (2i+1, 2j+1), i.e., the first interpolation result Iout_tmp_1:

Iout_tmp_1 = W_adj_45 × I45 + W_adj_135 × I135
In the embodiment of the application, the interpolation weights are calculated from the texture along each orthogonal direction, and the weights in the two directions are processed piecewise, so that when the weight difference is small (i.e., the texture of the input image has no definite direction) the difference is reduced further. This strategy prevents weakly directional texture from being strengthened, thereby avoiding detail blurring.
Further, the first interpolation result Iout_tmp_1 may be clipped so that the adjusted first interpolation result falls within the target value range, or so that its deviation from the target range is smaller than that of the result before adjustment.
Specifically, the amplitude of Iout_tmp_1 is limited with reference to the neighborhood pixels of the pixel to be interpolated in the 45° and 135° directions, yielding the adjusted first interpolation result Iout1 and thereby reducing the ringing introduced by the interpolation process. Iout_tmp_1 can be clipped in the following ways:
Mode 1: if the pixel values of the neighborhood pixels span the range [min, max], Iout_tmp_1 is clipped directly so that the adjusted first interpolation result Iout1 is confined to that interval.
Mode 2: Iout_tmp_1 is clipped so that the adjusted first interpolation result Iout1 is confined to the interval [min + unsht_range, max + ovsht_range].
Mode 3: the portion of Iout_tmp_1 that exceeds the range [min, max] is scaled by multiplying it by the factor scale_sht to obtain the adjusted first interpolation result Iout1. The parameter scale_sht may be preset.
Mode 4: mode 3 may be applied first, and then mode 2 is applied to the result to obtain the adjusted first interpolation result Iout1.
By clipping the first interpolation result Iout_tmp_1, the embodiment of the application prevents interpolated pixels from exceeding the value range of the neighboring pixels, thereby suppressing image ringing.
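The four amplitude-limiting modes above can be sketched as follows. The parameter names `unsht_range`, `ovsht_range`, and `scale_sht` come from the text; their default values here are assumptions, and the sign convention for `unsht_range` (negative to permit undershoot below min) is likewise assumed.

```python
def limit_amplitude(value, neighbors, mode=1,
                    unsht_range=0.0, ovsht_range=0.0, scale_sht=0.5):
    """Clip an interpolation result against the value range of its
    neighborhood pixels to suppress ringing (modes 1-4 from the text)."""
    lo, hi = min(neighbors), max(neighbors)
    if mode == 1:        # mode 1: hard clip to [min, max]
        return max(lo, min(value, hi))
    if mode == 2:        # mode 2: clip to the widened interval
        return max(lo + unsht_range, min(value, hi + ovsht_range))
    if mode == 3:        # mode 3: scale only the part outside [min, max]
        if value > hi:
            return hi + (value - hi) * scale_sht
        if value < lo:
            return lo - (lo - value) * scale_sht
        return value
    # mode 4: apply mode 3 first, then mode 2 to the scaled result
    scaled = limit_amplitude(value, neighbors, 3, scale_sht=scale_sht)
    return limit_amplitude(scaled, neighbors, 2, unsht_range, ovsht_range)
```

Mode 3 keeps a controlled fraction of the overshoot instead of flattening it, which is why combining it with mode 2 (mode 4) gives both a softer transition and a hard ceiling.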
In the specific implementation manner of step 202, the input image IMGLR and the interpolated pixel point Iout1 are arranged according to the corresponding pixel position shown in fig. 3a to obtain an intermediate image IMGmed.
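The arrangement of step 202 can be sketched as below; the layout (original pixels at even-even positions, diagonal interpolation results at odd-odd positions) follows the description of FIG. 3a, and since the figure itself is not reproduced here, the exact coordinate convention is an assumption.

```python
def assemble_intermediate(img_lr, diag_results):
    """Build the intermediate image IMGmed: input pixels go to even
    (row, col) positions, the diagonal interpolation results Iout1 to
    odd-odd positions; the remaining positions stay None until the
    horizontal/vertical pass of step 203 fills them."""
    h, w = len(img_lr), len(img_lr[0])
    img_med = [[None] * (2 * w) for _ in range(2 * h)]
    for r in range(h):
        for c in range(w):
            img_med[2 * r][2 * c] = img_lr[r][c]
    for (r, c), v in diag_results.items():
        img_med[r][c] = v   # (r, c) is expected to be odd-odd
    return img_med
```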
Similar to step 201, in a specific implementation of step 203, the pixel to be interpolated is interpolated in the 0° and 90° directions. Referring to FIG. 3b, the interpolation process in the second orthogonal direction is illustrated by taking the pixel to be interpolated (2i, 2j+1) as an example.
Specifically, the intermediate image IMGmed is image-interpolated using the neighborhood pixels of the pixel to be interpolated (2i, 2j+1) in the 0° and 90° directions, for example the four neighborhood pixels (2i, 2j), (2i, 2j+2), (2i-1, 2j+1), and (2i+1, 2j+1) shown in FIG. 3b, to obtain the second initial interpolation results I0 and I90 in the two directions. Specifically, I0 may be obtained by applying an existing interpolation algorithm to the neighborhood pixels in the 0° direction, and correspondingly I90 by applying an existing interpolation algorithm to the neighborhood pixels in the 90° direction.
The texture direction of the pixel to be interpolated is calculated using the neighborhood pixels, and the initial interpolation weights W0 and W90 corresponding to the second orthogonal direction are calculated respectively. Since W0 and W90 are computed independently, their sum may exceed 1, so the initial interpolation weights are normalized. The normalized interpolation weights W_cal_0 and W_cal_90 are:

W_cal_0 = W0 / (W0 + W90)

W_cal_90 = W90 / (W0 + W90)
In step 203, the normalized interpolation weights corresponding to the horizontal and vertical directions may be adjusted to obtain the adjusted interpolation weights. The adjusted interpolation weights W_adj_0 and W_adj_90 corresponding to the horizontal and vertical directions sum to 1.

In particular, a mapping curve may be used to adjust the difference diff_cal_0-90 between the normalized interpolation weights W_cal_0 and W_cal_90. Referring again to FIG. 4, which shows the difference mapping curve used for this adjustment: the value of the dependent variable is smaller than the value of the independent variable, so the adjusted difference diff_adj_0-90 obtained through the curve is smaller than the original difference diff_cal_0-90:

diff_adj_0-90 = sign(diff_cal_0-90) × mapping(abs(diff_cal_0-90))
Further, the adjusted interpolation weights W_adj_0 and W_adj_90 are given by:

W_adj_0 = (1 + diff_adj_0-90) / 2

W_adj_90 = (1 - diff_adj_0-90) / 2

W_adj_0 + W_adj_90 = 1
In step 203, the second initial interpolation results I0 and I90 are combined with the adjusted interpolation weights W_adj_0 and W_adj_90 in a weighted sum to obtain the pixel value of the pixel to be interpolated (2i, 2j+1), i.e., the final interpolation result Iout_tmp_2:

Iout_tmp_2 = W_adj_0 × I0 + W_adj_90 × I90
As in the diagonal pass, the interpolation weights are calculated from the texture along each orthogonal direction and the weights in the two directions are processed piecewise, so that when the weight difference is small (i.e., the texture of the input image has no definite direction) the difference is reduced further, preventing weakly directional texture from being strengthened and avoiding detail blurring.
Further, the final interpolation result Iout_tmp_2 may be clipped according to modes 1 to 4 above, so that the adjusted final interpolation result Iout2 falls within the target value range, or so that its deviation from the target range is smaller than that of the result before adjustment.
Further, the interpolation results Iout1 and Iout2 obtained in the above process and the pixel points in the input image IMGLR are arranged according to the corresponding positions shown in fig. 3, so as to obtain a final output image IMGHR.
For more specific implementation manners of the embodiments of the present application, please refer to the foregoing embodiments, which are not described herein again.
Referring to fig. 5, fig. 5 shows an image processing apparatus 50, and the image processing apparatus 50 may include:
an obtaining module 501, configured to obtain an input image;
an interpolation module 502, configured to: for each pixel to be interpolated in the input image, perform interpolation using neighboring pixels in the first orthogonal direction and the second orthogonal direction respectively to obtain interpolation results, and weight the interpolation results using the adjusted interpolation weights corresponding to each orthogonal direction to obtain a final interpolation result, wherein the difference between the adjusted interpolation weights corresponding to each orthogonal direction is smaller than the difference between the initial interpolation weights;
and an output module 503, configured to obtain an output image according to the final interpolation result and the input image.
In a specific implementation, the image processing apparatus 50 may correspond to a chip having an image processing function in a terminal device, such as a system-on-chip (SoC) or a baseband chip; to a chip module that includes a chip with an image processing function; to a chip module having a chip with a data processing function; or to a terminal device.
Other relevant descriptions about the image processing apparatus 50 can refer to the relevant descriptions in the foregoing embodiments, and are not repeated herein.
According to the embodiment of the application, the interpolation weight used in the directional interpolation is adjusted, so that a detailed area with small difference of the interpolation weight in the orthogonal direction can be ensured, and the problem of obvious detail loss can not occur after the interpolation processing, thereby improving the quality of an image after the interpolation. In addition, the image detail area processing effect strength can be flexibly adjusted, and the flexibility of image processing is improved.
Each module/unit included in each apparatus and product described in the above embodiments may be a software module/unit, or may also be a hardware module/unit, or may also be a part of a software module/unit and a part of a hardware module/unit. For example, for each apparatus and product applied to or integrated into a chip, each module/unit included in the apparatus and product may all be implemented by hardware such as a circuit, or at least a part of the modules/units may be implemented by a software program running on a processor integrated within the chip, and the remaining (if any) part of the modules/units may be implemented by hardware such as a circuit; for each device or product applied to or integrated with the chip module, each module/unit included in the device or product may be implemented by using hardware such as a circuit, and different modules/units may be located in the same component (e.g., a chip, a circuit module, etc.) or different components of the chip module, or at least some of the modules/units may be implemented by using a software program running on a processor integrated within the chip module, and the rest (if any) of the modules/units may be implemented by using hardware such as a circuit; for each apparatus and product applied to or integrated in the terminal device, each module/unit included in the apparatus and product may all be implemented by hardware such as a circuit, and different modules/units may be located in the same component (e.g., a chip, a circuit module, etc.) or different components in the terminal device, or at least part of the modules/units may be implemented by a software program running on a processor integrated inside the terminal device, and the rest (if any) part of the modules/units may be implemented by hardware such as a circuit.
An embodiment of the present application further discloses a storage medium, which is a computer-readable storage medium having a computer program stored thereon; when the computer program is executed, the steps of the methods shown in Figs. 1 to 3 may be performed. The storage medium may include a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, and the like. The storage medium may further include a non-volatile memory or a non-transitory memory, etc.
Referring to Fig. 6, an embodiment of the present application further provides a communication apparatus, whose hardware structure is shown in the figure. The apparatus includes a processor 601, a memory 602, and a transceiver 603.
The processor 601 may be a central processing unit (CPU), a microprocessor, an application-specific integrated circuit (ASIC), or one or more integrated circuits for controlling the execution of the programs of the present application. The processor 601 may also include multiple CPUs, and the processor 601 may be a single-core (single-CPU) processor or a multi-core (multi-CPU) processor. A processor herein may refer to one or more devices, circuits, or processing cores for processing data (e.g., computer program instructions).
The memory 602 may be a read-only memory (ROM) or other type of static storage device capable of storing static information and instructions, a random access memory (RAM) or other type of dynamic storage device capable of storing information and instructions, an electrically erasable programmable read-only memory (EEPROM), a CD-ROM or other optical disc storage (including compact discs, laser discs, digital versatile discs, Blu-ray discs, etc.), a magnetic disk storage medium or other magnetic storage device, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer; the embodiments of the present application place no limitation on this. The memory 602 may be separate (in which case it may be located outside or inside the device) or may be integrated with the processor 601. The memory 602 may store computer program code. The processor 601 is configured to execute the computer program code stored in the memory 602, thereby implementing the methods provided by the embodiments of the present application.
The processor 601, the memory 602 and the transceiver 603 are connected by a bus. The transceiver 603 is used to communicate with other devices or communication networks. Optionally, the transceiver 603 may include a transmitter and a receiver. The means in the transceiver 603 for performing the receiving function may be regarded as a receiver for performing the receiving step in the embodiments of the present application. The means for implementing the transmitting function in the transceiver 603 may be regarded as a transmitter for performing the steps of transmitting in the embodiments of the present application.
It should be understood that the term "and/or" herein merely describes an association relationship between associated objects, indicating that three relationships may exist; for example, "A and/or B" may mean: A exists alone, A and B exist simultaneously, or B exists alone. In addition, the character "/" herein indicates an "or" relationship between the associated objects.
The "plurality" appearing in the embodiments of the present application means two or more.
The terms "first", "second", etc. in the embodiments of the present application are used only to describe and distinguish between objects; they do not indicate order or impose any particular limitation on the number of devices, and do not constitute any limitation on the embodiments of the present application.
The term "connect" in the embodiments of the present application refers to various connection manners, such as direct connection or indirect connection, to implement communication between devices, which is not limited in this embodiment of the present application.
The above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, the above embodiments may be implemented in whole or in part in the form of a computer program product. The computer program product comprises one or more computer instructions or computer programs. When the computer instructions or the computer program are loaded or executed on a computer, the procedures or functions described in the embodiments of the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wired or wireless means.
It should be understood that, in the various embodiments of the present application, the sequence numbers of the above-mentioned processes do not imply any order of execution, and the order of execution of the processes should be determined by their functions and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
In the several embodiments provided in the present application, it should be understood that the disclosed method, apparatus, and system may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative: the division of units is only a division by logical function, and other divisions are possible in actual implementation; multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, apparatuses, or units, and may be electrical, mechanical, or in other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one position, or may be distributed on multiple network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may be physically included alone, or two or more units may be integrated into one unit. The integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional unit.
The integrated unit implemented in the form of a software functional unit may be stored in a computer readable storage medium. The software functional unit is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute some steps of the methods described in the embodiments of the present application.
Although the present application is disclosed above, the present application is not limited thereto. Various changes and modifications may be effected therein by one of ordinary skill in the pertinent art without departing from the scope or spirit of the present disclosure, and it is intended that the scope of the present disclosure be defined by the appended claims.

Claims (10)

1. An image processing method applied to a terminal device is characterized by comprising the following steps:
acquiring an input image;
for each pixel to be interpolated in the input image, performing interpolation by using neighboring pixels in a first orthogonal direction and a second orthogonal direction respectively to obtain an interpolation result, and weighting the interpolation result by using an adjusted interpolation weight corresponding to each orthogonal direction respectively to obtain a final interpolation result, wherein the difference of the adjusted interpolation weight corresponding to each orthogonal direction is smaller than the difference of the initial interpolation weight corresponding to each orthogonal direction;
and obtaining an output image according to the final interpolation result and the input image.
2. The image processing method according to claim 1, wherein the first orthogonal direction comprises two diagonal directions, the second orthogonal direction comprises a horizontal direction and a vertical direction, and the interpolating using neighboring pixels in the first orthogonal direction and the second orthogonal direction respectively to obtain interpolation results and weighting the interpolation results using the adjusted interpolation weight corresponding to each orthogonal direction comprises: performing interpolation using neighborhood pixels in the two diagonal directions to obtain a first initial interpolation result, and weighting the first initial interpolation result using the adjusted interpolation weights corresponding to the two diagonal directions to obtain a first interpolation result, wherein the difference of the adjusted interpolation weights corresponding to the two diagonal directions is smaller than the difference of the initial interpolation weights corresponding to the two diagonal directions;
obtaining an intermediate image according to the first interpolation result and the input image;
and for each pixel to be interpolated in the intermediate image, interpolating by using the neighborhood pixels in the horizontal direction and the vertical direction to obtain a second initial interpolation result, and weighting the second initial interpolation result by using the adjusted interpolation weights corresponding to the horizontal direction and the vertical direction to obtain a final interpolation result, wherein the difference of the adjusted interpolation weights corresponding to the horizontal direction and the vertical direction is smaller than the difference of the initial interpolation weights corresponding to the horizontal direction and the vertical direction.
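The two-pass structure of claim 2 (diagonal interpolation first, then horizontal/vertical interpolation on the intermediate image) might be organized as the following 2x upscaling skeleton. The loop layout and helper names are assumptions; boundary handling is delegated to the passed-in interpolators:

```python
import numpy as np

def upscale_2x(img, interp_diag, interp_hv):
    """Two-pass 2x upscaling skeleton (illustrative only).

    Pass 1: sites at the centers of 2x2 original cells are interpolated from
    diagonal neighbors of the output grid. Pass 2: the remaining new sites
    are interpolated from horizontal/vertical neighbors of the intermediate
    image. `interp_diag`/`interp_hv` are callables (array, y, x) -> value
    and are expected to clip indices at the image border.
    """
    h, w = img.shape
    out = np.zeros((2 * h, 2 * w), dtype=float)
    out[0::2, 0::2] = img                  # copy original pixels into place
    # pass 1: odd-odd sites, interpolated from diagonal neighbors
    for y in range(1, 2 * h, 2):
        for x in range(1, 2 * w, 2):
            out[y, x] = interp_diag(out, y, x)
    # pass 2: remaining sites (odd coordinate sum), from H/V neighbors
    for y in range(2 * h):
        for x in range(2 * w):
            if (y + x) % 2 == 1:
                out[y, x] = interp_hv(out, y, x)
    return out
```

In a real implementation the interpolators would apply the direction-weighted blending of claim 1; here they are kept abstract so the pass structure stands out.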
3. The image processing method according to claim 1, wherein the adjusted interpolation weight corresponding to each orthogonal direction is calculated by:
calculating initial interpolation weights corresponding to the two sub-directions of each orthogonal direction, and performing normalization processing to obtain normalized difference weights;
calculating the difference between the normalized difference weights corresponding to the two sub-directions;
adjusting the difference value so that the adjusted difference value is smaller than the difference value before adjustment;
and calculating the corresponding adjusted interpolation weight of the two sub-directions of each orthogonal direction by using the adjusted difference value.
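The weight-adjustment steps of claim 3 (normalize, take the difference, shrink it, recompute the weights) can be sketched as follows. The linear shrink factor `strength` is an assumption; the claim only requires the adjusted difference to be smaller than the difference before adjustment:

```python
def adjusted_weights(w1, w2, strength=0.5):
    """Compute adjusted interpolation weights for the two sub-directions.

    w1, w2:   initial interpolation weights (hypothetical inputs).
    strength: 0.0 keeps the original weights; 1.0 makes them equal.
    """
    total = w1 + w2
    if total == 0:                      # flat region: fall back to equal weights
        return 0.5, 0.5
    n1, n2 = w1 / total, w2 / total     # normalization: n1 + n2 == 1
    diff = n1 - n2                      # difference of normalized weights
    diff *= (1.0 - strength)            # adjusted |diff| < original |diff|
    return 0.5 + diff / 2, 0.5 - diff / 2
```

For example, initial weights of 0.9 and 0.1 with `strength=0.5` become 0.7 and 0.3: the gap shrinks from 0.8 to 0.4 while the weights still sum to 1.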
4. The image processing method of claim 3, wherein the adjusting the difference value comprises:
and adjusting the difference value by adopting a difference value mapping curve, wherein the value of the dependent variable is smaller than the value of the independent variable in the difference value mapping curve.
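One hypothetical difference mapping curve satisfying claim 4 is a power curve: for a normalized difference d in [0, 1] and an exponent gamma > 1, d**gamma <= d, so the mapped (dependent) value never exceeds the original (independent) value. The specific curve and exponent are assumptions, not prescribed by the claims:

```python
def map_difference(d, gamma=2.0):
    """Map a normalized weight difference d in [0, 1] to a smaller value.

    A power curve with gamma > 1 gives f(d) <= d on [0, 1] (equality only
    at the endpoints 0 and 1), shrinking small differences the most.
    """
    assert 0.0 <= d <= 1.0, "d is assumed to be a normalized difference"
    return d ** gamma
```

Larger gamma values shrink small differences more aggressively, which in this sketch corresponds to a stronger detail-preserving effect.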
5. The image processing method according to claim 1, wherein the obtaining an output image according to the final interpolation result and the input image further comprises:
adjusting the final interpolation result so that the adjusted final interpolation result is within a target value range, or so that the numerical difference between the adjusted final interpolation result and the target value range is smaller than the numerical difference between the final interpolation result before adjustment and the target value range.
6. The image processing method according to claim 5, wherein an upper limit of the target numerical range is a pixel maximum value of a neighborhood pixel, and a lower limit of the target numerical range is a pixel minimum value of a neighborhood pixel; or the upper limit of the target value range is the sum of the pixel maximum value of the neighborhood pixels and the first adjustment value, and the lower limit of the target value range is the sum of the pixel minimum value of the neighborhood pixels and the second adjustment value.
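The range limiting described in claims 5 and 6 can be sketched with NumPy's `clip`; the function name and the default adjustment values of zero are assumptions:

```python
import numpy as np

def clamp_result(value, neighbours, adj_hi=0.0, adj_lo=0.0):
    """Clamp an interpolated value to a range derived from its neighbors.

    Upper bound: max(neighbours) + adj_hi (the first adjustment value);
    lower bound: min(neighbours) + adj_lo (the second adjustment value).
    With adj_hi = adj_lo = 0 this is plain min/max neighborhood clamping.
    """
    hi = np.max(neighbours) + adj_hi
    lo = np.min(neighbours) + adj_lo
    return float(np.clip(value, lo, hi))
```

This guards against overshoot: a weighted interpolation result that falls outside the neighborhood's value range is pulled back toward it.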
7. The image processing method according to claim 1, wherein said obtaining an output image from the final interpolation result and the input image comprises:
and rearranging the pixels according to the position of each pixel to be interpolated and the position of each pixel in the input image to obtain the output image.
8. An image processing apparatus characterized by comprising:
the acquisition module is used for acquiring an input image;
the interpolation module is used for interpolating each pixel to be interpolated in the input image by using neighboring pixels in a first orthogonal direction and a second orthogonal direction respectively to obtain an interpolation result, and weighting the interpolation result by using an adjusted interpolation weight corresponding to each orthogonal direction respectively to obtain a final interpolation result, wherein the difference of the adjusted interpolation weight corresponding to each orthogonal direction is smaller than the difference of the initial interpolation weight corresponding to each orthogonal direction;
and the output module is used for obtaining an output image according to the final interpolation result and the input image.
9. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the image processing method according to any one of claims 1 to 7.
10. A terminal device comprising a memory and a processor, the memory having stored thereon a computer program operable on the processor, wherein the processor executes the computer program to perform the steps of the image processing method according to any one of claims 1 to 7.
CN202310002723.4A 2023-01-03 2023-01-03 Image processing method and device, storage medium and terminal equipment Pending CN115965533A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310002723.4A CN115965533A (en) 2023-01-03 2023-01-03 Image processing method and device, storage medium and terminal equipment


Publications (1)

Publication Number Publication Date
CN115965533A 2023-04-14

Family

ID=87361235

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310002723.4A Pending CN115965533A (en) 2023-01-03 2023-01-03 Image processing method and device, storage medium and terminal equipment

Country Status (1)

Country Link
CN (1) CN115965533A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination