CN116320204A - Image processing method and device - Google Patents

Image processing method and device

Info

Publication number
CN116320204A
Authority
CN
China
Prior art keywords
pixel
processed
image
pixel point
area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211727831.XA
Other languages
Chinese (zh)
Inventor
张仲华
胡均浩
葛维
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Unisoc Chongqing Technology Co Ltd
Original Assignee
Unisoc Chongqing Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Unisoc Chongqing Technology Co Ltd
Priority to CN202211727831.XA
Publication of CN116320204A

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N3/00 Scanning details of television systems; Combination thereof with generation of supply voltages
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects

Abstract

The embodiment of the application provides an image processing method and device, applied to the field of image processing. The method comprises the following steps: determining a target area meeting a saw-tooth condition in an image to be processed; determining, in the image to be processed, a region to be processed corresponding to the target area; performing low-pass filtering on the region to be processed to obtain a processed region corresponding to the region to be processed; and obtaining a processed image corresponding to the image to be processed according to the processed region. After the image areas meeting the saw-tooth condition are detected in the image to be processed, only those areas are processed, so that the processed picture is clearer and the processing effect is improved.

Description

Image processing method and device
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to an image processing method and apparatus.
Background
With the development of communication technology, progressive-scan televisions are becoming popular. To play television programs normally, a progressive-scan television must handle both interlaced and non-interlaced video. Interlaced video is video composed of interlaced field signals: for example, two field signals together form each image frame of the video, with each field signal containing half of the horizontal lines of a frame, e.g. one field signal containing all the odd lines and the other containing all the even lines. Non-interlaced video is video composed of progressive (line-by-line) signals. The process by which an electronic device converts received interlaced video into non-interlaced video is called de-interlacing. At present, when a television de-interlaces video containing moving objects, the resulting video includes jagged (saw-tooth) artifacts and the display effect is poor. If the television simply filters the video that contains the jaggies, the processed picture becomes blurred and the processing effect is still poor.
Therefore, how to process the image obtained after de-interlacing so as to improve the processing effect is a technical problem that currently needs to be solved.
Disclosure of Invention
The application discloses an image processing method and device. After the image areas meeting a saw-tooth condition in an image to be processed are detected, only those areas are processed, so that the processed picture is clearer and the processing effect is improved.
In a first aspect, an embodiment of the present application provides an image processing method. The method includes: determining a target area meeting a saw-tooth condition in an image to be processed; determining, in the image to be processed, a region to be processed corresponding to the target area; performing low-pass filtering on the region to be processed to obtain a processed region corresponding to the region to be processed; and determining a processed image corresponding to the image to be processed according to the processed region.
In the embodiment of the application, a target area meeting the saw-tooth condition is determined in the image to be processed, so the region to be processed corresponding to the target area can be determined; this region is then low-pass filtered to obtain the processed region, and finally the processed image corresponding to the image to be processed is determined from the processed region. Thus, after the image areas meeting the saw-tooth condition are detected in the image to be processed, only those areas are processed, which ensures that the images in the other areas remain clear while the affected areas are corrected, improving the overall processing effect for the image to be processed.
In an optional implementation manner, the image to be processed includes a first pixel point, a second pixel point, a third pixel point and a fourth pixel point, where the first pixel value is the pixel value corresponding to the first pixel point, the second pixel value is the pixel value corresponding to the second pixel point, the third pixel value is the pixel value corresponding to the third pixel point, and the fourth pixel value is the pixel value corresponding to the fourth pixel point. Determining the target area satisfying the saw-tooth condition in the image to be processed includes: when the difference between the first pixel value and the second pixel value, the difference between the second pixel value and the third pixel value, and the difference between the third pixel value and the fourth pixel value are all greater than a target threshold, determining that the area formed by the first, second, third and fourth pixel points is the target area. In this embodiment, whether an image area with cross-stripe jagged lines exists can be determined from the pixel values of four different pixel points in the image to be processed; the four pixel points serve as a detection window, so that multiple target areas in the image to be processed can be found and processed. Image areas that are already clear and free of cross-stripe jagged lines are thus not blurred by the low-pass filtering, which improves the processing effect.
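As a hedged illustration, the four-pixel detection rule described above could be sketched as follows in Python. The function names, grayscale input and absolute-difference comparison are assumptions; the claim does not fix how the difference is computed.

```python
import numpy as np

def is_target_window(p1, p2, p3, p4, threshold):
    # Saw-tooth condition for one vertical 4-pixel window: every
    # adjacent pair of pixel values differs by more than the threshold.
    return (abs(int(p1) - int(p2)) > threshold
            and abs(int(p2) - int(p3)) > threshold
            and abs(int(p3) - int(p4)) > threshold)

def find_target_areas(image, threshold):
    # Slide the 4x1 window down every column of a grayscale image and
    # collect the (row, col) coordinates of windows meeting the condition.
    height, width = image.shape
    targets = []
    for col in range(width):
        for row in range(height - 3):
            p1, p2, p3, p4 = image[row:row + 4, col]
            if is_target_window(p1, p2, p3, p4, threshold):
                targets.append((row, col))
    return targets
```

An alternating column such as 0, 255, 0, 255 (the signature of a comb artifact) triggers the condition, while flat or smoothly varying columns do not.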
In an optional embodiment, determining the target area that meets the saw-tooth condition in the image to be processed includes: when an external input signal indicating a first mode is received, and the difference between the first pixel value and the second pixel value, the difference between the second pixel value and the third pixel value, and the difference between the third pixel value and the fourth pixel value are all greater than the target threshold, determining that the area formed by the first, second, third and fourth pixel points is the target area. In this embodiment, whether an image area with cross-stripe jagged lines exists is judged not only from the pixel values within the window but also from the received external input signal. This improves the accuracy of the judgment, allows different images to be processed to be treated differently, and therefore improves the processing effect and the display effect of the processed image.
In an optional embodiment, the first, second, third and fourth pixel points are four adjacent pixel points in the same column of the image to be processed. In this embodiment, the pixel points used to determine whether the image to be processed contains an area with cross-stripe jagged lines may lie in the same column, i.e. the detection window may be applied column by column, which improves the accuracy of the determination and the effect of the image processing.
In an optional embodiment, the image to be processed further includes a fifth pixel point, a sixth pixel point, a seventh pixel point and an eighth pixel point, where the fifth pixel value is the pixel value corresponding to the fifth pixel point, the sixth pixel value is the pixel value corresponding to the sixth pixel point, the seventh pixel value is the pixel value corresponding to the seventh pixel point, and the eighth pixel value is the pixel value corresponding to the eighth pixel point. The method further includes: when the difference between the first and second pixel values, the difference between the second and third pixel values, and the difference between the third and fourth pixel values are all greater than the target threshold, and the difference between the fifth and sixth pixel values, the difference between the sixth and seventh pixel values, and the difference between the seventh and eighth pixel values are also all greater than the target threshold, determining that the area formed by the fifth, sixth, seventh and eighth pixel points is the target area. The fifth, sixth, seventh and eighth pixel points are four adjacent pixel points in the same column of the image to be processed, with the fifth pixel point adjacent to the first pixel point, the sixth adjacent to the second, the seventh adjacent to the third, and the eighth adjacent to the fourth.
In this embodiment, besides determining whether cross-stripe jagged lines exist from the pixel values of the four pixel points, the detection window may be enlarged so that the judgment uses the pixel values of all eight adjacent pixel points. Different images to be processed can thus be judged and processed in a targeted way, which improves the processing effect and the display effect.
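A minimal sketch of this widened, two-column variant, assuming (as above) absolute differences against a shared threshold, which the claim does not specify:

```python
def column_satisfies(column, threshold):
    # Per-column saw-tooth test: all three adjacent-pair differences in
    # a 4-value column must exceed the threshold.
    return all(abs(int(a) - int(b)) > threshold
               for a, b in zip(column, column[1:]))

def is_target_window_4x2(left_column, right_column, threshold):
    # Stricter eight-pixel variant: the window counts as a target area
    # only when BOTH adjacent columns satisfy the per-column condition.
    return (column_satisfies(left_column, threshold)
            and column_satisfies(right_column, threshold))
```

Requiring both columns to alternate makes the detector less likely to fire on isolated vertical edges that are not comb artifacts.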
In an optional implementation manner, determining, in the image to be processed, the region to be processed corresponding to the target area includes: acquiring a first preset parameter and a second preset parameter; determining the parameter corresponding to the first mode according to a correspondence between input modes and parameters; determining a first region to be processed according to the first preset parameter and the parameter corresponding to the first mode; acquiring action-range information corresponding to the second preset parameter and determining a second region to be processed according to that information; and determining the first and second regions to be processed as the region to be processed corresponding to the target area, the region to be processed including the target area. In this embodiment, the region to be processed may be the window area in which cross-stripe jagged lines were detected, or may include other areas; specifically, it may be determined from preset parameters held in several registers of the electronic device. Low-pass filtering is then applied only to the determined region, which improves the overall definition of the processed image and hence the processing effect.
In an optional embodiment, performing low-pass filtering on the region to be processed to obtain the corresponding processed region includes: processing the pixel point corresponding to the second pixel value according to the first pixel value, the second pixel value, the third pixel value and a pixel parameter, where the pixel parameter is determined based on the first, second, third and fourth pixel values. In this embodiment, during low-pass filtering of the region to be processed, the electronic device can update the pixel value of a pixel point from the pixel values of its adjacent pixel points, thereby completing the low-pass filtering, obtaining the processed image, and improving the display effect of the image to be processed.
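One plausible form of such an update, sketched under assumptions: the "pixel parameter" is modelled here as a fixed blending strength, and the neighbour weighting as a classic 1-2-1 vertical kernel; neither is specified by the claim.

```python
def smooth_second_pixel(p1, p2, p3, p4, strength=0.5):
    # Low-pass the second pixel of a vertical 4-pixel window: blend the
    # original value with a [1, 2, 1]/4 average of it and its vertical
    # neighbours.  `strength` stands in for the pixel parameter that the
    # claim derives from all four pixel values (p4 therefore goes unused
    # in this simplified sketch).
    lowpass = (p1 + 2 * p2 + p3) / 4.0
    return (1.0 - strength) * p2 + strength * lowpass
```

With strength 0 the pixel is unchanged; with strength 1 it is fully replaced by the smoothed value, pulling an alternating 0/255 comb pattern toward mid-gray.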
In a second aspect, an embodiment of the present application provides an image processing apparatus, including: a determining unit configured to determine, in an image to be processed, a target area satisfying a jaggy condition; the determining unit is further configured to determine, in the image to be processed, a region to be processed corresponding to the target region; the processing unit is used for carrying out low-pass filtering processing on the to-be-processed area to obtain a processed area corresponding to the to-be-processed area; the determining unit is further configured to determine a processed image corresponding to the image to be processed according to the processed region.
In one possible implementation manner, the image to be processed includes a first pixel point, a second pixel point, a third pixel point and a fourth pixel point, where the first pixel value is the pixel value corresponding to the first pixel point, the second pixel value is the pixel value corresponding to the second pixel point, the third pixel value is the pixel value corresponding to the third pixel point, and the fourth pixel value is the pixel value corresponding to the fourth pixel point. The determining unit, when determining the target area satisfying the saw-tooth condition in the image to be processed, is specifically configured to: when the difference between the first pixel value and the second pixel value, the difference between the second pixel value and the third pixel value, and the difference between the third pixel value and the fourth pixel value are all greater than a target threshold, determine that the area formed by the first, second, third and fourth pixel points is the target area.
In a possible implementation manner, the determining unit, when determining the target area that meets the saw-tooth condition in the image to be processed, is specifically configured to: when an external input signal indicating a first mode is received, and the difference between the first pixel value and the second pixel value, the difference between the second pixel value and the third pixel value, and the difference between the third pixel value and the fourth pixel value are all greater than the target threshold, determine that the area formed by the first, second, third and fourth pixel points is the target area.
In one possible implementation manner, the first pixel point, the second pixel point, the third pixel point, and the fourth pixel point are four adjacent pixel points in the same column in the image to be processed.
In one possible implementation manner, the image to be processed further includes a fifth pixel point, a sixth pixel point, a seventh pixel point and an eighth pixel point, where the fifth pixel value is the pixel value corresponding to the fifth pixel point, the sixth pixel value is the pixel value corresponding to the sixth pixel point, the seventh pixel value is the pixel value corresponding to the seventh pixel point, and the eighth pixel value is the pixel value corresponding to the eighth pixel point. The determining unit is further configured to: when the difference between the first and second pixel values, the difference between the second and third pixel values, and the difference between the third and fourth pixel values are all greater than the target threshold, and the difference between the fifth and sixth pixel values, the difference between the sixth and seventh pixel values, and the difference between the seventh and eighth pixel values are also all greater than the target threshold, determine that the area formed by the fifth, sixth, seventh and eighth pixel points is the target area. The fifth, sixth, seventh and eighth pixel points are four adjacent pixel points in the same column of the image to be processed, with the fifth pixel point adjacent to the first pixel point, the sixth adjacent to the second, the seventh adjacent to the third, and the eighth adjacent to the fourth.
In a possible implementation manner, the determining unit, when determining the region to be processed corresponding to the target area in the image to be processed, is specifically configured to: acquire a first preset parameter and a second preset parameter; determine the parameter corresponding to the first mode according to a correspondence between input modes and parameters; determine a first region to be processed according to the first preset parameter and the parameter corresponding to the first mode; acquire action-range information corresponding to the second preset parameter and determine a second region to be processed according to that information; and determine the first and second regions to be processed as the region to be processed corresponding to the target area, the region to be processed including the target area.
In a possible implementation manner, the processing unit, when performing low-pass filtering on the region to be processed to obtain the corresponding processed region, is specifically configured to: process the pixel point corresponding to the second pixel value according to the first pixel value, the second pixel value, the third pixel value and a pixel parameter, where the pixel parameter is determined based on the first, second, third and fourth pixel values.
In a third aspect, embodiments of the present application provide another image processing apparatus, including a processor; the processor is configured to perform the method of the first aspect.
In a fourth aspect, an embodiment of the present application provides a chip, where the chip includes a memory and a processor, where the memory stores a computer program, where the computer program includes program instructions, and where the processor is configured to execute the program instructions to implement a method as described in the first aspect.
In a fifth aspect, embodiments of the present application provide a chip module, the chip module including a communication interface and a chip, wherein: the communication interface is used for carrying out internal communication of the chip module or carrying out communication between the chip module and external equipment; the chip is for performing the method of the first aspect.
In a sixth aspect, embodiments of the present application provide a computer readable storage medium storing a computer program comprising program instructions that, when executed, cause the method of the first aspect to be implemented.
In a seventh aspect, embodiments of the present application provide a computer program product comprising a computer program or instructions which, when run on a computer, cause the computer to perform the method as described above in the first aspect.
Drawings
Fig. 1 is a schematic architecture diagram of an image processing system according to an embodiment of the present application.
Fig. 2 is a schematic view of the processing effect of an image processing method.
Fig. 3 is a flowchart of an image processing method according to an embodiment of the present application.
Fig. 4 is an image schematic diagram of determining a target area and a to-be-processed area in an image to be processed according to an embodiment of the present application.
Fig. 5 is another image schematic diagram of determining a target area and a to-be-processed area in an image to be processed according to an embodiment of the present application.
Fig. 6 is a schematic view of another image for determining a target area and a region to be processed in an image to be processed according to an embodiment of the present application.
Fig. 7 is a schematic view of still another image for determining a target area and a region to be processed in an image to be processed according to an embodiment of the present application.
Fig. 8 is a schematic diagram of a low-pass filtered image according to an embodiment of the present application.
Fig. 9 is another image schematic diagram of a low-pass filtering provided in an embodiment of the present application.
Fig. 10 is a schematic view of a processing effect of an image processing method according to an embodiment of the present application.
Fig. 11 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application.
Fig. 12 is a schematic structural diagram of another image processing apparatus according to an embodiment of the present application.
Fig. 13 is a schematic structural diagram of a chip module according to an embodiment of the present application.
Detailed Description
It should be understood that the terms "first," "second," and the like used in embodiments of the present application are for distinguishing between different objects, not for describing a particular order. The term "at least one" refers to one or more, and "a plurality" refers to two or more. "And/or" describes an association relationship between objects and indicates that three relationships may exist; for example, "A and/or B" may indicate three cases: A alone, both A and B, and B alone, where A and B may each be singular or plural. The character "/" generally indicates an "or" relationship between the surrounding objects.
The expression "at least one of the following items" or the like in the embodiments of the present application refers to any combination of those items, including any combination of a single item or plural items. For example, "at least one of a, b or c" may represent the following seven cases: a; b; c; a and b; a and c; b and c; a, b and c. Each of a, b and c may be an element or a set comprising one or more elements. In addition, "not less than" in the embodiments of the present application means "greater than or equal to".
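The seven cases listed above are exactly the non-empty subsets of {a, b, c} (2^3 - 1 = 7), which a few lines of Python can verify; the function name is illustrative only.

```python
from itertools import combinations

def at_least_one(items):
    # Enumerate every non-empty combination of the items, i.e. the
    # "at least one of a, b or c" cases described above.
    cases = []
    for size in range(1, len(items) + 1):
        cases.extend(combinations(items, size))
    return cases
```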
First, before explaining the embodiments of the present application in further detail, terms and terminology involved in the embodiments of the present application are explained for the convenience of those skilled in the art. The terms and terminology involved in the embodiments of the present application apply to the following explanation:
1. pixel (pixel)
A pixel, also called a pixel point or picture element, is one of the small squares into which an image is divided; a computer displays the whole image by recording the position, color, brightness, etc. of each pixel. Each pixel has a distinct location and an assigned color value, and together the colors and positions of the pixels determine how the image appears. A pixel can be considered an indivisible unit of the image: it cannot be cut into smaller elements and is rendered as a single color cell. Each raster (dot-matrix) image contains a certain number of pixels, which determine the size of the image presented on the screen.
In the application, the image to be processed comprises a plurality of pixel points. At least one pixel point forms an image area meeting the saw-tooth condition, which can present jagged stripe lines, and at least one pixel point likewise forms the region to be processed.
2. Pixel value
A pixel value is the value assigned by a computer when an original image is digitized; it represents the average luminance information of a small square of the original, or the average reflection (transmission) density information of that square. When a digital image is converted into a halftone image, the dot area ratio (dot percentage) is directly related to the pixel value (gray value) of the digital image, that is, the dot expresses the average brightness of a small square of the original through its size.
In the application, the image to be processed includes a plurality of pixel points, each pixel point corresponds to a pixel value, and whether the image to be processed includes a target area can be determined according to the pixel values of the plurality of pixel points in the image to be processed, wherein the target area can be an image area formed by the plurality of pixel points meeting the jaggy condition.
3. De-interlacing (De-interlace)
De-interlacing refers to converting an interlaced image signal into a progressive image signal. Interlaced scanning and progressive scanning are two different video formats, such as 1080i and 1080p; when an electronic device receives interlaced (i) video, de-interlacing is needed to convert it into a progressive-scan signal for playback.
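For illustration only (this is not the patent's method), the simplest de-interlacing approach, "weave", interleaves the two fields back into one progressive frame; on moving objects it is precisely this that produces the comb-like saw-tooth artifacts the application targets.

```python
import numpy as np

def weave_deinterlace(top_field, bottom_field):
    # Combine two interlaced fields into one progressive frame by
    # interleaving their lines; each field holds half the horizontal
    # lines of the full frame.
    height, width = top_field.shape
    frame = np.empty((2 * height, width), dtype=top_field.dtype)
    frame[0::2] = top_field      # lines 0, 2, 4, ...
    frame[1::2] = bottom_field   # lines 1, 3, 5, ...
    return frame
```

When the two fields were captured at different instants and the scene moved between them, the interleaved lines no longer align, which is the origin of the jagged stripe pattern.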
In the application, the image to be processed can be obtained through de-interlacing. Since the image may contain a moving object, the de-interlaced image may contain jagged stripe-shaped areas with a poor display effect; if the whole image is filtered, the whole image becomes blurred and the processing effect is poor.
4. Currently processed image (Current Frame)
The currently processed image is the image currently undergoing de-interlacing. Because a video is a sequence of consecutive image frames, the frames are processed in turn to obtain the de-interlaced video.
In the application, consecutive image frames in a video are de-interlaced in sequence to obtain a de-interlaced video. Each image frame of the de-interlaced video can serve as an image to be processed and can be checked for a target area meeting the saw-tooth condition for further processing.
5. Previous image (Previous Frame)
Since the video being processed consists of a sequence of consecutive image frames processed in order, the frame preceding the currently processed image (Current Frame) is the previous image (Previous Frame).
In the application, each image frame in the de-interlaced video is an image to be processed, and each frame is checked in turn for a target area meeting the saw-tooth condition; thus, before the currently processed image becomes the image to be processed, the previous image was the image to be processed.
6. Filtering (Filter)
Filtering is a fundamental operation in signal and image processing used to selectively extract aspects of an image; for example, filtering may remove noise or extract useful visual features. Suppressing the noise of a target image while preserving its detail as far as possible is an indispensable step in image preprocessing, and its effectiveness directly affects the reliability of subsequent image processing and analysis.
In the application, the region to be processed corresponding to the target area meeting the saw-tooth condition can be processed by a filter. For example, low-pass filtering yields the processed region and then the processed image, eliminating the cross-stripe jagged lines produced by de-interlacing so that the de-interlaced picture looks good while remaining clear.
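As a generic sketch (the patent does not specify the kernel), a one-dimensional low-pass filter over a line of pixel values can be as simple as a weighted moving average:

```python
import numpy as np

def lowpass_1d(values, kernel=(0.25, 0.5, 0.25)):
    # Smooth a 1-D line of pixel values (e.g. one column of the region
    # to be processed) by convolving with a small averaging kernel.
    return np.convolve(values, kernel, mode="same")
```

Applied to an alternating 0/255 column, this pulls the values toward mid-gray, which is the intended visual effect of removing a comb artifact.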
Referring to fig. 1, fig. 1 is a schematic system architecture diagram of an image processing system according to an embodiment of the present application. The image processing system may include a first electronic device 100 and a second electronic device 200.
It should be noted that the image processing system is not limited to two electronic devices; the number and form of devices shown in fig. 1 are illustrative and do not limit the embodiments of the present application, and more than two electronic devices may be included in practice. The image processing system shown in fig. 1 is described with two electronic devices (a first electronic device 100 and a second electronic device 200) as an example. The first electronic device 100 and the second electronic device 200 may communicate directly or indirectly. The first electronic device 100 may de-interlace video to obtain the image to be processed and send it to the second electronic device 200, which detects and filters it, e.g. detects whether the image to be processed contains a target area satisfying the saw-tooth condition and, if so, low-pass filters the region to be processed corresponding to that target area.
The electronic devices (such as the first electronic device 100 and the second electronic device 200) may be, but are not limited to, smart phones, tablet computers, notebook computers, desktop computers, smart speakers, smart watches, and the like. They may also be servers, for example an independent physical server, a server cluster or distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, content delivery networks (Content Delivery Network, CDN), big data and artificial-intelligence platforms. It should be noted that the electronic devices may provide a de-interlacing function and an image processing function, and optionally may have a video playing function. The first electronic device 100 and the second electronic device 200 may be the same electronic device or different electronic devices, which is not limited in this application.
Because de-interlacing of moving objects leaves cross-striped jagged image areas in the resulting video, the display effect is poor. Generally, the electronic device can perform global filtering on each image in the de-interlaced video, but the processed pictures are blurred, so the processing effect is poor and viewing of the video is affected. Referring to fig. 2, fig. 2 is a schematic diagram illustrating the processing effect of an image processing method. As shown in fig. 2, the left-hand image of fig. 2 is a clear image that has not undergone global low-pass filtering, and the right-hand image of fig. 2 has undergone global low-pass filtering; the edges of the letter "S" become blurred, as shown in the image areas at the white boxes in the left-hand and right-hand images of fig. 2, thereby affecting viewing of the image.
In this embodiment of the present application, the electronic device (e.g., the first electronic device 100 or the second electronic device 200) may determine a target area that satisfies the jaggy condition in the image to be processed and determine a to-be-processed area corresponding to the target area; the electronic device may then perform low-pass filtering on the to-be-processed area to obtain a processed area corresponding to the to-be-processed area, and determine a processed image corresponding to the image to be processed according to the processed area. In other words, the electronic device detects the image areas satisfying the jaggy condition in the image to be processed and processes only those areas, so that the other areas remain sharp while the jagged areas are repaired, improving the overall processing effect on the image to be processed.
Based on the above image processing system, the embodiment of the present application provides an image processing method. The image processing method in the embodiment of the present application may be implemented by the first electronic device 100 in the image processing system shown in fig. 1, by a chip in the first electronic device 100, by the second electronic device 200, or by a chip in the second electronic device 200. Referring to fig. 3, fig. 3 is a schematic flow chart of an image processing method according to an embodiment of the present application. The image processing method may include the following steps S301 to S304, where S represents Step:
S301, determining a target area satisfying the jaggy condition in the image to be processed.
In this embodiment of the present application, the image to be processed is an image frame of a video obtained by de-interlacing an interlaced video, and it includes a target area that satisfies the jaggy condition; the jaggy condition describes an image area in which artifacts such as cross-striped jagged lines are displayed in the image to be processed. The target area is the image area satisfying the jaggy condition, that is, the image area of the image to be processed in which cross-striped jagged lines are displayed. The electronic device can process the areas of the image to be processed that satisfy the jaggy condition, reducing the visible cross-striped jagged lines and thereby preserving the definition of the image.
The electronic device includes four registers storing four preset parameters: an algorithm parameter En, range parameters N and Ver, and a threshold parameter th2. The algorithm parameter En selects the type of algorithm used to detect a target area satisfying the jaggy condition in the image to be processed; different algorithms have different emphases, and the electronic device may determine the algorithm parameter according to the image to be processed and thereby determine the detection algorithm. The range parameter N (first preset parameter) determines the lateral processing range, i.e., how far the to-be-processed area extends left and right. The range parameter Ver (second preset parameter) determines the longitudinal processing range, i.e., how far it extends up and down, for example to the pixel points adjacent above and below a pixel point of the target area. The threshold parameter th2 is the value against which pixel-value differences are compared when determining the target area satisfying the jaggy condition; it is effectively the critical value for judging a cross-striped jagged line. It should be noted that the four preset parameters stored in the four registers are parameters automatically acquired by the electronic device according to the de-interlaced image to be processed, and are known parameters.
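As a rough sketch, the four register parameters described above could be grouped as follows; the structure and field names are illustrative assumptions for readability, not the patent's actual register layout:

```python
from dataclasses import dataclass


@dataclass
class JaggyParams:
    """Illustrative container for the four preset register parameters."""
    en: int   # algorithm parameter En: which detection variant to apply
    n: int    # range parameter N: lateral (left-right) extension
    ver: int  # range parameter Ver: longitudinal (up-down) extension flag
    th2: int  # threshold parameter th2: critical value for jagged lines


params = JaggyParams(en=1, n=4, ver=0, th2=20)
print(params)
```

In a hardware implementation these would be read from the registers rather than constructed in software; the dataclass merely fixes names for the quantities used in the following steps.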
The electronic device may also receive an external input signal (motion) indicating a mode of the electronic device, which may be, for example, high motion, medium motion or low motion. These modes each indicate a mode of the image to be processed; different modes correspond to different values of the parameter i, so the determined to-be-processed areas may differ. It should be noted that the external input signal (motion) is either an externally input signal received by the electronic device that merely indicates a mode (the mode corresponding to an emphasis of the image processing), or a mode detected by the electronic device from the image to be processed; it may therefore be used in determining whether a target area satisfying the jaggy condition is included or in determining the to-be-processed area.
In a possible implementation, when the algorithm parameter En=1, the electronic device determines the target area satisfying the jaggy condition in the image to be processed as follows. The image to be processed includes a first pixel point, a second pixel point, a third pixel point and a fourth pixel point, whose pixel values are the first pixel value, the second pixel value, the third pixel value and the fourth pixel value respectively. When the electronic device determines that the difference between the first pixel value and the second pixel value, the difference between the second pixel value and the third pixel value, and the difference between the third pixel value and the fourth pixel value are all larger than the target threshold, the region formed by the first, second, third and fourth pixel points is determined as the target area. Specifically, the first, second, third and fourth pixel points are adjacent pixel points in the same column, and these four pixel points form the window used to detect whether the jaggy condition is satisfied in the image to be processed.
When the difference between the first pixel value and the second pixel value, the difference between the second pixel value and the third pixel value, and the difference between the third pixel value and the fourth pixel value are all larger than the target threshold, the area formed by the four pixel points can be determined as the target area, i.e., the jaggy condition is satisfied. Referring to fig. 4, fig. 4 is an image schematic diagram of determining a target area and a to-be-processed area in an image to be processed provided in the embodiment of the present application. As shown in fig. 4, the area for detecting whether the jaggy condition is satisfied may be the area comprising pixel point A, pixel point B, pixel point C and pixel point D; the same column further includes pixel point Y, pixel point Z and pixel point E. Taking pixel points A, B, C and D as the first, second, third and fourth pixel points, their pixel values, i.e., the first, second, third and fourth pixel values, are denoted A, B, C and D. Then, as shown in fig. 4, when En=1, in the case of ((A-B>th2)&(C-B>th2)&(C-D>th2)) | ((B-A>th2)&(B-C>th2)&(D-C>th2)) = 1, it is determined that pixel points A, B, C and D form a target area satisfying the jaggy condition, i.e., the image area has a cross-striped jagged line, as in the dark gray square area in fig. 4.
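A minimal sketch of this per-column check, under the assumption that the jaggy condition is the alternating bright/dark pattern described above (function and variable names are illustrative):

```python
def satisfies_jaggy(a, b, c, d, th2):
    """Check whether four vertically adjacent pixel values alternate
    bright/dark strongly enough to indicate a cross-striped jagged line."""
    # bright-dark-bright-dark down the column
    falling = (a - b > th2) and (c - b > th2) and (c - d > th2)
    # dark-bright-dark-bright down the column
    rising = (b - a > th2) and (b - c > th2) and (d - c > th2)
    return falling or rising


print(satisfies_jaggy(200, 40, 200, 40, th2=30))    # alternating column -> True
print(satisfies_jaggy(100, 100, 100, 100, th2=30))  # flat column -> False
```

All three adjacent differences must exceed th2 with alternating sign; a single strong edge (e.g., a genuine object boundary) does not trigger the condition, which is what keeps ordinary edges out of the target area.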
In another possible implementation, the target area satisfying the jaggy condition in the image to be processed may be determined from the first, second, third and fourth pixel values together with the external input signal (motion). Specifically, the electronic device may receive an external input signal indicating the first mode, and determine the area formed by the first, second, third and fourth pixel points as the target area when the difference between the first pixel value and the second pixel value, the difference between the second pixel value and the third pixel value, and the difference between the third pixel value and the fourth pixel value are all larger than the target threshold. The first mode may be the case where the external input signal (motion) is high motion or medium motion, as determined by the value of En. For example, when En=2, in addition to the three pixel-value differences being larger than the target threshold, an external input signal indicating the first mode, i.e., high motion or medium motion, is also required.
As shown in fig. 4, still taking pixel points A, B, C and D as the first, second, third and fourth pixel points, their pixel values, i.e., the first, second, third and fourth pixel values, are denoted A, B, C and D. When En=2, the area comprising pixel points A, B, C and D is determined to be a target area satisfying the jaggy condition only when, in addition to the condition ((A-B>th2)&(C-B>th2)&(C-D>th2)) | ((B-A>th2)&(B-C>th2)&(D-C>th2)) = 1 above, the external input signal indicates the first mode.
In still another possible implementation, when determining whether the target area satisfies the jaggy condition, if the difference between the first pixel value and the second pixel value, the difference between the second pixel value and the third pixel value, and the difference between the third pixel value and the fourth pixel value are all larger than the target threshold, the electronic device may further check the mode indicated by the external input signal (motion); if the external input signal (motion) indicates high motion, the area formed by the first, second, third and fourth pixel points is determined as the target area. As shown in fig. 4, still taking pixel points A, B, C and D as the first, second, third and fourth pixel points, their pixel values are denoted A, B, C and D. When En=3, the area detected for the jaggy condition must satisfy, in addition to the condition ((A-B>th2)&(C-B>th2)&(C-D>th2)) | ((B-A>th2)&(B-C>th2)&(D-C>th2)) = 1 above, the requirement that the external input signal (motion) indicates high motion; only then is the area formed by pixel points A, B, C and D determined as the target area.
It should be noted that the first pixel point, the second pixel point, the third pixel point, and the fourth pixel point may be four adjacent pixel points in the same column in the image to be processed. As shown in fig. 4, the pixel a, the pixel B, the pixel C, and the pixel D are four adjacent pixels in the same column.
In yet another possible implementation, when determining the target area satisfying the jaggy condition in the image to be processed, the target area may be determined not only from the pixel values of the first, second, third and fourth pixel points but also from the pixel values of a fifth, sixth, seventh and eighth pixel point. Specifically, the image to be processed further includes a fifth pixel point, a sixth pixel point, a seventh pixel point and an eighth pixel point, whose pixel values are the fifth pixel value, the sixth pixel value, the seventh pixel value and the eighth pixel value respectively. When the difference between the first pixel value and the second pixel value, the difference between the second pixel value and the third pixel value, and the difference between the third pixel value and the fourth pixel value are all larger than the target threshold, and the difference between the fifth pixel value and the sixth pixel value, the difference between the sixth pixel value and the seventh pixel value, and the difference between the seventh pixel value and the eighth pixel value are all larger than the target threshold, the region formed by the first to eighth pixel points is determined as the target region.
When En=4, the electronic device may use the first to eighth pixel points as the window for determining the target area of the jaggy condition, and decide whether they form a target area satisfying the jaggy condition according to the judgment results for the two groups of pixel points, i.e., the first to fourth pixel points and the fifth to eighth pixel points. The fifth, sixth, seventh and eighth pixel points are four adjacent pixel points in the same column of the image to be processed, with the fifth adjacent to the first, the sixth adjacent to the second, the seventh adjacent to the third, and the eighth adjacent to the fourth. The column containing the fifth to eighth pixel points may be adjacent to the column containing the first to fourth pixel points on the left, on the right, or on both the left and right.
Referring to fig. 5, fig. 5 is another image schematic diagram of determining a target area and a to-be-processed area in an image to be processed provided in the embodiment of the present application. As shown in fig. 5, the area for detecting whether the jaggy condition is satisfied may be the area comprising pixel points A, B, C and D, the area comprising pixel points An, Bn, Cn and Dn, and/or the area comprising pixel points Ap, Bp, Cp and Dp. Taking pixel points A, B, C and D as the first, second, third and fourth pixel points, and pixel points An, Bn, Cn and Dn and/or pixel points Ap, Bp, Cp and Dp as the fifth, sixth, seventh and eighth pixel points, the pixel values of pixel points A, B, C and D are denoted A, B, C and D, the pixel values of pixel points An, Bn, Cn and Dn are denoted An, Bn, Cn and Dn, and the pixel values of pixel points Ap, Bp, Cp and Dp are denoted Ap, Bp, Cp and Dp. Then, as shown in fig. 5, when En=4, in the case where ((A-B>th2)&(C-B>th2)&(C-D>th2)) | ((B-A>th2)&(B-C>th2)&(D-C>th2)) = 1, ((Ap-Bp>th2)&(Cp-Bp>th2)&(Cp-Dp>th2)) | ((Bp-Ap>th2)&(Bp-Cp>th2)&(Dp-Cp>th2)) = 1, and ((An-Bn>th2)&(Cn-Bn>th2)&(Cn-Dn>th2)) | ((Bn-An>th2)&(Bn-Cn>th2)&(Dn-Cn>th2)) = 1, the pixel points A, B, C, D, Ap, Bp, Cp, Dp, An, Bn, Cn and Dn are determined to form the target area satisfying the jaggy condition.
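Under the assumption that each of the three columns is tested with the same per-column alternating-difference condition, the En=4 window check can be sketched as follows (names are illustrative; the per-column test is repeated inline so the sketch is self-contained):

```python
def column_jaggy(col, th2):
    """Per-column jaggy test on four vertically adjacent pixel values."""
    a, b, c, d = col
    return ((a - b > th2 and c - b > th2 and c - d > th2) or
            (b - a > th2 and b - c > th2 and d - c > th2))


def jaggy_en4(center, left, right, th2):
    """En=4: the 4x3 window is a target area only if the centre column
    and both adjacent columns each satisfy the jaggy condition."""
    return (column_jaggy(center, th2) and
            column_jaggy(left, th2) and
            column_jaggy(right, th2))


comb = [200, 40, 200, 40]                         # alternating column
print(jaggy_en4(comb, comb, comb, th2=30))        # all three columns comb -> True
print(jaggy_en4(comb, [100] * 4, comb, th2=30))   # left column flat -> False
```

Requiring all three columns to alternate makes the detector stricter than the single-column En=1 variant, reducing false positives from isolated vertical detail.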
Optionally, referring to fig. 6, fig. 6 is a schematic diagram of another image for determining a target area and a to-be-processed area in an image to be processed provided in the embodiment of the present application. As shown in fig. 6, the area for detecting whether the jaggy condition is satisfied may be the area comprising pixel points A, B, C and D; the same column further includes pixel point Y, pixel point Z and pixel point E. Taking pixel points A, B, C and D as the first, second, third and fourth pixel points, their pixel values, i.e., the first, second, third and fourth pixel values, are denoted A, B, C and D. Then, as shown in fig. 6, when the difference between the first pixel value and the second pixel value, the difference between the second pixel value and the third pixel value, and the difference between the third pixel value and the fourth pixel value are all larger than the target threshold, and an external input signal indicating the first mode is received, i.e., the motion is high motion or medium motion, the region formed by the first, second, third and fourth pixel points is determined as the target region. That is, when En=5 and ((A-B>th2)&(C-B>th2)&(C-D>th2)) | ((B-A>th2)&(B-C>th2)&(D-C>th2)) = 1, if the external input signal (motion) is high motion or medium motion, pixel points A, B, C and D are determined to form a target area satisfying the jaggy condition, as in the dark gray square area of fig. 6.
Similarly, referring to fig. 7, fig. 7 is a schematic diagram of another image for determining a target area and a to-be-processed area in an image to be processed according to the embodiment of the present application. Taking pixel points A, B, C and D as the first, second, third and fourth pixel points, their pixel values, i.e., the first, second, third and fourth pixel values, are denoted A, B, C and D. When En=6, if ((A-B>th2)&(C-B>th2)&(C-D>th2)) | ((B-A>th2)&(B-C>th2)&(D-C>th2)) = 1, the region formed by pixel points A, B, C and D is determined as the target region.
It should be noted that the target areas determined when En=5 and En=6 seemingly do not differ from the target area determined when En=2; however, only the manner of determining whether a cross-striped jagged target area exists is the same. For En=5 and En=6 the other preset parameters are not exactly the same as for En=2, so the to-be-processed area determined from the target area also differs, and the image areas actually subjected to low-pass filtering under En=5 and En=6 are not the same as under En=2. Further, the electronic device may determine, according to the preset parameters, the to-be-processed area corresponding to the target area.
S302, determining a to-be-processed area corresponding to the target area in the to-be-processed image.
In this embodiment of the present application, the to-be-processed area is the area needing low-pass filtering. It can be understood that low-pass filtering the entire image to be processed would reduce the definition of the whole image; in this application, low-pass filtering is applied only to part of the image, namely the to-be-processed area determined from the target area, so definition is preserved while the image area containing cross-striped jagged lines is processed, improving the processing effect and the display effect.
In one possible implementation, when En=1, the preset parameters may take the values N=4, i=1 and Ver=0. As shown in the left side of fig. 4, when the target area satisfying the jaggy condition is determined to be the image area formed by pixel points A, B, C and D, the N/i pixel points adjacent to the left and right of pixel point B may be determined as the to-be-processed area (including pixel point B, i.e., the 2N/i+1 pixel points in the row of pixel point B), and the pixel points subjected to low-pass filtering are the light gray squares shown on the left side of fig. 4. When En=1, the preset parameters may also take the values N=4, i=1 and Ver=1. As shown in the right side of fig. 4, when the target area satisfying the jaggy condition is the image area formed by pixel points A, B, C and D, the 2N/i+1 pixel points in the row of pixel point B are determined as the first to-be-processed area, the 3 rows of pixel points adjacent above pixel point B and the 2 rows of pixel points adjacent below pixel point B are determined as the second to-be-processed area, and the first and second to-be-processed areas together are determined as the to-be-processed area corresponding to the target area.
In another possible implementation, when En=2, the preset parameters may take the values N=4, i=1 and Ver=0. As shown in the left side of fig. 4, the N/i pixel points adjacent to the left and right of pixel point B may be determined as the to-be-processed area (including pixel point B, i.e., the 2N/i+1 pixel points in the row of pixel point B), and the pixel points subjected to low-pass filtering are still the light gray squares shown on the left side of fig. 4. When En=2, the preset parameters may also take the values N=4, i=1 and Ver=1. Then, as shown in the right side of fig. 4, the 2N/i+1 pixel points in the row of pixel point B are determined as the first to-be-processed area, the 3 rows of pixel points adjacent above pixel point B and the 2 rows of pixel points adjacent below pixel point B are determined as the second to-be-processed area, and the first and second to-be-processed areas together are determined as the to-be-processed area corresponding to the target area, as shown by the light gray squares on the right side of fig. 4.
In yet another possible implementation, when En=3, the preset parameters may take the values N=4, i=1 and Ver=0. As shown in the left side of fig. 4, the electronic device may take the 2N/i+1 pixel points in the row of pixel point B (including pixel point B, i.e., including the target area) as the to-be-processed area, as shown by the light gray squares on the left side of fig. 4. When En=3, the preset parameters may also take the values N=4, i=1 and Ver=1; then the 2N/i+1 pixel points in the row of pixel point B form the first to-be-processed area, the 3 rows of pixel points adjacent above pixel point B and the 2 rows of pixel points adjacent below pixel point B are determined as the second to-be-processed area, and the first and second to-be-processed areas together are determined as the to-be-processed area corresponding to the target area, as shown by the light gray squares on the right side of fig. 4.
It should be noted that when En=1, En=2 and En=3, the manner of determining the target area satisfying the jaggy condition differs, but the electronic device determines the to-be-processed area from the first preset parameter (e.g., N), the second preset parameter (e.g., Ver), and the parameter (e.g., i) given by the correspondence between the input mode and the parameters, so the determined to-be-processed area is the same, as shown in fig. 4. It can be understood that when the preset parameters in the registers of the electronic device are the same, the determined target areas may correspond to the same to-be-processed area.
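For the En=1 to En=3 cases above, the extent of the to-be-processed area around pixel point B can be sketched as follows; the (row, column) indexing and function name are illustrative, while the 2N/i+1 horizontal extent and the 3-rows-above/2-rows-below convention for Ver=1 come from the description:

```python
def to_be_processed_area(row_b, col_b, n, i, ver):
    """Return the set of (row, col) positions to low-pass filter:
    N/i pixels left and right of B (2N/i + 1 in B's row) and, when
    Ver = 1, the 3 rows above and 2 rows below B as well."""
    half = n // i
    cols = range(col_b - half, col_b + half + 1)
    # rows B-3 .. B+2 when Ver = 1, otherwise only B's own row
    rows = range(row_b - 3, row_b + 3) if ver == 1 else [row_b]
    return {(r, c) for r in rows for c in cols}


area = to_be_processed_area(row_b=10, col_b=10, n=4, i=1, ver=0)
print(len(area))  # 2*4/1 + 1 = 9 pixels in B's row
```

With Ver=1 and the same N and i, the same call covers 6 rows of 9 pixels each, i.e., 54 positions, matching the light gray regions on the right side of fig. 4.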
In yet another possible implementation, when En=4, the preset parameters may take the values N=4, i=1 and Ver=0. As shown in the left side of fig. 5, the electronic device may take the 2N/i+1 pixel points in the rows of pixel point B, pixel point Bp and pixel point Bn (including pixel points B, Bp and Bn, i.e., including the target area) as the to-be-processed area, as shown by the light gray squares on the left side of fig. 5. When En=4, the preset parameters may also take the values N=4, i=1 and Ver=1; then the 2N/i+1 pixel points in the rows of pixel points B, Bp and Bn form the first to-be-processed area, the 3 rows of pixel points adjacent above and the 2 rows of pixel points adjacent below pixel points B, Bp and Bn are determined as the second to-be-processed area, and the first and second to-be-processed areas together are determined as the to-be-processed area corresponding to the target area, as shown by the light gray squares on the right side of fig. 5.
In yet another possible implementation, when En=5, the preset parameters may take the values N=4, i=2 and Ver=0. As shown in the left side of fig. 6, the electronic device may take the 2N/i+3 pixel points in the row of pixel point B (including pixel point B, i.e., including the target area) as the to-be-processed area, i.e., in fig. 6, the N/i+1 pixel points to the left of pixel point B, the N/i+1 pixel points to the right of pixel point B, and pixel point B itself. When En=5, the preset parameters may also take the values N=4, i=2 and Ver=1; then the 2N/i+3 pixel points in the row of pixel point B form the first to-be-processed area, the 3 rows of pixel points adjacent above pixel point B and the 2 rows of pixel points adjacent below pixel point B are determined as the second to-be-processed area, and the first and second to-be-processed areas together are determined as the to-be-processed area corresponding to the target area, as shown by the light gray squares on the right side of fig. 6.
In yet another possible implementation, when En=6, the preset parameters may take the values N=4, i=4 and Ver=0. As shown in the left side of fig. 7, the electronic device may take the 2N/i+3 pixel points in the row of pixel point B (including pixel point B, i.e., including the target area) as the to-be-processed area, as shown by the light gray squares on the left side of fig. 7. When En=6, the preset parameters may also take the values N=4, i=4 and Ver=1; then the 2N/i+3 pixel points in the row of pixel point B form the first to-be-processed area, the 3 rows of pixel points adjacent above pixel point B and the 2 rows of pixel points adjacent below pixel point B are determined as the second to-be-processed area, and the first and second to-be-processed areas together are determined as the to-be-processed area corresponding to the target area, as shown by the light gray squares on the right side of fig. 7.
After determining the to-be-processed area corresponding to the target area in the to-be-processed image, the electronic device may perform low-pass filtering processing on the to-be-processed area to obtain a processed area corresponding to the to-be-processed area.
S303, performing low-pass filtering processing on the to-be-processed area to obtain a processed area corresponding to the to-be-processed area.
In the embodiment of the present application, low-pass filtering removes high-frequency components from an image and can be used to smooth an image area, for example the image area containing the cross-striped jagged lines described above. The processed area corresponding to the to-be-processed area is the area obtained by low-pass filtering the to-be-processed area of the image to be processed: if the pixel value of each pixel point in the to-be-processed area is changed, the area formed by the pixel points with updated pixel values is the processed area corresponding to the to-be-processed area.
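The description does not fix a particular filter kernel at this point; as an illustrative stand-in, a simple vertical [1, 2, 1]/4 smoothing shows how low-pass filtering suppresses the alternating comb pattern in a column of the to-be-processed area (this specific kernel is an assumption, not the patent's filter):

```python
def smooth_column(values):
    """Apply a 1-2-1 vertical low-pass kernel to a column of pixel
    values, leaving the two border samples unchanged."""
    out = list(values)
    for k in range(1, len(values) - 1):
        out[k] = (values[k - 1] + 2 * values[k] + values[k + 1]) // 4
    return out


# Alternating bright/dark comb pattern is flattened toward its mean.
print(smooth_column([200, 40, 200, 40, 200]))  # -> [200, 120, 120, 120, 200]
```

The interior samples of the comb column collapse to a common value, which is exactly the removal of the high-frequency (line-alternating) component that the jaggy artifact consists of.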
In one possible implementation manner, the electronic device may process the pixel point corresponding to the second pixel value according to the first pixel value, the second pixel value, the third pixel value and a pixel parameter to obtain the processed area, in which the second pixel value of the second pixel point is changed. In this embodiment of the present application, the pixel parameter is a calculated parameter of the low-pass filtering process, determined according to the first pixel value, the second pixel value, the third pixel value and the fourth pixel value. It can be understood that the parameter of the low-pass filter provided in this embodiment of the present application is determined according to the pixel values of adjacent pixel points in the same column.
Referring to fig. 8, fig. 8 is a schematic diagram of a low-pass filtered image provided in the embodiment of the present application. As shown in fig. 8, x, y, z, a, b, c, d, e and f are all pixel points, which may be pixel points in the same column. Here xy denotes the absolute value of the difference between the pixel value of pixel point x and the pixel value of pixel point y, xz denotes the absolute value of the difference between the pixel value of pixel point x and the pixel value of pixel point z, and so on; each two-letter combination in fig. 8 is the absolute value of the difference of the pixel values of the two pixel points, namely xy, yz, za, ab, bc, cd, de, ef, xz, ya, zb, ac, bd, ce and df.
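The pairwise absolute differences of fig. 8 can be computed as in the following sketch; the function name and the list-based representation are illustrative only.

```python
# Illustrative computation of the pairwise absolute differences of fig. 8:
# adjacent pairs (xy, yz, za, ...) and one-apart pairs (xz, ya, zb, ...),
# for the nine same-column pixel points x..f. Values are arbitrary examples.
def column_diffs(vals):
    # vals: pixel values of x, y, z, a, b, c, d, e, f in column order
    adjacent = [abs(vals[k] - vals[k + 1]) for k in range(len(vals) - 1)]
    one_apart = [abs(vals[k] - vals[k + 2]) for k in range(len(vals) - 2)]
    return adjacent, one_apart
```

An alternating comb pattern such as 0,10,0,10,... gives large adjacent differences and zero one-apart differences, which is exactly the signature the later alpha computation responds to.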
Further, referring to fig. 9, fig. 9 is a schematic diagram of another low-pass filtering image provided in the embodiment of the present application, where fig. 9 shows the flow of determining the pixel parameters of the low-pass filtering, for example alpha1 to alpha6. For example, the value of alpha1 may be comb_mult_p1+512 = (comb_diff_1-Threshold)×Slope+512 = (xy+yz+za+xz+ya-Threshold)×Slope+512.
The value of alpha2 may be comb_mult_p2+512 = (comb_diff_2-Threshold)×Slope+512 = (yz+za+ab+ya+zb-Threshold)×Slope+512.
The value of alpha3 may be comb_mult_p3+512 = (comb_diff_3-Threshold)×Slope+512 = (za+ab+bc+zb+ac-Threshold)×Slope+512.
The value of alpha4 may be comb_mult_p4+512 = (comb_diff_4-Threshold)×Slope+512 = (ab+bc+cd+ac+bd-Threshold)×Slope+512.
The value of alpha5 may be comb_mult_p5+512 = (comb_diff_5-Threshold)×Slope+512 = (bc+cd+de+bd+ce-Threshold)×Slope+512.
The value of alpha6 may be comb_mult_p6+512 = (comb_diff_6-Threshold)×Slope+512 = (cd+de+ef+ce+df-Threshold)×Slope+512.
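The six alpha values above follow one sliding-window pattern, which can be sketched as follows. Threshold and Slope are taken as plain function arguments here rather than register values, and any clamping of the result is not specified in the text, so none is applied.

```python
# Sketch of the alpha computation: comb_diff_k sums three adjacent absolute
# differences and two one-apart absolute differences in a sliding window,
# and alpha_k = (comb_diff_k - Threshold) * Slope + 512.
def alphas(vals, threshold, slope):
    # vals: pixel values of x, y, z, a, b, c, d, e, f in column order
    adj = [abs(vals[k] - vals[k + 1]) for k in range(len(vals) - 1)]  # xy, yz, ...
    two = [abs(vals[k] - vals[k + 2]) for k in range(len(vals) - 2)]  # xz, ya, ...
    out = []
    for k in range(6):  # alpha1 .. alpha6
        # e.g. k=0: xy + yz + za + xz + ya  (matches comb_diff_1 above)
        comb_diff = adj[k] + adj[k + 1] + adj[k + 2] + two[k] + two[k + 1]
        out.append((comb_diff - threshold) * slope + 512)
    return out
```

For a perfectly alternating column, all six windows see the same comb_diff, so all six alpha values coincide.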
Wherein, Threshold and Slope are both fixed register values. The results of passing the pixel points y, z, a, b, c, d and e through the low-pass filter are given by formulas 1 to 7, respectively (the formula images are not reproduced in this text).
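The images of formulas 1 to 7 are not reproduced in this text, so the exact filter equation cannot be recovered here. As an illustration only, one common shape for an alpha-weighted vertical low-pass is a blend of each pixel with the mean of its two vertical neighbours; the scale of 1024 and the clamping below are assumptions, not the patent's actual formula.

```python
# Illustrative alpha-weighted vertical blend (NOT the patent's exact formula).
# up/down are the vertical neighbours of the current pixel; alpha controls how
# strongly the pixel is pulled toward the neighbour mean.
def lowpass_pixel(up, cur, down, alpha, scale=1024):
    a = max(0, min(scale, alpha))  # clamp the blend weight to [0, scale]
    return ((scale - a) * cur + a * (up + down) // 2) // scale
```

With alpha=0 the pixel is unchanged; with alpha=scale it is replaced by the mean of its neighbours, flattening a comb pattern completely.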
Alternatively, for pixel point y and pixel point e, the pixel parameter alpha may be a fixed value; for example, the pixel parameter corresponding to pixel point y is alpha1, and the pixel parameter corresponding to pixel point e is alpha6. For the interior pixel points, the pixel parameter can be determined flexibly. For example, in the column of pixel points x, y, z, a, b, c, d, e, f in fig. 9, taking pixel point z and pixel point a as examples: the electronic device may determine whether pixel point z satisfies (y-z>th2)&(a-b>th2)||(z-y>th2)&(z-a>th2)&(b-a>th2)=1; if so, the pixel parameter corresponding to pixel point z is alpha2, and if not, the pixel parameter corresponding to pixel point z is alpha1. Similarly, for pixel point a, the electronic device may determine whether pixel point a satisfies (z-a>th2)&(b-c>th2)||(a-z>th2)&(a-b>th2)&(c-b>th2)=1; if so, the pixel parameter corresponding to pixel point a is alpha3, and if not, the pixel parameter corresponding to pixel point a is alpha2. It will be appreciated that, for example, pixel point a may determine the value of its pixel parameter alpha according to whether the pixel values of the adjacent pixel points z, b and c conform to a "high-low" or "low-high" alternating trend, so as to obtain the result after the low-pass filter processing.
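The condition above is garbled in this text; the final sentence suggests it tests whether the pixel sits in an alternating "high-low" / "low-high" comb. A hedged reading, with th2 and the exact inequality grouping as assumptions, is:

```python
# Hedged sketch of the alpha selection for an interior pixel: choose one alpha
# when the pixel is a local dip or peak relative to its vertical neighbours
# (a comb pattern), and another alpha otherwise. th2 and the grouping of the
# comparisons are assumptions based on the garbled condition in the text.
def pick_alpha(prev, cur, nxt, alpha_if_comb, alpha_otherwise, th2):
    is_dip = (prev - cur > th2) and (nxt - cur > th2)   # cur is a local dip
    is_peak = (cur - prev > th2) and (cur - nxt > th2)  # cur is a local peak
    return alpha_if_comb if (is_dip or is_peak) else alpha_otherwise
```

For pixel point z this would choose between alpha2 and alpha1; for pixel point a, between alpha3 and alpha2.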
S304, determining the processed image corresponding to the image to be processed according to the processed area.
In this embodiment of the present application, the processed image is the image obtained after processing the image to be processed, that is, the image obtained after updating the pixel values of some pixel points. In other words, in the image to be processed, after the pixel values of the pixel points of part of the image area are updated, the resulting image is the processed image corresponding to the image to be processed. The pixel points of the partial image area may be the pixel points of the to-be-processed area, whose original pixel values are updated to the pixel values after the low-pass filtering processing.
In one possible implementation manner, the electronic device may process the area to be processed to obtain the processed area, where the pixel values of the pixel points in the processed area are the pixel values obtained by low-pass filtering the pixel values of the pixel points in the area to be processed. The pixel values of the unprocessed pixel points together with the updated pixel values are then determined as the processed image corresponding to the image to be processed. That is, in the processed image, the pixel values of the pixel points displaying the cross-stripe saw-tooth lines are updated, so that the influence of the cross-stripe saw-tooth lines on the image display can be reduced while the definition of the image is preserved.
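The compositing step described here (unprocessed pixels keep their original values, filtered pixels take their updated values) can be sketched as follows; the dict-based image representation is a simplification for illustration.

```python
# Illustrative compositing of the processed image: pixels inside the processed
# region take their filtered values, all other pixels keep their original
# values. `image` and `processed` map (row, col) -> pixel value in this sketch.
def composite(image, processed):
    out = dict(image)
    out.update(processed)  # overwrite only the pixels that were filtered
    return out
```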
Referring to fig. 10, fig. 10 is a schematic view illustrating the processing effect of an image processing method according to an embodiment of the present application. As shown in fig. 10, the left side of fig. 10 is the image to be processed and the right side of fig. 10 is the processed image. The left side of fig. 10 contains a target area satisfying the jaggy condition; on the right side of fig. 10, after the area to be processed corresponding to the target area has been subjected to the low-pass filtering processing, it can be seen that the image area satisfying the jaggy condition is smoothed while the sharpness of the other areas is unchanged, so the overall sharpness is preserved. For example, the sharpness of the edge area of the letter "S" in fig. 10 is unchanged in comparison with the white circle portion of the letter "S" shown on the right side of fig. 2.
In the embodiment of the application, the target area meeting the saw-tooth condition is determined in the image to be processed, so that the area to be processed corresponding to the target area can be determined; after the area to be processed is determined, it can be subjected to low-pass filtering processing to obtain the processed area corresponding to the area to be processed, and finally the processed image corresponding to the image to be processed is determined according to the processed area. Therefore, after an image area meeting the saw-tooth condition is detected in the image to be processed, only that image area is processed, which ensures that the images in the other areas remain clear while the jagged image area is smoothed, thereby improving the overall processing effect of the image to be processed.
Referring to fig. 11, fig. 11 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application. As shown in fig. 11, the image processing apparatus 1100 includes a determination unit 1101 and a processing unit 1102. The image processing apparatus 1100 may perform the relevant steps of the terminal device in the foregoing method embodiment. Wherein:
a determining unit 1101 for determining, in an image to be processed, a target area satisfying a jaggy condition;
the determining unit 1101 is further configured to determine, in the image to be processed, a region to be processed corresponding to the target region;
a processing unit 1102, configured to perform low-pass filtering processing on the to-be-processed area to obtain a processed area corresponding to the to-be-processed area;
the determining unit 1101 is further configured to determine a processed image corresponding to the image to be processed according to the processed area.
In one possible implementation manner, the image to be processed includes a first pixel point, a second pixel point, a third pixel point and a fourth pixel point, where the first pixel value is the pixel value corresponding to the first pixel point, the second pixel value is the pixel value corresponding to the second pixel point, the third pixel value is the pixel value corresponding to the third pixel point, and the fourth pixel value is the pixel value corresponding to the fourth pixel point; the determining unit 1101 is configured to determine, in the image to be processed, a target area that satisfies the jaggy condition, and is specifically configured to:
When the difference between the first pixel value and the second pixel value, the difference between the second pixel value and the third pixel value, and the difference between the third pixel value and the fourth pixel value are all greater than a target threshold value, determining that a region constituted by the first pixel point, the second pixel point, the third pixel point, and the fourth pixel point is the target region.
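The jaggedness test performed by the determining unit can be sketched as follows. The use of absolute differences is an assumption, since the text says only "difference"; the function name is illustrative.

```python
# Sketch of the saw-tooth (jaggy) condition: four vertically adjacent pixel
# values form a target area when each successive absolute difference exceeds
# the target threshold.
def is_target_area(p1, p2, p3, p4, threshold):
    return (abs(p1 - p2) > threshold and
            abs(p2 - p3) > threshold and
            abs(p3 - p4) > threshold)
```

A smoothly varying column (for example 0, 10, 20, 30 with a threshold of 50) does not trigger the condition, whereas an alternating column such as 0, 100, 0, 100 does.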
In a possible implementation manner, the determining unit 1101 is configured to determine, in an image to be processed, a target area that meets a jaggy condition, and is specifically configured to:
and when an external input signal indicating a first mode is received, and the difference between the first pixel value and the second pixel value, the difference between the second pixel value and the third pixel value, and the difference between the third pixel value and the fourth pixel value are all greater than the target threshold value, determining that an area formed by the first pixel point, the second pixel point, the third pixel point, and the fourth pixel point is the target area.
In one possible implementation manner, the first pixel point, the second pixel point, the third pixel point, and the fourth pixel point are four adjacent pixel points in the same column in the image to be processed.
In one possible implementation manner, the image to be processed further includes a fifth pixel point, a sixth pixel point, a seventh pixel point and an eighth pixel point, where the fifth pixel value is the pixel value corresponding to the fifth pixel point, the sixth pixel value is the pixel value corresponding to the sixth pixel point, the seventh pixel value is the pixel value corresponding to the seventh pixel point, and the eighth pixel value is the pixel value corresponding to the eighth pixel point; the determining unit 1101 is further configured to:
determine, when the difference between the first pixel value and the second pixel value, the difference between the second pixel value and the third pixel value, and the difference between the third pixel value and the fourth pixel value are all greater than the target threshold value, and the difference between the fifth pixel value and the sixth pixel value, the difference between the sixth pixel value and the seventh pixel value, and the difference between the seventh pixel value and the eighth pixel value are all greater than the target threshold value, that the region formed by the fifth pixel point, the sixth pixel point, the seventh pixel point and the eighth pixel point is the target region;
wherein the fifth pixel point, the sixth pixel point, the seventh pixel point and the eighth pixel point are four adjacent pixel points in the same column in the image to be processed, the fifth pixel point is adjacent to the first pixel point, the sixth pixel point is adjacent to the second pixel point, the seventh pixel point is adjacent to the third pixel point, and the eighth pixel point is adjacent to the fourth pixel point.
In a possible implementation manner, the determining unit 1101 is configured to determine, in the image to be processed, a region to be processed corresponding to the target region, specifically configured to:
acquiring a first preset parameter and a second preset parameter;
determining parameters corresponding to the first mode according to the corresponding relation between the input mode and the parameters;
determining a first area to be processed according to the first preset parameter and the parameter corresponding to the first mode;
acquiring action range information corresponding to the second preset parameters, and determining a second area to be processed according to the action range information;
and determining the first to-be-processed area and the second to-be-processed area as to-be-processed areas corresponding to the target area, wherein the to-be-processed areas comprise the target area.
In a possible implementation manner, the processing unit 1102 is configured to perform low-pass filtering processing on the to-be-processed area to obtain a processed area corresponding to the to-be-processed area, where the processing unit is specifically configured to:
processing a pixel point corresponding to the second pixel value according to the first pixel value, the second pixel value, the third pixel value and the pixel parameter;
wherein the pixel parameter is determined based on the first pixel value, the second pixel value, the third pixel value, and the fourth pixel value.
Specifically, the operations performed by the determining unit 1101 and the processing unit 1102 may refer to the description of the electronic device in the embodiments corresponding to fig. 3 to fig. 10 above.
The image processing apparatus 1100 may also be used to implement other functions of the electronic device in the corresponding embodiments of fig. 3 to 10, which are not described herein. Based on the same inventive concept, the principle and beneficial effects of the image processing apparatus 1100 provided in the embodiments of the present application for solving the problem are similar to those of the electronic device in the embodiments of the method of the present application, and may refer to the principle and beneficial effects of implementation of the method, which are not described herein for brevity.
According to the embodiment of the present application, each unit in the image processing apparatus shown in fig. 11 may be configured by combining each unit into one or several other units, or some unit(s) thereof may be configured by splitting into a plurality of units having smaller functions, which may achieve the same operation without affecting the implementation of the technical effects of the embodiments of the present application. The above units are divided based on logic functions, and in practical applications, the functions of one unit may be implemented by a plurality of units, or the functions of a plurality of units may be implemented by one unit. In other embodiments of the present application, the image processing apparatus 1100 may also include other units, and in practical applications, these functions may also be implemented with assistance of other units, and may be implemented by cooperation of multiple units.
The image processing apparatus may be, for example: a chip, or a chip module. With respect to each apparatus and each module included in the product described in the above embodiments, it may be a software module, or may be a hardware module, or may be a software module partially, or may be a hardware module partially. For example, for each device or product applied to or integrated in a chip, each module included in the device or product may be implemented in hardware such as a circuit, or at least some modules may be implemented in software program, where the software program runs on a processor integrated in the chip, and the remaining (if any) some modules may be implemented in hardware such as a circuit; for each device and product applied to or integrated in the chip module, each module contained in the device and product can be realized in a hardware mode such as a circuit, different modules can be located in the same component (such as a chip and a circuit module) of the chip module or in different components, or at least part of the modules can be realized in a software program, the software program runs in a processor integrated in the chip module, and the rest (if any) of the modules can be realized in a hardware mode such as a circuit; for each device and product applied to or integrated in the terminal, each module included in the device and product may be implemented by hardware such as a circuit, and different modules may be located in the same component (for example, a chip, a circuit module, etc.) or different components in the electronic device, or at least part of the modules may be implemented by software programs running on a processor integrated in the terminal, and the rest (if any) of the modules may be implemented by hardware such as a circuit.
In the embodiment of the application, the target area meeting the saw-tooth condition is determined in the image to be processed, so that the area to be processed corresponding to the target area can be determined; after the area to be processed is determined, it can be subjected to low-pass filtering processing to obtain the processed area corresponding to the area to be processed, and finally the processed image corresponding to the image to be processed is determined according to the processed area. Therefore, after an image area meeting the saw-tooth condition is detected in the image to be processed, only that image area is processed, which ensures that the images in the other areas remain clear while the jagged image area is smoothed, thereby improving the overall processing effect of the image to be processed.
Referring to fig. 12, fig. 12 is another image processing apparatus according to an embodiment of the present application, which can be used to implement the functions of the image processing apparatus in the above-described method embodiments. The image processing apparatus 1200 may include a processor 1201 and a transceiver 1202. Optionally, the image processing apparatus 1200 may further include a memory 1203. The processor 1201, the transceiver 1202 and the memory 1203 may be connected by a bus 1204 or in other ways; the bus is shown as a bold line in fig. 12, and the manner in which the other components are connected is merely illustrative and not limiting. The bus may be divided into an address bus, a data bus, a control bus, and the like. For ease of illustration, only one bold line is shown in fig. 12, but this does not mean that there is only one bus or one type of bus.
The coupling in the embodiments of the present application is an indirect coupling or communication connection between devices, units, or modules, which may be in electrical, mechanical, or other forms for information interaction between the devices, units, or modules. The specific connection medium between the processor 1201 and the memory 1203 is not limited in the embodiments of the present application.
The memory 1203 may include read only memory and random access memory and provide instructions and data to the processor 1201. A portion of memory 1203 may also include nonvolatile random access memory.
The processor 1201 may be a central processing unit (Central Processing Unit, CPU), and the processor 1201 may also be other general purpose processors, digital signal processors (Digital Signal Processor, DSP), application specific integrated circuits (Application Specific Integrated Circuit, ASIC), field programmable gate arrays (Field-Programmable Gate Array, FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, or the like. A general purpose processor may be a microprocessor, but in the alternative, the processor 1201 may be any conventional processor or the like. Wherein: the memory 1203 is used for storing program instructions.
A processor 1201 for invoking program instructions stored in the memory 1203 for performing the steps of:
determining a target area meeting a saw-tooth condition in an image to be processed;
determining a to-be-processed area corresponding to the target area in the to-be-processed image;
performing low-pass filtering processing on the region to be processed to obtain a processed region corresponding to the region to be processed;
and determining the processed image corresponding to the image to be processed according to the processed area.
In one possible implementation manner, the image to be processed includes a first pixel point, a second pixel point, a third pixel point and a fourth pixel point, where the first pixel value is the pixel value corresponding to the first pixel point, the second pixel value is the pixel value corresponding to the second pixel point, the third pixel value is the pixel value corresponding to the third pixel point, and the fourth pixel value is the pixel value corresponding to the fourth pixel point; the processor 1201 is configured to invoke program instructions stored in the memory 1203 to determine, in the image to be processed, a target area satisfying the jaggy condition, and specifically to perform the following steps:
when the difference between the first pixel value and the second pixel value, the difference between the second pixel value and the third pixel value, and the difference between the third pixel value and the fourth pixel value are all greater than a target threshold value, determining that a region constituted by the first pixel point, the second pixel point, the third pixel point, and the fourth pixel point is the target region.
In a possible implementation manner, the processor 1201 is configured to invoke the program instructions stored in the memory 1203 to determine, in the image to be processed, a target area that meets the jaggy condition, and specifically is configured to perform the following steps:
and when an external input signal indicating a first mode is received, and the difference between the first pixel value and the second pixel value, the difference between the second pixel value and the third pixel value, and the difference between the third pixel value and the fourth pixel value are all greater than the target threshold value, determining that an area formed by the first pixel point, the second pixel point, the third pixel point, and the fourth pixel point is the target area.
In one possible implementation manner, the first pixel point, the second pixel point, the third pixel point, and the fourth pixel point are four adjacent pixel points in the same column in the image to be processed.
In one possible implementation manner, the image to be processed further includes a fifth pixel point, a sixth pixel point, a seventh pixel point and an eighth pixel point, where the fifth pixel value is the pixel value corresponding to the fifth pixel point, the sixth pixel value is the pixel value corresponding to the sixth pixel point, the seventh pixel value is the pixel value corresponding to the seventh pixel point, and the eighth pixel value is the pixel value corresponding to the eighth pixel point; the processor 1201 is configured to invoke the program instructions stored in the memory 1203 and further configured to perform the following steps:
determining, when the difference between the first pixel value and the second pixel value, the difference between the second pixel value and the third pixel value, and the difference between the third pixel value and the fourth pixel value are all greater than the target threshold value, and the difference between the fifth pixel value and the sixth pixel value, the difference between the sixth pixel value and the seventh pixel value, and the difference between the seventh pixel value and the eighth pixel value are all greater than the target threshold value, that the region formed by the fifth pixel point, the sixth pixel point, the seventh pixel point and the eighth pixel point is the target region;
wherein the fifth pixel point, the sixth pixel point, the seventh pixel point and the eighth pixel point are four adjacent pixel points in the same column in the image to be processed, the fifth pixel point is adjacent to the first pixel point, the sixth pixel point is adjacent to the second pixel point, the seventh pixel point is adjacent to the third pixel point, and the eighth pixel point is adjacent to the fourth pixel point.
In a possible implementation manner, the processor 1201 is configured to invoke the program instructions stored in the memory 1203 to determine, in the image to be processed, a region to be processed corresponding to the target region, and specifically configured to perform the following steps:
acquiring a first preset parameter and a second preset parameter;
determining parameters corresponding to the first mode according to the corresponding relation between the input mode and the parameters;
determining a first area to be processed according to the first preset parameter and the parameter corresponding to the first mode;
acquiring action range information corresponding to the second preset parameters, and determining a second area to be processed according to the action range information;
and determining the first to-be-processed area and the second to-be-processed area as to-be-processed areas corresponding to the target area, wherein the to-be-processed areas comprise the target area.
In a possible implementation manner, the processor 1201 is configured to call a program instruction stored in the memory 1203 to perform a low-pass filtering process on the to-be-processed area to obtain a processed area corresponding to the to-be-processed area, and specifically is configured to perform the following steps:
processing a pixel point corresponding to the second pixel value according to the first pixel value, the second pixel value, the third pixel value and the pixel parameter;
Wherein the pixel parameter is determined based on the first pixel value, the second pixel value, the third pixel value, and the fourth pixel value.
In the present embodiment, the method provided above may be implemented by running a computer program (including program code) capable of executing the steps involved in the methods shown in fig. 3 to fig. 10 on a general-purpose computing device, such as a computer comprising processing elements and storage elements such as a central processing unit (CPU), a random access memory (RAM) and a read-only memory (ROM). The computer program may be recorded on, for example, a computer-readable recording medium, and loaded into and run on the above-described computing device through the computer-readable recording medium.
Based on the same inventive concept, the principle and beneficial effects of the image processing device for solving the problems provided in the embodiments of the present application are similar to those of the image processing device for solving the problems in the embodiments of the method of the present application, and may refer to the principle and beneficial effects of implementation of the method, and are not repeated herein for brevity.
The image processing apparatus (e.g., the image processing apparatus 1100, the image processing apparatus 1200) may be, for example: a chip, or a chip module.
The embodiment of the application also provides a chip, which can execute the relevant steps of the terminal equipment in the embodiment of the method.
The chip is used for executing the following steps: determining a target area meeting a saw-tooth condition in an image to be processed; determining a to-be-processed area corresponding to the target area in the to-be-processed image; performing low-pass filtering processing on the region to be processed to obtain a processed region corresponding to the region to be processed; and determining the processed image corresponding to the image to be processed according to the processed area.
In one possible implementation manner, the image to be processed includes a first pixel point, a second pixel point, a third pixel point and a fourth pixel point, where the first pixel value is the pixel value corresponding to the first pixel point, the second pixel value is the pixel value corresponding to the second pixel point, the third pixel value is the pixel value corresponding to the third pixel point, and the fourth pixel value is the pixel value corresponding to the fourth pixel point; the chip is used for determining a target area meeting the saw-tooth condition in the image to be processed, and is particularly used for executing the following steps: when the difference between the first pixel value and the second pixel value, the difference between the second pixel value and the third pixel value, and the difference between the third pixel value and the fourth pixel value are all greater than a target threshold value, determining that the region constituted by the first pixel point, the second pixel point, the third pixel point and the fourth pixel point is the target region.
In a possible implementation manner, the chip is configured to determine, in the image to be processed, a target area that meets the jaggy condition, and is specifically configured to perform the following steps: and when an external input signal indicating a first mode is received, and the difference between the first pixel value and the second pixel value, the difference between the second pixel value and the third pixel value, and the difference between the third pixel value and the fourth pixel value are all greater than the target threshold value, determining that an area formed by the first pixel point, the second pixel point, the third pixel point, and the fourth pixel point is the target area.
In one possible implementation manner, the first pixel point, the second pixel point, the third pixel point, and the fourth pixel point are four adjacent pixel points in the same column in the image to be processed.
In one possible implementation manner, the image to be processed further includes a fifth pixel point, a sixth pixel point, a seventh pixel point and an eighth pixel point, where the fifth pixel value is the pixel value corresponding to the fifth pixel point, the sixth pixel value is the pixel value corresponding to the sixth pixel point, the seventh pixel value is the pixel value corresponding to the seventh pixel point, and the eighth pixel value is the pixel value corresponding to the eighth pixel point; the chip is also used for executing the following steps: determining, when the difference between the first pixel value and the second pixel value, the difference between the second pixel value and the third pixel value, and the difference between the third pixel value and the fourth pixel value are all greater than the target threshold value, and the difference between the fifth pixel value and the sixth pixel value, the difference between the sixth pixel value and the seventh pixel value, and the difference between the seventh pixel value and the eighth pixel value are all greater than the target threshold value, that the region formed by the fifth pixel point, the sixth pixel point, the seventh pixel point and the eighth pixel point is the target region; the fifth pixel point, the sixth pixel point, the seventh pixel point and the eighth pixel point are four adjacent pixel points in the same column in the image to be processed, the fifth pixel point is adjacent to the first pixel point, the sixth pixel point is adjacent to the second pixel point, the seventh pixel point is adjacent to the third pixel point, and the eighth pixel point is adjacent to the fourth pixel point.
In a possible implementation manner, the chip is configured to determine, in the image to be processed, a region to be processed corresponding to the target region, and is specifically configured to perform the following steps: acquiring a first preset parameter and a second preset parameter; determining a parameter corresponding to the first mode according to the correspondence between input modes and parameters; determining a first area to be processed according to the first preset parameter and the parameter corresponding to the first mode; acquiring action range information corresponding to the second preset parameter, and determining a second area to be processed according to the action range information; and determining the first to-be-processed area and the second to-be-processed area as the to-be-processed area corresponding to the target area, wherein the to-be-processed area includes the target area.
In a possible implementation manner, the chip is configured to perform low-pass filtering processing on the to-be-processed area to obtain a processed area corresponding to the to-be-processed area, and specifically is configured to perform the following steps: processing a pixel point corresponding to the second pixel value according to the first pixel value, the second pixel value, the third pixel value and the pixel parameter; wherein the pixel parameter is determined based on the first pixel value, the second pixel value, the third pixel value, and the fourth pixel value.
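The patent states only that the pixel parameter is determined from all four pixel values, without giving its formula; the sketch below therefore assumes a normalized local-contrast weight and a 1-2-1 vertical kernel, purely as one plausible reading:

```python
def smooth_second_pixel(p1, p2, p3, p4, max_val=255):
    """Hedged sketch of the low-pass step: blend the second pixel
    toward a 1-2-1 vertical average of its neighbours.  The blend
    weight stands in for the 'pixel parameter' and is derived from
    all four pixel values, as the text requires; its exact form is
    an assumption."""
    contrast = (abs(p1 - p2) + abs(p2 - p3) + abs(p3 - p4)) / (3 * max_val)
    w = min(1.0, contrast)               # assumed pixel parameter in [0, 1]
    low_pass = (p1 + 2 * p2 + p3) / 4    # simple vertical low-pass kernel
    return (1 - w) * p2 + w * low_pass

print(smooth_second_pixel(200, 40, 210, 30))  # ≈ 95.0: the dark pixel is pulled up
```

The stronger the local contrast, the more the pixel is pulled toward the low-pass value, which matches the intent of smoothing only pronounced jaggies.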
In particular, the operations performed by the chip may be understood with reference to the description of the electronic device in the embodiments corresponding to fig. 3 to fig. 10.
In one possible implementation, the chip includes at least one processor, at least one first memory, and at least one second memory; the at least one first memory and the at least one processor are interconnected through lines, the first memory stores a computer program, the computer program includes program instructions, and the processor is configured to execute the program instructions to implement the image processing method in the embodiments of the present application; the at least one second memory and the at least one processor are interconnected through lines, and the second memory is configured to store the data to be stored in the foregoing method embodiments.
For each device and product applied to or integrated in the chip, each module contained in the device or product may be implemented by hardware such as a circuit, or at least some of the modules may be implemented by a software program running on a processor integrated inside the chip, with the remaining (if any) modules implemented by hardware such as a circuit.
Based on the same inventive concept, the principle by which the chip provided in the embodiments of the present application solves the problem, and its beneficial effects, are similar to those of the electronic device in the method embodiments of the present application; reference may be made to the principle and beneficial effects of implementing the method, which are not described herein again for brevity.
Referring to fig. 13, fig. 13 is a schematic structural diagram of a module device according to an embodiment of the present application. The module device 1300 may perform the steps related to the electronic device in the foregoing method embodiments. The module device 1300 includes: a communication module 1301, a power module 1302, a storage module 1303 and a chip module 1304.
The communication module 1301 is configured to perform internal communication of a module device, or perform communication between the module device and an external device; the power module 1302 is configured to provide power to the module device; the storage module 1303 is configured to store data and instructions; the chip module 1304 is configured to perform the following steps:
determining a target area meeting a saw-tooth condition in an image to be processed; determining a to-be-processed area corresponding to the target area in the to-be-processed image; performing low-pass filtering processing on the to-be-processed area to obtain a processed area corresponding to the to-be-processed area; and determining the processed image corresponding to the image to be processed according to the processed area.
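Taken together, the four steps above can be sketched end to end; the column-wise detection, the minimal 4-pixel processing region, and the integer 1-2-1 kernel are all illustrative assumptions rather than the patent's exact design:

```python
def remove_jaggies(image, threshold):
    """Detect vertical 4-pixel runs satisfying the jaggy condition,
    low-pass filter the two inner pixels of each run, and write the
    results into a copy, leaving all other areas untouched."""
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]      # processed image starts as a copy
    for x in range(w):
        for y in range(h - 3):
            col = [image[y + k][x] for k in range(4)]
            if all(abs(col[i] - col[i + 1]) > threshold for i in range(3)):
                # 1-2-1 vertical kernel over the inner pixels of the run
                out[y + 1][x] = (col[0] + 2 * col[1] + col[2]) // 4
                out[y + 2][x] = (col[1] + 2 * col[2] + col[3]) // 4
    return out

# A jaggy column is smoothed; the flat column is left alone.
img = [[200, 50], [40, 50], [210, 50], [30, 50]]
print(remove_jaggies(img, threshold=100))
# → [[200, 50], [122, 50], [122, 50], [30, 50]]
```

Because only detected runs are rewritten, sharpness elsewhere in the image is preserved, which is the benefit the embodiment claims.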
In one possible implementation manner, the image to be processed includes a first pixel point, a second pixel point, a third pixel point and a fourth pixel point, where the first pixel point is a pixel point corresponding to a first pixel value, the second pixel point is a pixel point corresponding to a second pixel value, the third pixel point is a pixel point corresponding to a third pixel value, and the fourth pixel point is a pixel point corresponding to a fourth pixel value; the chip module 1304, when determining, in the image to be processed, the target area satisfying the jaggy condition, is specifically configured to perform the following steps:
when the difference between the first pixel value and the second pixel value, the difference between the second pixel value and the third pixel value, and the difference between the third pixel value and the fourth pixel value are all greater than a target threshold value, determining that a region constituted by the first pixel point, the second pixel point, the third pixel point, and the fourth pixel point is the target region.
In one possible implementation manner, the chip module 1304 is configured to determine, in an image to be processed, a target area that meets a saw-tooth condition, and specifically is configured to perform the following steps:
and when an external input signal indicating a first mode is received, and the difference between the first pixel value and the second pixel value, the difference between the second pixel value and the third pixel value, and the difference between the third pixel value and the fourth pixel value are all greater than the target threshold value, determining that an area formed by the first pixel point, the second pixel point, the third pixel point, and the fourth pixel point is the target area.
In one possible implementation manner, the first pixel point, the second pixel point, the third pixel point, and the fourth pixel point are four adjacent pixel points in the same column in the image to be processed.
In one possible implementation manner, the image to be processed further includes a fifth pixel point, a sixth pixel point, a seventh pixel point and an eighth pixel point, where the fifth pixel point is a pixel point corresponding to a fifth pixel value, the sixth pixel point is a pixel point corresponding to a sixth pixel value, the seventh pixel point is a pixel point corresponding to a seventh pixel value, and the eighth pixel point is a pixel point corresponding to an eighth pixel value; the chip module 1304 is further configured to perform the following steps:
when the difference between the first pixel value and the second pixel value, the difference between the second pixel value and the third pixel value, and the difference between the third pixel value and the fourth pixel value are all greater than the target threshold, and the difference between the fifth pixel value and the sixth pixel value, the difference between the sixth pixel value and the seventh pixel value, and the difference between the seventh pixel value and the eighth pixel value are all greater than the target threshold, determining that a region formed by the fifth pixel point, the sixth pixel point, the seventh pixel point and the eighth pixel point is the target region;
The fifth pixel point, the sixth pixel point, the seventh pixel point and the eighth pixel point are four adjacent pixel points in the same column in the image to be processed; the fifth pixel point is adjacent to the first pixel point, the sixth pixel point is adjacent to the second pixel point, the seventh pixel point is adjacent to the third pixel point, and the eighth pixel point is adjacent to the fourth pixel point.
In a possible implementation manner, the chip module 1304 is configured to determine, in the image to be processed, a region to be processed corresponding to the target region, and specifically configured to perform the following steps:
acquiring a first preset parameter and a second preset parameter;
determining parameters corresponding to the first mode according to the corresponding relation between the input mode and the parameters;
determining a first area to be processed according to the first preset parameter and the parameter corresponding to the first mode;
acquiring action range information corresponding to the second preset parameter, and determining a second area to be processed according to the action range information;
and determining the first to-be-processed area and the second to-be-processed area as to-be-processed areas corresponding to the target area, wherein the to-be-processed areas comprise the target area.
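The steps above can be sketched as follows. The patent does not define the form of the preset parameters or of the action range information, so both are treated here, purely as an assumption, as row counts by which the target extent is grown; all names are illustrative:

```python
def region_to_process(target, first_param, second_param, mode_params, mode):
    """Hedged sketch of determining the region to be processed.
    'target' is the (start, end) row extent of the target area;
    'mode_params' maps input modes to their parameters, standing in
    for the correspondence between input modes and parameters."""
    grow_a = first_param + mode_params[mode]   # first preset + mode parameter
    first_region = (target[0] - grow_a, target[1] + grow_a)
    grow_b = second_param                      # assumed action range info
    second_region = (target[0] - grow_b, target[1] + grow_b)
    # The union of both regions is the area to be processed,
    # and it necessarily contains the target area.
    return (min(first_region[0], second_region[0], target[0]),
            max(first_region[1], second_region[1], target[1]))

print(region_to_process((10, 13), 1, 2, {"first_mode": 2}, "first_mode"))  # (7, 16)
```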
In a possible implementation manner, the chip module 1304 is configured to perform low-pass filtering processing on the to-be-processed area to obtain a processed area corresponding to the to-be-processed area, and specifically is configured to perform the following steps:
processing a pixel point corresponding to the second pixel value according to the first pixel value, the second pixel value, the third pixel value and the pixel parameter;
wherein the pixel parameter is determined based on the first pixel value, the second pixel value, the third pixel value, and the fourth pixel value.
For each device and product applied to or integrated in the chip module, each module included in the device and product may be implemented by hardware such as a circuit, and different modules may be located in the same component (e.g. a chip, a circuit module, etc.) of the chip module or different components, or at least some modules may be implemented by using a software program, where the software program runs on a processor integrated in the chip module, and the remaining (if any) modules may be implemented by hardware such as a circuit.
In the embodiments of the present application, a target area satisfying the jaggy condition is determined in the image to be processed, so that the area to be processed corresponding to the target area can be determined; after the area to be processed is determined, low-pass filtering is performed on it to obtain the processed area corresponding to the area to be processed, and finally the processed image corresponding to the image to be processed is determined according to the processed area. In this way, after an image area satisfying the jaggy condition is detected in the image to be processed, only that area is processed; the images in the other areas therefore remain sharp while the jaggy area is smoothed, which improves the overall processing effect on the image to be processed.
The present application also provides a computer readable storage medium, in which a computer program is stored, the computer program comprising one or more program instructions adapted to be loaded by an electronic device and to perform the method provided by the above-mentioned method embodiments.
The present application also provides a computer program product comprising a computer program or instructions which, when run on a computer, cause the computer to perform the method provided by the method embodiments described above.
The embodiment of the present application also provides an image processing system, which may include the electronic devices (the first electronic device 100 and the second electronic device 200) in fig. 1.
With respect to each of the apparatuses and each of the modules/units included in the products described in the above embodiments, they may be software modules/units, hardware modules/units, or partly software modules/units and partly hardware modules/units. For example, for each device or product applied to or integrated on a chip, each module/unit included in it may be implemented by hardware such as a circuit, or at least some of the modules/units may be implemented by a software program running on a processor integrated inside the chip, with the remaining (if any) modules/units implemented by hardware such as a circuit. For each device or product applied to or integrated in a chip module, each module/unit included in it may be implemented by hardware such as a circuit, and different modules/units may be located in the same component (such as a chip or a circuit module) of the chip module or in different components; or at least some of the modules/units may be implemented by a software program running on a processor integrated inside the chip module, with the remaining (if any) modules/units implemented by hardware such as a circuit. For each device or product applied to or integrated in a terminal, each module/unit included in it may be implemented by hardware such as a circuit, and different modules/units may be located in the same component (such as a chip or a circuit module) of the terminal or in different components; or at least some of the modules/units may be implemented by a software program running on a processor integrated inside the terminal, with the remaining (if any) modules/units implemented by hardware such as a circuit.
It should be noted that, for simplicity of description, the foregoing method embodiments are all expressed as a series of action combinations, but it should be understood by those skilled in the art that the present application is not limited by the described order of actions, as some steps may be performed in other orders or simultaneously according to the present application. Further, those skilled in the art will also appreciate that the embodiments described in the specification are all preferred embodiments, and that the actions and modules involved are not necessarily required by the present application.
The steps in the method of the embodiment of the application can be sequentially adjusted, combined and deleted according to actual needs.
The modules in the device of the embodiment of the application can be combined, divided and deleted according to actual needs.
Those of ordinary skill in the art will appreciate that all or a portion of the steps of the various methods of the above embodiments may be implemented by program instructions and associated hardware, and that the program instructions may be stored in a computer-readable storage medium, which may include: flash disk, ROM, RAM, magnetic or optical disk, etc.
The foregoing disclosure is merely a partial embodiment of the present application and is not intended to limit the scope of the claims of the present application.

Claims (12)

1. An image processing method, comprising:
determining a target area meeting a saw-tooth condition in an image to be processed;
determining a to-be-processed area corresponding to the target area in the to-be-processed image;
performing low-pass filtering processing on the region to be processed to obtain a processed region corresponding to the region to be processed;
and determining the processed image corresponding to the image to be processed according to the processed area.
2. The method of claim 1, wherein the image to be processed comprises a first pixel point, a second pixel point, a third pixel point and a fourth pixel point, the first pixel point being a pixel point corresponding to a first pixel value, the second pixel point being a pixel point corresponding to a second pixel value, the third pixel point being a pixel point corresponding to a third pixel value, and the fourth pixel point being a pixel point corresponding to a fourth pixel value; the determining the target area meeting the saw-tooth condition in the image to be processed comprises the following steps:
and determining that a region formed by the first pixel point, the second pixel point, the third pixel point and the fourth pixel point is the target region under the condition that the difference between the first pixel value and the second pixel value, the difference between the second pixel value and the third pixel value and the difference between the third pixel value and the fourth pixel value are all larger than a target threshold value.
3. The method according to claim 2, wherein determining a target area satisfying a jaggy condition in the image to be processed comprises:
and when an external input signal indicating a first mode is received, and the difference between a first pixel value and a second pixel value, the difference between the second pixel value and a third pixel value and the difference between the third pixel value and a fourth pixel value are all larger than the target threshold value, determining that an area formed by the first pixel point, the second pixel point, the third pixel point and the fourth pixel point is the target area.
4. A method according to claim 2 or 3, wherein the first, second, third and fourth pixels are four adjacent pixels of the same column in the image to be processed.
6. The method according to claim 2, wherein the image to be processed further includes a fifth pixel point, a sixth pixel point, a seventh pixel point, and an eighth pixel point, the fifth pixel point being a pixel point corresponding to a fifth pixel value, the sixth pixel point being a pixel point corresponding to a sixth pixel value, the seventh pixel point being a pixel point corresponding to a seventh pixel value, and the eighth pixel point being a pixel point corresponding to an eighth pixel value; the method further comprises the steps of:
when the difference between the first pixel value and the second pixel value, the difference between the second pixel value and the third pixel value, and the difference between the third pixel value and the fourth pixel value are all greater than the target threshold, and the difference between the fifth pixel value and the sixth pixel value, the difference between the sixth pixel value and the seventh pixel value, and the difference between the seventh pixel value and the eighth pixel value are all greater than the target threshold, determining a region formed by the first pixel point, the second pixel point, the third pixel point, the fourth pixel point, the fifth pixel point, the sixth pixel point, the seventh pixel point and the eighth pixel point as the target region;
the fifth pixel point, the sixth pixel point, the seventh pixel point and the eighth pixel point are four adjacent pixel points in the same column in the image to be processed, the fifth pixel point is adjacent to the first pixel point, the sixth pixel point is adjacent to the second pixel point, the seventh pixel point is adjacent to the third pixel point, and the eighth pixel point is adjacent to the fourth pixel point.
6. A method according to claim 3, wherein determining, in the image to be processed, a region to be processed corresponding to the target region includes:
acquiring a first preset parameter and a second preset parameter;
determining parameters corresponding to the first mode according to the corresponding relation between the input mode and the parameters;
determining a first area to be processed according to the first preset parameter and the parameter corresponding to the first mode;
acquiring action range information corresponding to the second preset parameter, and determining a second area to be processed according to the action range information;
and determining the first to-be-processed area and the second to-be-processed area as to-be-processed areas corresponding to the target area, wherein the to-be-processed areas comprise the target area.
7. The method of claim 3, wherein the performing low-pass filtering on the to-be-processed area to obtain a processed area corresponding to the to-be-processed area includes:
processing the pixel point corresponding to the second pixel value according to the first pixel value, the second pixel value, the third pixel value and the pixel parameter;
wherein the pixel parameter is determined from the first pixel value, the second pixel value, the third pixel value, and the fourth pixel value.
8. An image processing apparatus, comprising:
a determining unit configured to determine, in an image to be processed, a target area satisfying a jaggy condition;
the determining unit is further configured to determine, in the image to be processed, a region to be processed corresponding to the target region;
the processing unit is used for carrying out low-pass filtering processing on the region to be processed to obtain a processed region corresponding to the region to be processed;
the determining unit is further configured to determine, according to the processed area, a processed image corresponding to the image to be processed.
9. An image processing apparatus, comprising a processor;
the processor being configured to perform the method of any of claims 1-7.
10. A chip comprising a memory in which a computer program is stored, the computer program comprising program instructions, and a processor for executing the program instructions to implement the method of any of claims 1-7.
11. A chip module, characterized in that the chip module comprises a communication interface and a chip, wherein: the communication interface is used for carrying out internal communication of the chip module or carrying out communication between the chip module and external equipment; the chip being adapted to perform the method of any one of claims 1-7.
12. A computer readable storage medium, characterized in that the computer readable storage medium stores a computer program comprising program instructions which, when executed, cause the method of any of claims 1-7 to be performed.
CN202211727831.XA 2022-12-28 2022-12-28 Image processing method and device Pending CN116320204A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211727831.XA CN116320204A (en) 2022-12-28 2022-12-28 Image processing method and device


Publications (1)

Publication Number Publication Date
CN116320204A true CN116320204A (en) 2023-06-23

Family

ID=86821116

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211727831.XA Pending CN116320204A (en) 2022-12-28 2022-12-28 Image processing method and device

Country Status (1)

Country Link
CN (1) CN116320204A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination