CN115689850A - Image processing method, display device, and storage medium - Google Patents



Publication number
CN115689850A
Authority
CN
China
Prior art keywords
image frame
pixel point
adjacent
pixel points
target pixel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211275073.2A
Other languages
Chinese (zh)
Inventor
周杭
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing Xianjin Photoelectric Display Technology Research Institute
Chongqing HKC Optoelectronics Technology Co Ltd
Original Assignee
Chongqing Xianjin Photoelectric Display Technology Research Institute
Chongqing HKC Optoelectronics Technology Co Ltd
Application filed by Chongqing Xianjin Photoelectric Display Technology Research Institute, Chongqing HKC Optoelectronics Technology Co Ltd filed Critical Chongqing Xianjin Photoelectric Display Technology Research Institute

Landscapes

  • Image Processing (AREA)

Abstract

The present application relates to an image processing method, a display device, and a storage medium. The method includes: acquiring the gray scale value of a target pixel point in a first image frame and the gray scale values of N pixel points adjacent to the target pixel point, where N is a positive integer; summing the gray scale values of the target pixel point and the N pixel points and then averaging to obtain an average gray scale value; and processing the first image frame to obtain a second image frame, wherein the gray scale value of the target pixel point in the second image frame is the average gray scale value. The method and the device solve the problem of jagged edges caused by the DLG algorithm in the related art.

Description

Image processing method, display device, and storage medium
Technical Field
The present disclosure relates to the field of image frame processing, and in particular, to an image processing method, a display device, and a storage medium.
Background
In DLG (Dual Line Gate) applications, data of M × N × F is converted into M × N/2 × 2F so as to increase the refresh rate, where M represents the number of columns of one image frame, N represents the number of rows, and F represents the refresh rate, i.e., the number of image refreshes within 1 second. For example, a UHD (Ultra High Definition) data signal with M = 3840, N = 2160 and F = 60 Hz can be converted by DLG into a data signal with M = 3840, N = 1080 and F = 120 Hz, so that a 120 Hz refresh rate is achieved without increasing the data amount. Although the line data becomes 1/2 of the original, the physical resolution of the LCD (Liquid Crystal Display) is still N lines; therefore, the line data is duplicated, that is, lines 1 and 2 display the same data, lines 3 and 4 display the same data, and so on. As shown in fig. 1, the (1,1)-th pixel displays the gray level A before DLG, and both (1,1) and (2,1) display the gray level A after DLG. This algorithm in the related art performs well when displaying the horizontal line A or the vertical line B, but the oblique line C easily appears jagged, as shown in fig. 2.
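For illustration only (the following sketch is ours, not part of the patent disclosure), the line duplication described above can be expressed with a hypothetical NumPy-based helper that expands the half-height DLG frame back to N physical rows:

```python
import numpy as np

def dlg_duplicate(half_frame: np.ndarray) -> np.ndarray:
    """Expand an (N/2, M) DLG frame to (N, M) by repeating each row,
    so that rows 1 and 2 show the same data, rows 3 and 4 the same
    data, and so on."""
    return np.repeat(half_frame, 2, axis=0)
```

The function name and the NumPy array representation are assumptions made for the sketch.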
In view of the above problems in the related art, no effective solution exists at present.
Disclosure of Invention
The present application provides an image processing method, a display device and a storage medium, which are used to solve the problem of jagged edges caused by the DLG algorithm in the related art.
In a first aspect, the present application provides an image processing method, including: acquiring a gray scale value of a target pixel point in a first image frame and gray scale values of N pixel points adjacent to the target pixel point; the value of N is a positive integer; summing the gray-scale values of the target pixel point and the N pixel points, and then averaging to obtain an average gray-scale value; and processing the first image frame to obtain a second image frame, wherein the gray scale value of a target pixel point in the second image frame is the average gray scale value.
Optionally, obtaining gray scale values of N pixel points adjacent to the target pixel point includes: determining all pixel points adjacent to the target pixel point in the first image frame; determining the N pixel points from all adjacent pixel points; and acquiring the gray scale value of each pixel point in the N pixel points.
Optionally, the determining the N pixel points from all the adjacent pixel points includes: and determining the N pixel points from all adjacent pixel points based on the coordinate information of the pixel points in the first image frame, wherein the N pixel points are adjacent to each other.
Optionally, the determining the N pixel points from all adjacent pixel points includes: and determining the N pixel points from all adjacent pixel points based on the coordinate information of the pixel points in the first image frame, wherein part of the pixel points in the N pixel points are adjacent to each other.
Optionally, when the value of N is 3, the determining the N pixel points from all adjacent pixel points based on the coordinate information of the pixel points in the first image frame includes: determining a first pixel point, a second pixel point and a third pixel point in the first image frame as the 3 pixel points adjacent to the target pixel point; the abscissa of the first pixel point in the first image frame is the abscissa of the target pixel point plus 1 or minus 1, and the ordinate of the first pixel point is the same as that of the target pixel point; the abscissa of the second pixel point is the same as that of the target pixel point, and the ordinate of the second pixel point is the ordinate of the target pixel point plus 1 or minus 1; and both the abscissa and the ordinate of the third pixel point are the corresponding coordinates of the target pixel point plus 1 or minus 1.
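As an informal illustration of the coordinate rules above (the helper name and signature are ours, not the patent's), the three adjacent pixel points for one choice of the plus/minus-1 offsets can be computed as:

```python
def three_neighbors(x: int, y: int, dx: int = 1, dy: int = 1):
    """The three adjacent pixel points for offsets dx, dy in {+1, -1}:
    abscissa shifted only, ordinate shifted only, and both shifted."""
    first = (x + dx, y)        # abscissa shifted, ordinate unchanged
    second = (x, y + dy)       # abscissa unchanged, ordinate shifted
    third = (x + dx, y + dy)   # both coordinates shifted
    return first, second, third
```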
Optionally, in a case that the target pixel point is located in a last row or a last column in the first image frame, the determining the N pixel points from all adjacent pixel points includes: and determining the number of all the adjacent pixel points, and determining the number as the N.
Optionally, the obtaining a gray scale value of a target pixel point in the first image frame includes: obtaining the sorting information of the image frames in the video to be played, the image frames being sorted by time; determining the first image frames in the video to be played based on the sorting information, wherein every two adjacent first image frames are separated by M image frames, and the value of M is a positive integer; and acquiring the gray scale value of the target pixel point in the first image frame.
Optionally, the obtaining a gray scale value of a target pixel point in the first image frame includes: obtaining the sorting information of the image frames in the video to be played, the image frames being sorted by time; determining the first image frame in the video to be played based on the sorting information, wherein the first image frame is an image frame in a target image frame combination, the target image frame combination includes L first image frames that are consecutive in time, and P image frames are arranged between two adjacent target image frame combinations, the values of L and P being positive integers; and acquiring the gray scale value of the target pixel point in the first image frame.
In a second aspect, a display device is provided, which includes a display panel, a processor, a communication interface, a memory and a communication bus, wherein the display panel, the processor, the communication interface and the memory communicate with each other via the communication bus; a memory for storing a computer program; a processor configured to implement the image processing method according to the first aspect on the display panel when executing the program stored in the memory.
In a third aspect, a computer-readable storage medium is provided, on which a computer program is stored which, when being executed by a processor, carries out the method steps of any of the embodiments of the first aspect.
Compared with the prior art, the technical scheme provided by the embodiment of the application has the following advantages:
Through the embodiments of the present application, the average gray scale value of a target pixel point and its N adjacent pixel points in the first image frame is used as the gray scale value of the target pixel point in the second image frame. In this way, pixel points that had no gray scale value in the first image frame acquire one in the second image frame, while the gray scale values of pixel points that originally had one are reduced, which weakens jagged transitions, solves the problem of jagged edges caused by the DLG algorithm in the related art, and improves the display effect.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious for those skilled in the art that other drawings can be obtained according to the drawings without inventive exercise.
Fig. 1 is a schematic diagram of an image frame after DLG processing in the related art;
fig. 2 is a second schematic diagram of an image frame after DLG processing in the related art;
fig. 3 is a schematic flowchart of an image processing method according to an embodiment of the present application;
fig. 4 is a schematic diagram of an image frame after undergoing an image processing method according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Fig. 3 is a schematic flowchart of an image processing method according to an embodiment of the present application, and as shown in fig. 3, the method includes:
301, acquiring a gray scale value of a target pixel point in a first image frame and gray scale values of N pixel points adjacent to the target pixel point; the value of N is a positive integer;
It should be noted that, in the embodiments of the present application, the first image frame may be an image frame in a video to be played, and may be an image frame after DLG processing. In a specific example, as shown in fig. 4, the image frame after DLG processing is the one on the left, where only the labeled pixel points have the gray level C, and the gray scale value of the unlabeled pixel points is zero. Taking the pixel point in row 3 and column 3 as an example, the pixel points adjacent to (3,3) are the 8 pixel points (2,2), (2,3), (2,4), (3,2), (3,4), (4,2), (4,3) and (4,4), so the value of N may be 1 to 8.
Step 302, summing the gray-scale values of the target pixel point and the N pixel points, and then averaging to obtain an average gray-scale value;
for example, in fig. 4, if the value of N is 3, it may be (2,2), (2,3), (2,4), or (3,4), (4,3), (4,4), which may be selected according to actual conditions, or if the value of N is 4, it may be (2,2), (2,3), (2,4) (3,2), or (3,4), (4,2), (4,3), (4,4). That is to say, in this application, the value of N can be taken as a corresponding value according to the actual situation, and after the value of N is determined, which ones of the adjacent pixel points are specifically selected can also be selected correspondingly according to the actual requirement.
Step 303, processing the first image frame to obtain a second image frame, wherein the gray scale value of the target pixel point in the second image frame is an average gray scale value.
As shown in fig. 4, if the value of N is 3, and the 3 adjacent pixel points are the pixel point whose abscissa is increased by 1 with the ordinate unchanged, the pixel point whose ordinate is increased by 1 with the abscissa unchanged, and the pixel point whose abscissa and ordinate are both increased by 1, then the gray scale value of each pixel point in the second image frame is as shown in the image frame on the right side of fig. 4.
It can be seen from the foregoing steps 301 to 303 that the average gray scale value of the target pixel point and its N adjacent pixel points in the first image frame is used as the gray scale value of the target pixel point in the second image frame. After this processing, pixel points that had no gray scale value in the first image frame acquire one in the second image frame, while the gray scale values of pixel points that originally had one are reduced, thereby weakening jagged transitions, solving the problem of jagged edges caused by the DLG algorithm in the related art, and improving the display effect.
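Steps 301 to 303 can be sketched informally as follows. The function name, the NumPy representation, and the offset-list interface are our assumptions, not part of the disclosed method; out-of-frame neighbors are simply skipped, which also anticipates the last-row/last-column case discussed later:

```python
import numpy as np

def smooth_frame(first: np.ndarray, offsets) -> np.ndarray:
    """For each pixel, sum its gray scale value with those of the
    chosen in-frame neighbors, then divide by the number of values
    used (sum, then average), producing the second image frame."""
    rows, cols = first.shape
    second = first.astype(float).copy()
    for r in range(rows):
        for c in range(cols):
            values = [float(first[r, c])]
            for dr, dc in offsets:
                rr, cc = r + dr, c + dc
                if 0 <= rr < rows and 0 <= cc < cols:  # skip out-of-frame neighbors
                    values.append(float(first[rr, cc]))
            second[r, c] = sum(values) / len(values)
    return second
```

For example, with the N = 3 neighbor choice from fig. 4, `offsets` would be `[(0, 1), (1, 0), (1, 1)]`.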
In an optional implementation manner of this embodiment of the present application, regarding the manner of obtaining gray-scale values of N pixel points adjacent to the target pixel point in step 301, the method further includes:
step 11, determining all pixel points adjacent to a target pixel point in a first image frame;
step 12, determining N pixel points from all adjacent pixel points;
and step 13, obtaining the gray scale value of each pixel point in the N pixel points.
For the above steps 11 to 13, taking fig. 4 as an example, the first image frame is the left image frame and the target pixel point is the pixel point with the gray scale value C at the upper left corner. There are 8 pixel points adjacent to the target pixel point (3,3), namely (2,2), (2,3), (2,4), (3,2), (3,4), (4,2), (4,3) and (4,4). Among them, only the pixel point (4,3) has the gray scale value C; the gray scale values of the other pixel points in the first image frame are all 0.
For the manner of determining N pixel points from all adjacent pixel points involved in the above step 12, further, the method may be:
and step 21, determining N pixel points from all adjacent pixel points based on the coordinate information of the pixel points in the first image frame, wherein the N pixel points are adjacent to each other.
Also in the above fig. 4, the pixel points adjacent to the target pixel point (3,3) are (2,2), (2,3), (2,4), (3,2), (3,4), (4,2), (4,3) and (4,4). In this application, the N pixel points being "adjacent to each other" means that each of them is adjacent to another of the N pixel points; in the case that the value of N is 4, the 4 pixel points may, for example, be (2,2), (2,3), (2,4) and (3,4), or (3,2), (4,2), (4,3) and (4,4). N pixel points that are adjacent to each other give a better display effect after averaging than N pixel points that are not, because an average taken over scattered pixel points may differ greatly from the gray scale values of the pixel points immediately around the target pixel point, such as (3,4), thereby affecting the final display effect. Therefore, the average gray scale value obtained from N mutually adjacent pixel points gives a better display effect.
And step 22, determining N pixel points from all adjacent pixel points based on the coordinate information of the pixel points in the first image frame, wherein part of the pixel points in the N pixel points are adjacent to each other.
In an optional implementation manner of the embodiments of the present application, under some special conditions, for example when a defective pixel occurs among the pixel points around the target pixel point so that the N pixel points cannot all be adjacent to each other, part of the N pixel points may instead be selected to be adjacent to each other. For example, if the value of N is 4, 3 of the N pixel points may be selected to be mutually adjacent, and the remaining 1 pixel point may be selected from the other pixel points adjacent to the target pixel point.
In an optional implementation manner of the embodiment of the present application, when the value of N is 3, the determining, in the step 21, N pixel points from all adjacent pixel points based on the coordinate information of the pixel points in the first image frame, further may include:
step 31, determining a first pixel point, a second pixel point and a third pixel point in the first image frame as 3 pixel points adjacent to the target pixel point;
the abscissa of the first pixel point in the first image frame is added with 1 or subtracted with 1 relative to the abscissa of the target pixel point, and the ordinate of the first pixel point in the first image frame is the same as the ordinate of the target pixel point; the abscissa of the second pixel point in the first image frame is the same as the abscissa of the target pixel point, and the ordinate of the second pixel point in the first image frame is added with 1 or subtracted with 1 relative to the ordinate of the target pixel point; the abscissa of the third pixel point adds 1 or subtracts 1 to the abscissa of the target pixel point in the first image frame, and the ordinate of the third pixel point adds 1 or subtracts 1 to the ordinate of the target pixel point in the first image frame.
Taking the above-mentioned fig. 4 as an example, the pixel points adjacent to the target pixel point (3,3) are (2,2), (2,3), (2,4), (3,2), (3,4), (4,2), (4,3) and (4,4); the 3 adjacent pixel points may then be, for example, (3,4), (4,3) and (4,4), or (2,2), (2,3) and (3,2). N pixel points that are adjacent to each other give a better display effect after averaging than N pixel points that are not, because an average taken over scattered pixel points may differ greatly from the gray scale values of the pixel points immediately around the target pixel point, thereby affecting the final display effect.
It should be noted that, when the target pixel is located in the last row or the last column of the first image frame, the method for determining N pixels from all adjacent pixels involved in the step 12 may further be: and determining the number of all adjacent pixel points, and determining the number as N.
If the target pixel point is located in the last row or the last column, taking fig. 4 as an example with the target pixel point being the pixel point with the gray scale value C in the last column, all of its adjacent pixel points are determined as the N pixel points, so as to obtain the average gray scale value over all of them. A pixel point in the last row or last column has fewer than 8 adjacent pixel points; using the average gray scale value of all the adjacent pixel points as the gray scale value of the pixel point does not affect other pixel points, so the display effect is better.
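The boundary handling described above, where N becomes the number of neighbors that actually exist, can be illustrated with a hypothetical helper (name and signature are ours):

```python
def adjacent_count(r: int, c: int, rows: int, cols: int) -> int:
    """Number of in-frame 8-neighbors of pixel (r, c); this count is
    used as N for pixels in the last (or first) row or column."""
    count = 0
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr == 0 and dc == 0:
                continue  # the pixel itself is not its own neighbor
            if 0 <= r + dr < rows and 0 <= c + dc < cols:
                count += 1
    return count
```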
In a further optional implementation manner of this embodiment of this application, regarding the manner of obtaining the gray-scale value of the target pixel point in the first image frame in step 301, the method may further include:
step 41, obtaining the sorting information of the image frames in the video to be played based on time sorting;
step 42, determining first image frames in the video to be played based on the sorting information, wherein every two adjacent first image frames are separated by M image frames; the value of M is a positive integer;
and 43, acquiring the gray-scale value of the target pixel point in the first image frame.
As can be seen from the above steps 41 to 43, in the present application not every image frame in the video to be played is processed; instead, one frame is processed every M image frames. The value of M can be set according to actual conditions, for example 2 or 3. It should be noted that although the processing of steps 301 to 303 reduces jagged edges, it also introduces some edge blurring; for the video to be played, performing the processing only on part of the image frames and leaving the remaining frames unprocessed reduces this edge blurring to a certain extent. That is to say, the present application achieves not only the effect of reducing jagged edges but also the effect of reducing edge blurring.
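The frame-selection rule of steps 41 to 43 (one processed frame, then M unprocessed frames) can be sketched as follows; the function name is hypothetical:

```python
def frames_to_process(total_frames: int, m: int):
    """Indices of the first image frames when every two adjacent
    processed frames are separated by m unprocessed frames."""
    return list(range(0, total_frames, m + 1))
```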
In a further optional implementation manner of this embodiment of this application, regarding the manner of obtaining the gray-scale value of the target pixel point in the first image frame in step 301, the method may further include:
step 51, obtaining the sorting information of the image frames in the video to be played based on time sorting;
step 52, determining a first image frame in the video to be played based on the sorting information, wherein the first image frame is an image frame in a target image frame combination, the target image frame combination comprises L first image frames which are continuous in time, and P image frames are arranged between two adjacent target image frame combinations; the values of L and P are positive integers;
and 53, acquiring a gray-scale value of a target pixel point in the first image frame.
In the above steps 41 to 43, steps 301 to 303 are performed on a single image frame every M image frames, whereas in the above steps 51 to 53, steps 301 to 303 are performed on a target image frame combination every P image frames; the values of P and M may be the same or different. The number L of image frames in a target image frame combination may be an integer greater than 1, such as 2 or 3. In this way, similarly to steps 41 to 43, both the effect of reducing jagged edges and the effect of reducing edge blurring are achieved.
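Similarly, the combination-based selection of steps 51 to 53 can be sketched as follows, with hypothetical names:

```python
def combination_frames(total_frames: int, l: int, p: int):
    """Indices of frames inside target image frame combinations:
    runs of l consecutive processed frames separated by p
    unprocessed frames."""
    indices, start = [], 0
    while start < total_frames:
        indices.extend(range(start, min(start + l, total_frames)))
        start += l + p  # skip over the p unprocessed frames
    return indices
```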
Corresponding to fig. 3, an embodiment of the present application further provides an image processing apparatus, including:
the acquisition module is used for acquiring the gray-scale value of a target pixel point in the first image frame and the gray-scale values of N pixel points adjacent to the target pixel point; the value of N is a positive integer;
the first processing module is used for summing the gray-scale values of the target pixel point and the N pixel points and then averaging the gray-scale values to obtain an average gray-scale value;
and the second processing module is used for processing the first image frame to obtain a second image frame, wherein the gray scale value of the target pixel point in the second image frame is an average gray scale value.
Through the above apparatus, the average gray scale value of a target pixel point and its N adjacent pixel points in the first image frame is used as the gray scale value of the target pixel point in the second image frame. In this way, pixel points that had no gray scale value in the first image frame acquire one in the second image frame, while the gray scale values of pixel points that originally had one are reduced, which weakens jagged transitions, solves the problem of jagged edges caused by the DLG algorithm in the related art, and improves the display effect.
In an optional implementation manner of the embodiment of the present application, the obtaining module in the embodiment of the present application further includes: the first determining unit is used for determining all pixel points adjacent to the target pixel point in the first image frame; the second determining unit is used for determining N pixel points from all adjacent pixel points; the first obtaining unit is used for obtaining the gray-scale value of each pixel point in the N pixel points.
In an optional implementation manner of the embodiment of the present application, the second determining unit includes: the first determining subunit is configured to determine, based on coordinate information of the pixels in the first image frame, N pixels from all adjacent pixels, where the N pixels are adjacent to each other.
In an optional implementation manner of the embodiment of the present application, the second determining unit may further include: and the second determining subunit is used for determining N pixel points from all adjacent pixel points based on the coordinate information of the pixel points in the first image frame, wherein part of the N pixel points are adjacent to each other.
In an optional implementation manner of the embodiments of the present application, in a case that the value of N is 3, the first determining subunit includes: a third determining subunit, configured to determine a first pixel point, a second pixel point and a third pixel point in the first image frame as the 3 pixel points adjacent to the target pixel point; the abscissa of the first pixel point in the first image frame is the abscissa of the target pixel point plus 1 or minus 1, and the ordinate of the first pixel point is the same as that of the target pixel point; the abscissa of the second pixel point is the same as that of the target pixel point, and the ordinate of the second pixel point is the ordinate of the target pixel point plus 1 or minus 1; and both the abscissa and the ordinate of the third pixel point are the corresponding coordinates of the target pixel point plus 1 or minus 1.
In an optional implementation manner of the embodiment of the present application, in a case where the target pixel point is located in a last row or a last column in the first image frame, the first determining subunit includes: and the fourth determining subunit is used for determining the number of all adjacent pixel points, and determining the number as N.
In an optional implementation manner of the embodiment of the present application, the obtaining module further may include: the second acquisition unit is used for acquiring the sequencing information of the image frames in the video to be played based on time sequencing; the third determining unit is used for determining first image frames in the video to be played based on the sorting information, wherein M image frames are arranged between every two adjacent first image frames; the value of M is a positive integer; and the third acquisition unit is used for acquiring the gray-scale value of the target pixel point in the first image frame.
In an optional implementation manner of the embodiment of the present application, the obtaining module may further include: the fourth acquisition unit is used for acquiring the sequencing information of the image frames in the video to be played based on time sequencing; the fourth determining unit is used for determining a first image frame in the video to be played based on the sequencing information, wherein the first image frame is an image frame in a target image frame combination, the target image frame combination comprises L first image frames which are continuous in time, and P image frames are arranged between every two adjacent target image frame combinations; the values of L and P are positive integers; and the fifth acquisition unit is used for acquiring the gray-scale value of the target pixel point in the first image frame.
As shown in fig. 5, an embodiment of the present application provides a display device, which includes a display panel 110, a processor 111, a communication interface 112, a memory 113 and a communication bus 114, wherein the processor 111, the communication interface 112 and the memory 113 communicate with each other through the communication bus 114;
a memory 113 for storing a computer program;
in an embodiment of the present application, when the processor 111 is configured to execute the program stored in the memory 113, the image processing method provided in any one of the foregoing method embodiments is also similar in function, and is not described herein again.
The present application also provides a computer readable storage medium, on which a computer program is stored, which when executed by a processor implements the steps of the image processing method as provided in any one of the method embodiments described above.
It is noted that, in this document, relational terms such as "first" and "second," and the like, are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of another identical element in a process, method, article, or apparatus that comprises the element.
The foregoing are merely exemplary embodiments of the present invention, which enable those skilled in the art to understand or practice the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. An image processing method, comprising:
acquiring a gray scale value of a target pixel point in a first image frame and gray scale values of N pixel points adjacent to the target pixel point; the value of N is a positive integer;
summing the gray-scale values of the target pixel point and the N pixel points, and then averaging to obtain an average gray-scale value;
and processing the first image frame to obtain a second image frame, wherein the gray scale value of a target pixel point in the second image frame is the average gray scale value.
2. The method of claim 1, wherein the acquiring gray-scale values of N pixel points adjacent to the target pixel point comprises:
determining all pixel points adjacent to the target pixel point in the first image frame;
determining the N pixel points from all adjacent pixel points;
and acquiring the gray scale value of each pixel point in the N pixel points.
3. The method of claim 2, wherein the determining the N pixel points from all adjacent pixel points comprises:
and determining the N pixel points from all adjacent pixel points based on the coordinate information of the pixel points in the first image frame, wherein the N pixel points are adjacent to each other.
4. The method of claim 2, wherein the determining the N pixel points from all adjacent pixel points comprises:
and determining the N pixel points from all adjacent pixel points based on the coordinate information of the pixel points in the first image frame, wherein part of the pixel points in the N pixel points are adjacent to each other.
5. The method of claim 3, wherein, in a case that the value of N is 3, the determining the N pixel points from all adjacent pixel points based on the coordinate information of the pixel points in the first image frame comprises:
determining a first pixel point, a second pixel point and a third pixel point in the first image frame as 3 pixel points adjacent to the target pixel point;
wherein, in the first image frame, the abscissa of the first pixel point is the abscissa of the target pixel point plus 1 or minus 1, and the ordinate of the first pixel point is the same as the ordinate of the target pixel point; the abscissa of the second pixel point is the same as the abscissa of the target pixel point, and the ordinate of the second pixel point is the ordinate of the target pixel point plus 1 or minus 1; and the abscissa of the third pixel point is the abscissa of the target pixel point plus 1 or minus 1, and the ordinate of the third pixel point is the ordinate of the target pixel point plus 1 or minus 1.
6. The method of claim 2, wherein, in a case that the target pixel point is located in the last row or the last column of the first image frame, the determining the N pixel points from all adjacent pixel points comprises:
determining the number of all the adjacent pixel points, and taking that number as the value of N.
7. The method of claim 1, wherein obtaining the gray-scale value of the target pixel point in the first image frame comprises:
acquiring ordering information of the image frames in a video to be played, the image frames being ordered by time;
determining the first image frames in the video to be played based on the ordering information, wherein every two adjacent first image frames are separated by M image frames; the value of M is a positive integer;
and acquiring a gray-scale value of a target pixel point in the first image frame.
8. The method of claim 1, wherein obtaining the gray-scale value of the target pixel point in the first image frame comprises:
acquiring ordering information of the image frames in a video to be played, the image frames being ordered by time;
determining the first image frame in the video to be played based on the ordering information, wherein the first image frame is an image frame in a target image frame combination, the target image frame combination comprises L temporally consecutive first image frames, and two adjacent target image frame combinations are separated by P image frames; the values of L and P are positive integers;
and acquiring a gray-scale value of a target pixel point in the first image frame.
9. A display device, comprising a display panel, a processor, a communication interface, a memory, and a communication bus, wherein the display panel, the processor, the communication interface, and the memory communicate with each other through the communication bus;
a memory for storing a computer program;
a processor, configured to implement, on the display panel, the image processing method according to any one of claims 1 to 8 when executing the program stored in the memory.
10. A computer-readable storage medium, on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 8.
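The smoothing described in claims 1 and 5–7 amounts to replacing each target pixel's gray-scale value with the average of the pixel and its adjacent pixels, falling back to however many neighbours exist on the last row or column. A minimal Python sketch of that step, plus the frame-selection rule of claim 7; the function names (`smooth_frame`, `select_first_frames`), the choice of the right, lower, and lower-right neighbours for N = 3, and the integer (floor) average are illustrative assumptions, not part of the claims:

```python
def smooth_frame(frame):
    """Produce a second image frame: each pixel's gray-scale value becomes the
    average of the target pixel and its right, lower, and lower-right adjacent
    pixels (N = 3 in the interior; fewer on the last row/column, per claim 6)."""
    rows, cols = len(frame), len(frame[0])
    out = [[0] * cols for _ in range(rows)]
    for y in range(rows):
        for x in range(cols):
            # Target pixel plus candidate adjacent pixel coordinates.
            coords = [(y, x), (y, x + 1), (y + 1, x), (y + 1, x + 1)]
            # Keep only coordinates that lie inside the first image frame.
            vals = [frame[j][i] for j, i in coords if j < rows and i < cols]
            # Sum, then average (floor division keeps integer gray-scale values).
            out[y][x] = sum(vals) // len(vals)
    return out


def select_first_frames(num_frames, m):
    """Claim 7: choose first-image-frame indices such that every two adjacent
    first image frames are separated by M intermediate image frames."""
    return list(range(0, num_frames, m + 1))
```

For example, `smooth_frame([[0, 100], [100, 100]])` averages the top-left pixel with its three neighbours to 75, while edge pixels average over the two (or one) neighbours that remain; `select_first_frames(7, 2)` picks indices 0, 3, and 6, leaving two unprocessed frames between consecutive first image frames.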
CN202211275073.2A 2022-10-18 2022-10-18 Image processing method, display device, and storage medium Pending CN115689850A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211275073.2A CN115689850A (en) 2022-10-18 2022-10-18 Image processing method, display device, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211275073.2A CN115689850A (en) 2022-10-18 2022-10-18 Image processing method, display device, and storage medium

Publications (1)

Publication Number Publication Date
CN115689850A true CN115689850A (en) 2023-02-03

Family

ID=85067459

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211275073.2A Pending CN115689850A (en) 2022-10-18 2022-10-18 Image processing method, display device, and storage medium

Country Status (1)

Country Link
CN (1) CN115689850A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116504190A (en) * 2023-02-27 2023-07-28 广州文石信息科技有限公司 Image processing method and device for electronic ink screen and related equipment
CN116504190B (en) * 2023-02-27 2024-04-09 广州文石信息科技有限公司 Image processing method and device for electronic ink screen and related equipment

Similar Documents

Publication Publication Date Title
EP3640931A1 (en) Method of compensating mura defect of display panel, and display panel
WO2018201512A1 (en) Method of compensating mura defect of display panel, and display panel
JP4002871B2 (en) Method and apparatus for representing color image on delta structure display
WO2018201533A1 (en) Method of compensating mura defect of display panel, and display panel
US8837854B2 (en) Image processing method for boundary resolution enhancement
US7869665B2 (en) Composite method and apparatus for adjusting image resolution
CN108109109B (en) Super-resolution image reconstruction method, device, medium and computing equipment
CN107342037B (en) Data conversion method, device and computer readable storage medium
US8830257B2 (en) Image displaying apparatus
CN115689850A (en) Image processing method, display device, and storage medium
CN112614468B (en) Brightness compensation method and system of display panel
CN116052605A (en) Ink screen refreshing method and device, ink screen equipment and storage medium
CN113538271A (en) Image display method, image display device, electronic equipment and computer readable storage medium
CN104134189B (en) A kind of method and device of image amplification
KR100271261B1 (en) Image data processing method and image data processing apparatus
US7532773B2 (en) Directional interpolation method and device for increasing resolution of an image
JPWO2014102876A1 (en) Image processing apparatus and image processing method
CN117524144A (en) Display panel compensation method, display device and storage medium
US9594955B2 (en) Modified wallis filter for improving the local contrast of GIS related images
CN112261242B (en) Image data processing method and device
US6529642B1 (en) Method for enlarging/reducing digital images
CN113035147B (en) Gray scale compensation method and device of display panel
CN112001976A (en) Dynamic image processing method
EP2509045B1 (en) Method of, and apparatus for, detecting image boundaries in video data
US10163191B2 (en) Method of scaling up image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination