CN112308794A - Method and apparatus for correcting display image, and computer-readable storage medium - Google Patents


Info

Publication number
CN112308794A
Authority
CN
China
Prior art keywords
image
perspective transformation
corner points
coordinate values
correction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011167154.1A
Other languages
Chinese (zh)
Inventor
杨剑锋
陈林
夏大学
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen TCL Digital Technology Co Ltd
Original Assignee
Shenzhen TCL Digital Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen TCL Digital Technology Co Ltd filed Critical Shenzhen TCL Digital Technology Co Ltd
Priority to CN202011167154.1A priority Critical patent/CN112308794A/en
Publication of CN112308794A publication Critical patent/CN112308794A/en
Priority to PCT/CN2021/124839 priority patent/WO2022089263A1/en
Pending legal-status Critical Current

Classifications

    • G06T5/92
    • G06N3/045 Combinations of networks (G06N3/02 Neural networks; G06N3/04 Architecture, e.g. interconnection topology)
    • G06T5/80
    • G06T7/11 Region-based segmentation
    • G06T7/13 Edge detection
    • G06T7/136 Segmentation; edge detection involving thresholding
    • G06T2207/10024 Color image
    • G06T2207/20021 Dividing image into blocks, subimages or windows
    • G06T2207/20084 Artificial neural networks [ANN]
    • G06T2207/20112 Image segmentation details
    • G06T2207/20164 Salient point detection; corner detection

Abstract

The invention relates to the technical field of image processing, and discloses a method, an apparatus, and a computer-readable storage medium for correcting a display image, wherein the method comprises the following steps: extracting a reference luminance image from an image to be detected; extracting characteristic points of the reference luminance image, and performing perspective transformation processing on the reference luminance image based on the characteristic points; and performing correction processing on the reference luminance image subjected to perspective transformation processing to obtain a corresponding corrected image. In this way, perspective transformation and correction processing are performed on the image to be detected to obtain an image that can completely fill the display panel, improving the display effect of the image.

Description

Method and apparatus for correcting display image, and computer-readable storage medium
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to a method and an apparatus for correcting a display image, and a computer-readable storage medium.
Background
When a display panel displays an image captured by a camera, the displayed image is deformed to some extent due to factors such as tilted placement of the display panel, deviation of the camera shooting angle, and camera lens distortion, resulting in a poor display effect.
Disclosure of Invention
The invention provides a method and equipment for correcting a display image and a computer readable storage medium, aiming at improving the display effect of the display image.
To achieve the above object, the present invention provides a correction method of a display image, the method comprising:
extracting a reference brightness image from an image to be detected;
extracting characteristic points of the reference brightness image;
performing perspective transformation processing on the reference luminance image based on the feature point;
and correcting the reference brightness image subjected to perspective transformation to obtain a corresponding corrected image.
Optionally, the performing perspective transformation processing on the reference luminance image based on the feature point includes:
detecting at least four corner points in the reference brightness image, and screening the at least four corner points according to a preset procedure to obtain at least four target corner points;
determining correction coordinate values of at least four absolute corner points according to the initial coordinate values of the at least four target corner points and the pixel values of the at least four target corner points in the reference brightness image;
and determining a perspective transformation matrix based on the initial coordinate values of the at least four target corner points and the corrected coordinate values of the at least four absolute corner points, and transforming the reference brightness image based on the perspective transformation matrix to obtain the reference brightness image subjected to perspective transformation.
Optionally, the step of determining a perspective transformation matrix based on the initial coordinate values of the at least four target corner points and the corrected coordinate values of the at least four absolute corner points, and transforming the reference luminance image based on the perspective transformation matrix to obtain a reference luminance image subjected to perspective transformation processing includes:
extracting a plurality of initial sub-regions from the reference luminance image based on the at least four target corner points;
extracting a plurality of basic sub-regions from a region surrounded by the at least four absolute corner points, wherein the basic sub-regions correspond to the initial sub-regions;
determining a first characteristic coordinate value of each initial subregion based on the initial coordinate values of at least four target corner points, and determining a second characteristic coordinate value of each basic subregion based on the correction coordinate values of at least four absolute corner points;
respectively substituting the first characteristic coordinate values and the second characteristic coordinate values corresponding to the first characteristic coordinate values into a perspective transformation formula to determine a plurality of partition perspective transformation matrixes;
and transforming the corresponding initial sub-regions based on the partition perspective transformation matrixes respectively, and obtaining the reference brightness image subjected to perspective transformation after the transformation of each initial sub-region is finished.
Optionally, the step of determining correction coordinate values of at least four absolute corner points according to the initial coordinate values of the at least four target corner points and the pixel values thereof in the reference luminance image includes:
acquiring pixel values of all target corner points in the reference brightness image, and marking coordinate values of the target corner points with the pixel values as preset values as initial coordinate values;
determining the maximum value and the minimum value in the x-axis direction and the maximum value and the minimum value in the y-axis direction according to the initial coordinate values;
and determining correction coordinate values of at least four absolute corner points according to the maximum value and the minimum value in the x-axis direction and the maximum value and the minimum value in the y-axis direction.
Optionally, the extracting a reference luminance image of the image to be detected further includes:
extracting a reference brightness image displayed by a preset pixel order from an image to be detected;
and marking the pixel value of each point in the reference brightness image as a preset pixel value or 0 to obtain a binary image of the reference brightness image.
Optionally, the performing a correction process on the reference luminance image subjected to the perspective transformation process includes:
detecting and acquiring the outline of the reference brightness image subjected to perspective transformation;
and performing correction processing based on the outline of the reference brightness image subjected to perspective transformation processing.
Optionally, the performing a correction process based on the contour of the reference luminance image subjected to the perspective transformation process includes:
and carrying out distortion correction on the reference brightness image subjected to perspective transformation processing and falling in the minimum bounding rectangle to obtain a corrected image for displaying in a target display area.
Optionally, the performing a correction process based on the contour of the reference luminance image subjected to the perspective transformation process includes:
and correcting the outline of the reference brightness image subjected to perspective transformation processing by a correction factor.
Furthermore, in order to achieve the above object, the present invention also provides a correction apparatus for a display image, including a processor, a memory, and a correction program for a display image stored in the memory, the correction program for a display image being executed by the processor to implement the steps of the correction method for a display image as described in any one of the above.
Further, to achieve the above object, the present invention also provides a computer-readable storage medium having stored thereon a correction program of a display image, which when executed by a processor, realizes the steps of the correction method of the display image as described above.
Compared with the prior art, the present invention provides a method, an apparatus, and a computer-readable storage medium for correcting a display image, wherein the method comprises the following steps: extracting a reference luminance image from an image to be detected; extracting characteristic points of the reference luminance image, and performing perspective transformation processing on the reference luminance image based on the characteristic points; and performing correction processing on the reference luminance image subjected to perspective transformation processing to obtain a corresponding corrected image. In this way, perspective transformation and correction processing are performed on the image to be detected to obtain an image that can completely fill the display panel, improving the display effect of the image.
Drawings
Fig. 1 is a schematic diagram of a hardware configuration of a correction apparatus for a display image according to embodiments of the present invention;
FIG. 2 is a flowchart illustrating a first embodiment of a method for correcting a display image according to the present invention;
FIG. 3 is a diagram illustrating a first scenario of a first embodiment of a method for correcting a display image according to the present invention;
FIG. 4 is a diagram illustrating a second scenario of a first embodiment of a method for correcting a display image according to the present invention;
FIG. 5 is a flowchart illustrating a second embodiment of a method for correcting a display image according to the present invention;
FIG. 6 is a diagram illustrating a first scenario of a second embodiment of a method for correcting a display image according to the present invention;
FIG. 7 is a functional block diagram of a correction apparatus for display image according to a first embodiment of the present invention.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The correction apparatus for a display image referred to in the embodiments of the present invention is a device capable of network connection; for example, it may be a server, a cloud platform, or the like.
Referring to fig. 1, fig. 1 is a schematic diagram of the hardware configuration of a correction apparatus for a display image according to embodiments of the present invention. In the embodiment of the present invention, the correction apparatus for a display image may include a processor 1001 (e.g., a Central Processing Unit, CPU), a communication bus 1002, an input port 1003, an output port 1004, and a memory 1005. The communication bus 1002 realizes connection and communication among these components; the input port 1003 is used for data input; the output port 1004 is used for data output; the memory 1005 may be a high-speed RAM memory or a non-volatile memory such as a magnetic disk memory, and may optionally be a storage device independent of the processor 1001. Those skilled in the art will appreciate that the hardware configuration depicted in fig. 1 is not intended to limit the present invention; it may include more or fewer components than those shown, a combination of some components, or a different arrangement of components.
With continued reference to fig. 1, the memory 1005 of fig. 1, which is a readable storage medium, may include an operating system, a network communication module, an application program module, and a correction program for displaying an image. In fig. 1, the network communication module is mainly used for connecting to a server and performing data communication with the server; and the processor 1001 may call a correction program of the display image stored in the memory 1005 and execute the correction method of the display image according to the embodiment of the present invention.
The embodiment of the invention provides a method for correcting a display image.
Referring to fig. 2, fig. 2 is a flowchart illustrating a method for correcting a display image according to a first embodiment of the present invention.
In this embodiment, the method for correcting a display image is applied to a correction apparatus for a display image, and the method includes:
step S101, extracting a reference brightness image from an image to be detected;
step S102, extracting characteristic points of the reference brightness image;
step S103, performing perspective transformation processing on the reference brightness image based on the characteristic points;
step S104, performing correction processing on the reference brightness image subjected to perspective transformation processing to obtain a corresponding corrected image.
In this embodiment, the display image may be displayed on at least one of an LCD (Liquid Crystal Display) screen or a Mini LED (Light Emitting Diode) display screen.
Specifically, the display image referred to in step S101 may be a color image in RGB (red, green, blue) color mode, and the display image has different gray scales. Gray scale refers to the hierarchy of luminance levels between the darkest black and the brightest white of a display; it expresses light-dark contrast and the black-to-white transition, and the finer the transition, the clearer the image. Generally, 32-level and 256-level gray scales are most commonly used.
Specifically, the step S101 is followed by: extracting a reference brightness image displayed by a preset pixel order from an image to be detected;
further, labeling the pixel value of each point in the reference brightness image as a preset pixel value or 0, and obtaining a binary image of the reference brightness image.
Specifically, imaging data from an industrial camera is converted into an image to be detected in an image data format, and a reference luminance image displayed at the specified pixel gray scale is extracted from the image to be detected. The value of each pixel is obtained; pixels whose value is greater than the preset pixel value are labeled with the specified pixel value, and pixels whose value is less than or equal to the preset pixel value are labeled 0, so that each point in the reference luminance image is labeled either the specified pixel value or 0. The binarization threshold may be set to 0.25 × max(I), where max(I) is the maximum pixel value. A binarized image of the reference luminance image is thereby obtained. In this embodiment, the specified pixel value may correspond to a 255-level or 32-level gray scale, with max(I) being 255 or 32 accordingly.
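The thresholding rule above (binarization threshold 0.25 × max(I)) can be sketched in plain Python; `binarize` and the 3×3 `patch` are hypothetical names for illustration, not the patent's code:

```python
def binarize(image, specified_value=255):
    """Binarize a grayscale image given as a list of rows of ints.

    The threshold is 0.25 * max(I), as described above: pixels
    strictly greater than the threshold get the specified value,
    all others get 0.
    """
    max_i = max(max(row) for row in image)
    threshold = 0.25 * max_i
    return [[specified_value if p > threshold else 0 for p in row]
            for row in image]

# Example: a small patch with one bright region (threshold = 57.5)
patch = [[10, 200, 210],
         [12, 220, 230],
         [ 5,  15,  20]]
binary = binarize(patch)
```

The same function works for a 32-level gray scale by passing `specified_value=32`.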
Further, after the reference luminance image is obtained, its display area is coarsely positioned to obtain a more accurate binarized image of the coarsely positioned reference luminance image. Specifically, the binarized contour of the reference luminance image is detected, and the contour area of the binarized contour is calculated. In this embodiment, extraction of the binarized contour may be realized with a TensorFlow-based convolutional neural network; during contour extraction, when a contour area is greater than a first preset contour-area threshold, the corresponding binarized contour is marked as an effective binarized contour. Further, a minimum circumscribed rectangle is extracted from the effective binarized contour and expanded based on its coordinates to obtain the coarsely positioned reference luminance image. The Minimum Bounding Rectangle (MBR) is the maximum extent of a two-dimensional shape (e.g., a point set, line, or polygon) expressed in two-dimensional coordinates, i.e., the rectangle bounded by the maximum abscissa, minimum abscissa, maximum ordinate, and minimum ordinate of the vertices of the given shape.
Subtracting preset values from the x and y coordinates of the upper left vertex of the minimum circumscribed rectangle, subtracting the preset values from the x coordinate of the lower left vertex of the minimum circumscribed rectangle, adding the preset values to the x and y coordinates of the lower right vertex of the minimum circumscribed rectangle, and keeping the x and y coordinates of the upper right vertex of the minimum circumscribed rectangle unchanged to obtain the coarsely positioned reference brightness image. Specifically, referring to fig. 3, fig. 3 is a schematic diagram of a first scenario of a first embodiment of the correction method for a display image according to the present invention, in which the valid binarized contour is represented as a solid-line frame a, the minimum bounding rectangle extracted from the valid binarized contour is a dashed-line rectangle frame b in fig. 3, the roughly positioned contour is a dashed-line frame c in fig. 3, and the size of the preset value is related to the minimum bounding rectangle and the roughly positioned contour, and may be d as shown in fig. 3.
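A literal sketch of the expansion rule just described, under the assumption that vertices are (x, y) tuples and `d` is the preset value (the function name is illustrative, not from the patent):

```python
def expand_rectangle(top_left, bottom_left, bottom_right, top_right, d):
    """Expand a minimum bounding rectangle as described above:
    the top-left vertex has d subtracted from both coordinates,
    the bottom-left vertex has d subtracted from x only,
    the bottom-right vertex has d added to both coordinates,
    and the top-right vertex is left unchanged."""
    tlx, tly = top_left
    blx, bly = bottom_left
    brx, bry = bottom_right
    return ((tlx - d, tly - d),
            (blx - d, bly),
            (brx + d, bry + d),
            top_right)
```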
In addition, a Mini LED display screen consists of a number of independent, separated Mini LED lamp beads. Unlike the uniform whole-area light emission of an LCD, a Mini LED display is locally made up of discrete lamp beads, and many lamp beads together form a display that emits light as a whole. Morphological processing is therefore required after the adaptive binarization. Specifically, a dilation kernel is first constructed; the kernel size may be set to (20, 20) with an elliptical structure. Dilation is then applied to expand the discrete lamp-bead regions, one by one, into a continuous display area corresponding to the display image.
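The dilation step can be sketched with a naive plain-Python binary dilation; a square neighborhood stands in for the elliptical (20, 20) kernel described above, so this is an illustration of the operation, not the patent's implementation:

```python
def dilate(binary, radius=1):
    """Naive binary dilation: a pixel becomes 255 if any pixel
    within its (2*radius+1)-square neighborhood is 255.  Applied
    to a binarized Mini LED image, this merges discrete lamp-bead
    regions into a continuous display area."""
    h, w = len(binary), len(binary[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w and binary[ny][nx] == 255:
                        out[y][x] = 255
    return out
```

A production pipeline would typically use an optimized library routine with an elliptical structuring element instead of this O(h·w·k²) loop.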
After the binarized image of the reference luminance image is acquired, steps S102 and S103 are executed: extracting characteristic points of the reference luminance image, and performing perspective transformation processing on the reference luminance image based on the characteristic points.
It should be noted that the reference luminance image in steps S102 to S103 is a binarized image of the reference luminance image.
Perspective Transformation is a transformation in which, with the perspective center, image point, and target point collinear, the projection surface (perspective surface) is rotated about the trace line (perspective axis) by a certain angle according to the law of perspective rotation, while the projective geometry on the projection surface remains unchanged.
In general, the perspective transformation formula can be expressed as:

$[x_1, y_1, z_1]^T = A\,[x_0, y_0, z_0]^T, \qquad A = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix}$

where $[x_0, y_0, z_0]$ denotes an initial coordinate value of the reference luminance image, $[x_1, y_1, z_1]$ denotes a corrected coordinate value of the preliminary corrected image, and $A$ denotes the perspective transformation matrix. The perspective transformation matrix can therefore be determined from the coordinate values of corresponding points before and after correction, and the reference luminance image to be transformed can then be transformed based on that matrix.
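As a minimal illustrative sketch (not the patent's implementation), applying a 3×3 perspective transformation matrix to a 2-D point via homogeneous coordinates can be written in plain Python; the `identity` and `shift` matrices below are hypothetical examples:

```python
def apply_perspective(matrix, point):
    """Apply a 3x3 perspective transformation matrix A to a 2-D
    point: form the homogeneous vector [x, y, 1], multiply by A,
    then divide by the resulting z to project back to the plane."""
    x, y = point
    vec = [x, y, 1.0]
    x1, y1, z1 = (sum(matrix[r][c] * vec[c] for c in range(3))
                  for r in range(3))
    return (x1 / z1, y1 / z1)

# The identity matrix leaves points unchanged; a translation
# matrix shifts them.
identity = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
shift = [[1, 0, 5], [0, 1, -2], [0, 0, 1]]
```

Given four point correspondences (e.g., the target corner points and the corrected corner points), the eight free entries of `matrix` can be solved from the resulting linear system, which is what "determining the perspective transformation matrix" amounts to.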
Specifically, a plurality of corner points of the reference luminance image may be extracted by a corner detection method, and a plurality of feature points of the reference luminance image are then selected from them; the feature points may be several of 8 points such as the four corner points of the reference luminance image and the midpoints of its four edges. After the plurality of feature points of the reference luminance image are determined, the coordinate values of the corrected preliminary corrected image are determined based on them. It is to be understood that, ideally, the display image displayed on the display panel is a rectangular image of a certain size, and this rectangle is substantially the same size as the reference luminance image with only a slight difference, so the coordinate values of the corrected preliminary corrected image can be determined from the coordinate values of the plurality of feature points of the reference luminance image. For example, among the coordinate values of the feature points, the maximum x value x_max, the minimum x value x_min, the maximum y value y_max, and the minimum y value y_min in the x- and y-axis directions are obtained. The coordinate values of the corrected preliminary corrected image are then determined as (x_min, y_max), (x_min, y_min), (x_max, y_min), (x_max, y_max).
In this way, a perspective transformation matrix can be determined based on the coordinate values of the plurality of feature points of the reference luminance image and the coordinate values of the corrected preliminary corrected image, and the reference luminance image to be transformed can be transformed based on the perspective transformation matrix to obtain a corrected image.
In this way, a perspective transformation matrix can be determined based on the coordinate values of at least four corner points of the reference luminance image and the coordinate values of the corrected reference luminance image, and the reference luminance image to be transformed is transformed based on the perspective transformation matrix to obtain the reference luminance image subjected to perspective transformation processing. Generally, the reference luminance image subjected to perspective transformation processing is a perspective projection image of the binarized image of the reference luminance image.
When the reference luminance image subjected to perspective transformation processing is obtained, step S104 is executed: performing correction processing on the reference luminance image subjected to perspective transformation processing to obtain a corresponding corrected image.
Specifically, the contour of the reference luminance image subjected to the perspective transformation processing is detected and acquired, and the correction processing is performed based on the contour of the reference luminance image subjected to the perspective transformation processing.
Specifically, extracting a minimum circumscribed rectangle of the outline, and obtaining a reference brightness image which is in the minimum circumscribed rectangle and is subjected to perspective transformation processing; and carrying out distortion correction on the reference brightness image subjected to perspective transformation processing and falling in the minimum bounding rectangle to obtain a corrected image for displaying in a target display area.
Specifically, contour extraction is realized with a TensorFlow-based convolutional neural network. During contour extraction, the contour area of each contour is calculated, and when a contour area is greater than a second preset contour-area threshold, the corresponding contour is marked as an effective contour. The minimum circumscribed rectangle of the contour is then extracted to obtain the reference luminance image, subjected to perspective transformation processing, that falls within the minimum circumscribed rectangle.
In addition, the contour of the reference luminance image subjected to perspective transformation processing may be corrected by correction factors to obtain the region of that image. Specifically, a plurality of vertex coordinates of the contour are first determined, a correction factor is determined for each boundary based on the vertex coordinates, and the contour is then corrected based on the correction factor of each boundary. Specifically, the upper-left and lower-right vertices of the contour are determined as (x_1, y_1) and (x_2, y_2), and for each edge line of the horizontal positioning rectangle it is judged whether any pixel on that edge equals a specific pixel value, which may be 255 in this embodiment. The edge lines of the horizontal positioning rectangle comprise an upper edge line, a lower edge line, a left edge line, and a right edge line.
In this embodiment, the coordinate values of the upper edge line are denoted I_up([x_1, x_2], d_1), where d_1 is initialized to y_1. If one or more pixels on the upper edge line have the specific pixel value, the correction factor of the upper edge line is determined as d_1 − y_1; otherwise it is determined as d_1 − 1.
The coordinate values of the lower edge line are denoted I_down([x_1, x_2], d_2), where d_2 is initialized to y_2. If one or more pixels on the lower edge line have the specific pixel value, the correction factor of the lower edge line is determined as y_2 − d_2; otherwise it is determined as d_2 − 1.
The coordinate values of the left edge line are denoted I_left(d_3, [y_1, y_2]), where d_3 is initialized to x_1. If one or more pixels on the left edge line have the specific pixel value, the correction factor of the left edge line is determined as d_3 − x_1; otherwise it is determined as d_3 + 1.
The coordinate values of the right edge line are denoted I_right(d_4, [y_1, y_2]), where d_4 is initialized to x_2. If one or more pixels on the right edge line have the specific pixel value, the correction factor of the right edge line is determined as x_2 − d_4; otherwise it is determined as d_4 − 1.
Based on the above correction factors, the upper-left and lower-right vertices (x'_1, y'_1) and (x'_2, y'_2) of the reference luminance image subjected to perspective transformation processing are determined as (x'_1, y'_1) = (d_3, d_1) and (x'_2, y'_2) = (d_4, d_2). The region of the reference luminance image subjected to perspective transformation processing can then be determined from the coordinates of these two vertices.
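The upper-edge-line rule above can be sketched as follows; `upper_edge_correction` is a hypothetical helper for illustration (the other three edges follow analogously, with their own offsets and signs):

```python
def upper_edge_correction(image, x1, x2, d1, y1, specific=255):
    """Correction factor for the upper edge line I_up([x1, x2], d1):
    if any pixel on row d1 between columns x1..x2 equals the
    specific pixel value, the factor is d1 - y1; otherwise d1 - 1.
    `image` is a hypothetical list-of-rows representation."""
    row = image[d1][x1:x2 + 1]
    return d1 - y1 if specific in row else d1 - 1

# A 5x5 test image with a single bright pixel on row 2
img = [[0] * 5 for _ in range(5)]
img[2][3] = 255
```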
Further, since shooting-angle deviations, camera lens distortion, and the like may cause image abnormalities during photographing, the photographed image may exhibit barrel-type radial distortion. In this embodiment, the reference luminance image subjected to perspective transformation processing and falling within the minimum bounding rectangle is corrected based on a division model. Specifically, arc extraction is performed on each edge contour of the contour by a fast arc extraction method to obtain an arc corresponding to each edge, and the parameters of each arc are calculated. A distortion-center preselected region is then defined, centered on the reference luminance image subjected to perspective transformation processing within the minimum circumscribed rectangle. Taking each pixel point in the preselected region in turn as a candidate distortion center, the distortion coefficients of the arcs corresponding to that candidate center are calculated based on the general equation of a circle and the arc parameters; for each candidate center, the value-concentration interval of its distortion coefficients is determined, the number of distortion coefficients falling in that interval is counted, and the mean of all distortion coefficients in that interval is calculated. The pixel point whose value-concentration interval contains the largest number of distortion coefficients is taken as the actual distortion center, and the mean of all distortion coefficients in that interval is taken as the actual distortion coefficient. The distorted image is then automatically corrected according to the actual distortion center and the actual distortion coefficient to obtain a corrected image displayed in the target display area. In this way, the corrected image can completely fill the display panel. Referring to fig. 4, fig. 4 is a schematic diagram of a second scene of the first embodiment of the method for correcting a display image according to the present invention, in which the reference luminance image obtained after perspective transformation (on the right side of fig. 4) is not a complete rectangle; after the secondary correction, a complete rectangle (on the left side of fig. 4) is obtained.
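The per-pixel correction in this step can be sketched as follows. This is a minimal illustration of a single-coefficient division model, r_u = r_d / (1 + k * r_d^2), applied about the estimated distortion center; the function name and the exact model form are assumptions, not the patent's formulation:

```python
import math

def undistort_division(x, y, center, k):
    # Division model: undistorted radius r_u = r_d / (1 + k * r_d**2),
    # computed about the estimated distortion centre (xc, yc).
    xc, yc = center
    dx, dy = x - xc, y - yc
    rd = math.hypot(dx, dy)
    if rd == 0.0:
        return (x, y)  # the centre itself is unchanged
    ru = rd / (1.0 + k * rd * rd)
    s = ru / rd
    return (xc + dx * s, yc + dy * s)
```

With k = 0 the mapping is the identity; a positive k pulls points toward the distortion center, which is the direction needed to undo barrel distortion.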
According to the scheme, the reference luminance image is extracted from the image to be detected; feature points of the reference luminance image are extracted, and perspective transformation processing is performed on the reference luminance image based on the feature points; and correction processing is performed on the reference luminance image subjected to perspective transformation processing to obtain a corresponding corrected image. Therefore, perspective transformation and correction processing are performed on the image to be detected to obtain an image capable of completely filling the display panel, improving the display effect of the image.
As shown in fig. 5, a second embodiment of the present invention proposes a method for correcting a display image, based on the first embodiment shown in fig. 2, wherein the step of performing perspective transformation on the reference luminance image based on the at least four feature points to obtain a reference luminance image subjected to perspective transformation processing includes:
step S201: detecting at least four corner points in the reference brightness image, and filtering the at least four corner points according to a preset flow to obtain at least four target corner points;
step S202: determining correction coordinate values of at least four absolute corner points according to the initial coordinate values of the at least four target corner points and the pixel values of the at least four target corner points in the reference brightness image;
step S203: and determining a perspective transformation matrix based on the initial coordinate values of the at least four target corner points and the corrected coordinate values of the at least four absolute corner points, and transforming the reference brightness image based on the perspective transformation matrix to obtain the reference brightness image subjected to perspective transformation.
In this embodiment, at least four corner points in the reference luminance image are detected by the Harris corner detection method. The Harris corner extraction algorithm, also called the Plessey algorithm, was developed by Chris Harris and Mike Stephens on the basis of the H. Moravec algorithm and extracts corners through an autocorrelation matrix. The operator is inspired by the autocorrelation function in signal processing and defines a matrix M associated with that function. The eigenvalues of M are the first-order curvatures of the autocorrelation function; if both curvature values are high, the point is considered a corner feature. First, each pixel of the image is filtered with horizontal and vertical difference operators to obtain filtered pixels, and a filter matrix is determined from them. Gaussian smoothing is then applied to the filter values in the filter matrix to eliminate unnecessary isolated points and bulges, yielding a new filter matrix. A corner response function is obtained for each pixel from the new filter matrix, and pixel points of the reference luminance image whose corner response exceeds a corner threshold are determined to be corners, so that at least four corners are obtained.
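A minimal pure-Python sketch of the Harris response described above; a uniform 3x3 window stands in for the Gaussian smoothing step and k = 0.04 is a conventional choice, both assumptions rather than the patent's exact pipeline:

```python
def harris_response(img, k=0.04):
    # Gradients via central differences (borders left at zero), then the
    # per-pixel structure matrix M summed over a 3x3 window, and the
    # corner response R = det(M) - k * trace(M)^2.
    h, w = len(img), len(img[0])
    Ix = [[0.0] * w for _ in range(h)]
    Iy = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            Ix[y][x] = (img[y][x + 1] - img[y][x - 1]) / 2.0
            Iy[y][x] = (img[y + 1][x] - img[y - 1][x]) / 2.0
    R = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            a = b = c = 0.0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    gx, gy = Ix[y + dy][x + dx], Iy[y + dy][x + dx]
                    a += gx * gx
                    b += gx * gy
                    c += gy * gy
            R[y][x] = (a * c - b * b) - k * (a + c) ** 2
    return R
```

On a synthetic image containing a bright square, the response is strongly positive at the square's corner, negative along a straight edge, and zero in flat regions, which is exactly the thresholding behaviour the text relies on.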
Because the number of detected corner points is large while a perspective transformation generally needs only 4 points, the corner points need to be filtered to obtain the target corner points. In this embodiment, filtering is performed based on the coordinate values of the respective corner points.
Specifically, the maximum and minimum coordinate values in the x-axis and y-axis directions over all corner points are determined and expressed as x_min, x_max, y_min, y_max. Then the midpoint value x_mid of the coordinates in the x-axis direction and the midpoint value y_mid in the y-axis direction are calculated over all corner points, where

x_mid = (x_min + x_max) / 2, y_mid = (y_min + y_max) / 2
Then, calculating the distance d from each corner point to four different extreme points, namely, the upper left corner, the upper right corner, the lower left corner and the lower right corner, and from each corner point to four boundary center points, wherein the distance calculation formula is calculated as follows (it is noted that in the coordinate axes of the image, the upper left corner of the image is the origin of coordinates):
Distance d1 from a corner point (x, y) to the upper-left extreme point:

d1 = sqrt((x - x_min)^2 + (y - y_min)^2)

Distance d2 from the corner point to the upper-right extreme point:

d2 = sqrt((x - x_max)^2 + (y - y_min)^2)

Distance d3 from the corner point to the lower-left extreme point:

d3 = sqrt((x - x_min)^2 + (y - y_max)^2)

Distance d4 from the corner point to the lower-right extreme point:

d4 = sqrt((x - x_max)^2 + (y - y_max)^2)

Distance d5 from the corner point to the center point of the upper boundary:

d5 = sqrt((x - x_mid)^2 + (y - y_min)^2)

Distance d6 from the corner point to the center point of the lower boundary:

d6 = sqrt((x - x_mid)^2 + (y - y_max)^2)

Distance d7 from the corner point to the center point of the right boundary:

d7 = sqrt((x - x_max)^2 + (y - y_mid)^2)

Distance d8 from the corner point to the center point of the left boundary:

d8 = sqrt((x - x_min)^2 + (y - y_mid)^2)
After calculating each distance, the point with the smallest distance value for each reference point is determined as the corresponding target corner point; in this embodiment, the target corner points include target vertices and center corner points. The target vertices comprise an upper-left vertex corner point, an upper-right vertex corner point, a lower-left vertex corner point and a lower-right vertex corner point, and the center corner points comprise an upper boundary center corner point, a lower boundary center corner point, a left boundary center corner point and a right boundary center corner point. Specifically, since the minimum distance value is close to 0, the respective target corner points are determined as:
Upper-left vertex corner point: P_left_up = Min(d1(p_i));

Upper-right vertex corner point: P_right_up = Min(d2(p_i));

Lower-left vertex corner point: P_left_down = Min(d3(p_i));

Lower-right vertex corner point: P_right_down = Min(d4(p_i));

Upper boundary center corner point: C_up = Min(d5(p_i));

Lower boundary center corner point: C_down = Min(d6(p_i));

Left boundary center corner point: C_left = Min(d8(p_i));

Right boundary center corner point: C_right = Min(d7(p_i));

where Min(d(p_i)) denotes the corner point p_i for which the distance d is smallest.
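The filtering flow above can be sketched as follows; the function name and the dictionary packaging of the eight target corners are illustrative choices:

```python
import math

def filter_corners(corners):
    # Reference points: the four extreme corners and four boundary midpoints
    # of the corner set's bounding box (image origin at the upper left).
    xs = [p[0] for p in corners]
    ys = [p[1] for p in corners]
    x_min, x_max = min(xs), max(xs)
    y_min, y_max = min(ys), max(ys)
    x_mid, y_mid = (x_min + x_max) / 2.0, (y_min + y_max) / 2.0
    refs = {
        "left_up": (x_min, y_min), "right_up": (x_max, y_min),
        "left_down": (x_min, y_max), "right_down": (x_max, y_max),
        "up": (x_mid, y_min), "down": (x_mid, y_max),
        "left": (x_min, y_mid), "right": (x_max, y_mid),
    }
    # Each target corner is the detected corner nearest to its reference point.
    return {name: min(corners, key=lambda p: math.hypot(p[0] - rx, p[1] - ry))
            for name, (rx, ry) in refs.items()}
```

A detected corner that exactly coincides with a reference point has distance 0, which matches the text's remark that the minimum distance value is close to 0.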
Referring further to fig. 3, P in fig. 3 denotes an upper right corner point of the reference luminance image, and C denotes a right boundary center point of the reference luminance image.
After the target corner points are obtained, step S202 is executed: determining correction coordinate values of at least four absolute corner points according to the initial coordinate values of the at least four target corner points and their pixel values in the reference luminance image. The absolute corner points are the feature points of the reference luminance image subjected to perspective transformation, and correspond one-to-one to the target corner points in the reference luminance image.
The step S202 includes: acquiring pixel values of all target corner points in the reference brightness image, and marking coordinate values of the target corner points with the pixel values as preset values as initial coordinate values; determining the maximum value and the minimum value in the x-axis direction and the maximum value and the minimum value in the y-axis direction according to the initial coordinate values; and determining correction coordinate values of at least four absolute corner points according to the maximum value and the minimum value in the x-axis direction and the maximum value and the minimum value in the y-axis direction.
Specifically, first, the initial coordinate value of the i-th target corner point is expressed as (x_i0, y_i0), where i = 1, 2, ..., 8 corresponds to P_left_up, P_right_up, P_left_down, P_right_down, C_up, C_down, C_left, C_right, respectively.
Then, it is determined whether the target pixel value of each target corner point in the reference luminance image is a designated pixel value, which may be 255. Representing a target pixel value as I_i(x, y), the initial coordinate values of target corner points whose target pixel value equals the designated pixel value are added to preset coordinate sets, expressed in this embodiment as S_i(x), S_i(y), where i = 1, 2, ..., 8, x ∈ [max(x_i0 - a, 0), min(x_i0 + a, w)], y ∈ [max(y_i0 - a, 0), min(y_i0 + a, h)], a is a set fixed constant whose value ranges from 0 to 255, and w and h are the numbers of horizontal and vertical pixels of the reference luminance image.
Further, based on S_i(x), S_i(y), the position coordinate values (x_i1, y_i1) of the absolute vertices and boundary midpoints are calculated, where i = 1, 2, ..., 8 corresponds to the absolute vertex corner points and boundary center corner points P′_left_up, P′_right_up, P′_left_down, P′_right_down, C′_up, C′_down, C′_left, C′_right, respectively.
Screening out the maximum value and the minimum value in the x-axis direction and the y-axis direction based on the initial coordinate values in the preset coordinate set; and determining correction coordinate values of at least four corresponding absolute corner points according to the maximum value and the minimum value in the x-axis direction and the y-axis direction. In this embodiment, the result of determining the coordinates of each corner point and the corrected coordinate values thereof is as follows:
The coordinates of the upper-left absolute vertex corner point and its correction coordinate values: (x_11, y_11) = (min(S_1(x)), min(S_1(y)));

The coordinates of the upper-right absolute vertex corner point and its correction coordinate values: (x_21, y_21) = (max(S_2(x)), min(S_2(y)));

The coordinates of the lower-left absolute vertex corner point and its correction coordinate values: (x_31, y_31) = (min(S_3(x)), max(S_3(y)));

The coordinates of the lower-right absolute vertex corner point and its correction coordinate values: (x_41, y_41) = (max(S_4(x)), max(S_4(y)));

The coordinates of the upper boundary absolute center point and its correction coordinate values: (x_51, y_51) = (x_50, min(S_5(y)));

The coordinates of the lower boundary absolute center point and its correction coordinate values: (x_61, y_61) = (x_60, max(S_6(y)));

The coordinates of the left boundary absolute center point and its correction coordinate values: (x_71, y_71) = (min(S_7(x)), y_70);

The coordinates of the right boundary absolute center point and its correction coordinate values: (x_81, y_81) = (max(S_8(x)), y_80).
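The eight correction rules can be sketched as follows; the dictionary packaging, the function name, and the way the fixed midpoint coordinates are passed in are assumptions made for illustration:

```python
def corrected_corners(S, mids):
    # S maps each target corner name to (xs, ys): the coordinate lists of
    # designated-value pixels collected near that corner. mids carries the
    # fixed coordinates (x_50, x_60, y_70, y_80) of the centre corner points,
    # which keep one coordinate unchanged by construction.
    x_up, x_down, y_left, y_right = mids
    return {
        "left_up": (min(S["left_up"][0]), min(S["left_up"][1])),
        "right_up": (max(S["right_up"][0]), min(S["right_up"][1])),
        "left_down": (min(S["left_down"][0]), max(S["left_down"][1])),
        "right_down": (max(S["right_down"][0]), max(S["right_down"][1])),
        "up": (x_up, min(S["up"][1])),
        "down": (x_down, max(S["down"][1])),
        "left": (min(S["left"][0]), y_left),
        "right": (max(S["right"][0]), y_right),
    }
```

Each vertex is pushed outward (min or max along each axis) so the corrected rectangle encloses all designated-value pixels near that corner.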
Thus, the coordinates of each absolute vertex can be obtained by processing according to the above flow. Continuing with fig. 3, P1 in fig. 3 denotes the upper-right absolute corner point of the reference luminance image, and C1 denotes the right boundary absolute center point of the reference luminance image.
Further, step S203 is performed: and determining a perspective transformation matrix based on the initial coordinate values of the at least four target corner points and the corrected coordinate values of the at least four absolute corner points, and transforming the reference brightness image based on the perspective transformation matrix to obtain the reference brightness image subjected to perspective transformation.
Specifically, a plurality of initial sub-regions are extracted from the reference luminance image based on the at least four target corner points; referring to fig. 6, fig. 6 is a schematic diagram of a first scene of a second embodiment of the method for correcting a display image according to the present invention. Dividing the reference luminance image into four initial regions I, II, III and IV based on each corner point, determining a union of the initial region I and the initial region III as a first initial sub-region, determining a union of the initial region II and the initial region IV as a second initial sub-region, determining a union of the initial region I and the initial region IV as a third initial sub-region, and determining a union of the initial region III and the initial region IV as a fourth initial sub-region.
Further, a plurality of basic sub-regions corresponding to the plurality of initial sub-regions are extracted from the region surrounded by the at least four absolute corner points; four basic regions of the reference luminance image subjected to perspective transformation are constructed based on the respective absolute corner points. In this embodiment, the reference luminance image subjected to perspective transformation has the same size as the reference luminance image; its width and height are marked as w and h respectively, and absolute vertex corner points of the perspective-transformed reference luminance image are constructed at its vertex corners, where the upper-left, upper-right, lower-left and lower-right absolute vertices can be expressed as (b, b), (w - b, b), (b, h - b) and (w - b, h - b) respectively, b being a fixed value. The rectangle surrounded by the absolute vertex corner points represents the region corresponding to the display area of the reference luminance image after perspective transformation, so that the irregular display area in the reference luminance image can be mapped to a regular rectangular area.
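The constructed target rectangle can be sketched in a few lines; the function name and dictionary keys are illustrative:

```python
def target_rectangle(w, h, b):
    # Absolute vertex corners of the perspective-corrected image:
    # a rectangle inset by the fixed margin b from the image border.
    return {
        "left_up": (b, b),
        "right_up": (w - b, b),
        "left_down": (b, h - b),
        "right_down": (w - b, h - b),
    }
```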
In this embodiment, the absolute corner points include absolute vertex corner points and absolute boundary center corner points. The reference luminance image subjected to perspective transformation is divided into four basic regions based on the connecting line of the upper-boundary and lower-boundary absolute center points and the connecting line of the left-boundary and right-boundary absolute center points. Specifically, as shown in fig. 6, the four basic regions are I′, II′, III′ and IV′. Further, based on the divided basic regions, a plurality of basic sub-regions corresponding to the plurality of initial sub-regions are extracted. Continuing with fig. 6, based on the four basic regions I′, II′, III′ and IV′, the union of basic regions I′ and III′ is determined as the first basic sub-region, the union of basic regions II′ and IV′ as the second basic sub-region, the union of basic regions I′ and IV′ as the third basic sub-region, and the union of basic regions III′ and IV′ as the fourth basic sub-region. It will be appreciated that a greater or lesser number of sub-regions may be determined in other embodiments.
Further, first characteristic coordinate values of each initial sub-region are determined based on the initial coordinate values, and second characteristic coordinate values of each basic sub-region are determined based on the correction coordinate values. Generally, the characteristic coordinate values of the 4 vertex corners of each sub-region need to be determined. For example, the first characteristic coordinate values of the first initial sub-region are (x_min, y_min), (x_mid, y_min), (x_min, y_max), (x_mid, y_max). For another example, the second characteristic coordinate values of the first basic sub-region are (min(S_1(x)), min(S_1(y))), (x_50, min(S_5(y))), (min(S_3(x)), max(S_3(y))), (x_60, max(S_6(y))).
A plurality of partition perspective transformation matrices are then determined based on the first characteristic coordinate values and their corresponding second characteristic coordinate values. It can be understood that, since there are slight differences in coordinate values between the plurality of initial sub-regions extracted from the reference luminance image and the plurality of basic sub-regions extracted from the constructed reference luminance image subjected to perspective transformation, the corresponding partition perspective transformation matrices also differ slightly. In this embodiment, the first partition perspective transformation matrix of the first initial sub-region and the first basic sub-region may be represented as H1; the second partition perspective transformation matrix of the second initial sub-region and the second basic sub-region as H2; the third partition perspective transformation matrix of the third initial sub-region and the third basic sub-region as H3; and the fourth partition perspective transformation matrix of the fourth initial sub-region and the fourth basic sub-region as H4.
The corresponding initial sub-regions are then transformed based on the respective partition perspective transformation matrices, and the reference luminance image subjected to perspective transformation is obtained after the transformation of each initial sub-region is finished. In this embodiment, the transformation order may be set in advance, for example performing the perspective transformations sequentially according to H1, H2, H3 and H4 to obtain, partition by partition, the perspective-transformed reference luminance image. In addition, a whole-region perspective transformation matrix H5 between the reference luminance image and the reference luminance image subjected to perspective transformation processing can be obtained, and after the partition perspective transformations are finished, a whole-region perspective transformation is performed to obtain the corresponding whole-region perspective-transformed reference luminance image. It is understood that the reference luminance image subjected to perspective transformation processing is obtained after the transformation of each initial sub-region is completed.
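Each partition matrix H_i can be computed from its four point correspondences by solving the standard eight-unknown perspective transformation system. A pure-Python sketch with no external libraries (function names assumed; h33 is fixed to 1):

```python
def perspective_matrix(src, dst):
    # Build and solve the standard 8x8 linear system for the 3x3 homography
    # H (with h33 = 1) mapping four src points onto four dst points.
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    n = 8
    for col in range(n):  # Gaussian elimination with partial pivoting
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    h = [0.0] * n
    for r in range(n - 1, -1, -1):  # back substitution
        s = b[r] - sum(A[r][c] * h[c] for c in range(r + 1, n))
        h[r] = s / A[r][r]
    h.append(1.0)
    return [h[0:3], h[3:6], h[6:9]]

def warp_point(H, x, y):
    # Apply H to (x, y) in homogeneous coordinates.
    d = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / d,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / d)
```

Running this once per sub-region with that region's first and second characteristic coordinate values yields the H1 through H4 described above; warping every pixel of the sub-region through its matrix produces the partition-wise corrected image.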
In this embodiment, by using the above scheme, at least four corner points in the reference luminance image are detected, and the at least four corner points are filtered according to a preset flow to obtain at least four target corner points; determining correction coordinate values of at least four absolute corner points according to the initial coordinate values of the at least four target corner points and pixels of the at least four target corner points in the reference brightness image; and determining a perspective transformation matrix based on the initial coordinate values of the at least four target corner points and the corrected coordinate values of the at least four absolute corner points, and transforming the reference brightness image based on the perspective transformation matrix to obtain the reference brightness image subjected to perspective transformation. Therefore, the accuracy of the reference luminance image subjected to perspective conversion processing is improved through the partition perspective correction, and the display effect of the image is improved.
In addition, the embodiment also provides a correction device for the display image. Referring to fig. 7, fig. 7 is a functional block diagram of a correction device for displaying an image according to a first embodiment of the present invention.
In this embodiment, the display image correction device is a virtual device stored in the memory 1005 of the display image correction apparatus shown in fig. 1, so as to implement all functions of the display image correction program: extracting a reference luminance image from an image to be detected; extracting feature points of the reference luminance image; performing perspective transformation processing on the reference luminance image based on the feature points; and performing correction processing on the reference luminance image subjected to perspective transformation processing to obtain a corresponding corrected image.
Specifically, the correction device for a display image includes:
the first extraction module 10 is configured to extract a reference luminance image from an image to be detected;
a second extraction module 20, configured to extract feature points of the reference luminance image;
a perspective transformation module 30, configured to perform perspective transformation processing on the reference luminance image based on the feature point;
and the correcting module 40 is configured to perform correction processing on the reference luminance image subjected to perspective transformation processing to obtain a corresponding corrected image.
Further, the perspective transformation module is further configured to:
detecting at least four corner points in the reference brightness image, and filtering the at least four corner points according to a preset flow to obtain at least four target corner points;
determining correction coordinate values of at least four absolute corner points according to the initial coordinate values of the at least four target corner points and the pixel values of the at least four target corner points in the reference brightness image;
and determining a perspective transformation matrix based on the initial coordinate values of the at least four target corner points and the corrected coordinate values of the at least four absolute corner points, and transforming the reference brightness image based on the perspective transformation matrix to obtain the reference brightness image subjected to perspective transformation.
Further, the perspective transformation module is further configured to:
extracting a plurality of initial sub-regions from the reference luminance image based on the at least four target corner points;
extracting a plurality of basic sub-regions from a region surrounded by the at least four absolute corner points, wherein the basic sub-regions correspond to the initial sub-regions;
determining a first characteristic coordinate value of each initial subregion based on the initial coordinate values of at least four target corner points, and determining a second characteristic coordinate value of each basic subregion based on the correction coordinate values of at least four absolute corner points;
respectively substituting the first characteristic coordinate values and the second characteristic coordinate values corresponding to the first characteristic coordinate values into a perspective transformation formula to determine a plurality of partition perspective transformation matrixes;
and transforming the corresponding initial sub-regions based on the partition perspective transformation matrixes respectively, and obtaining the reference brightness image subjected to perspective transformation after the transformation of each initial sub-region is finished.
Further, the perspective transformation module is further configured to:
acquiring pixel values of all target corner points in the reference brightness image, and marking coordinate values of the target corner points with the pixel values as preset values as initial coordinate values;
determining the maximum value and the minimum value in the x-axis direction and the maximum value and the minimum value in the y-axis direction according to the initial coordinate values;
and determining correction coordinate values of at least four absolute corner points according to the maximum value and the minimum value in the x-axis direction and the maximum value and the minimum value in the y-axis direction.
Further, the first extraction module is further configured to:
extracting a reference brightness image displayed by a preset pixel order from an image to be detected;
and marking the pixel value of each point in the reference brightness image as a preset pixel value or 0 to obtain the reference brightness image.
Further, the correction module is further configured to:
detecting and acquiring the outline of the reference brightness image subjected to perspective transformation;
and performing correction processing based on the outline of the reference brightness image subjected to perspective transformation processing.
Further, the correction module is further configured to:
and carrying out distortion correction on the reference brightness image subjected to perspective transformation processing and falling in the minimum bounding rectangle to obtain a corrected image for displaying in a target display area.
Further, the correction module is further configured to:
and correcting the outline of the reference brightness image subjected to perspective transformation processing by a correction factor.
In addition, an embodiment of the present invention further provides a computer-readable storage medium, where a display image correction program is stored on the computer-readable storage medium, and when the display image correction program is executed by a processor, the steps of the display image correction method are implemented as described above, which are not described herein again.
Compared with the prior art, the method and apparatus for correcting a display image and the computer-readable storage medium provided by the present invention comprise: extracting a reference luminance image from an image to be detected and acquiring a binarized image of the reference luminance image; extracting feature points of the reference luminance image, and performing perspective transformation processing on the reference luminance image based on the feature points to obtain a reference luminance image subjected to perspective transformation processing; and performing correction processing on the reference luminance image subjected to perspective transformation processing to obtain a corrected image for display in a target display area. Therefore, binarization, perspective transformation and correction processing are performed on the image to be detected to obtain an image capable of completely filling the display panel, improving the display effect of the image.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other like elements in a process, method, article, or system that comprises the element.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a readable storage medium (such as ROM/RAM, magnetic disk, optical disk) as described above and includes several instructions for causing a terminal device to execute the method according to the embodiments of the present invention.
The above description is only for the preferred embodiment of the present invention and is not intended to limit the scope of the present invention, and all equivalent structures or flow transformations made by the present specification and drawings, or applied directly or indirectly to other related arts, are included in the scope of the present invention.

Claims (10)

1. A method of correcting a displayed image, the method comprising:
extracting a reference brightness image from an image to be detected;
extracting characteristic points of the reference brightness image;
performing perspective transformation processing on the reference luminance image based on the feature point;
and correcting the reference luminance image subjected to perspective transformation processing to obtain a corresponding corrected image.
2. The method according to claim 1, wherein the subjecting the reference luminance image to perspective transformation processing based on the feature point includes:
detecting at least four corner points in the reference brightness image, and filtering the at least four corner points according to a preset flow to obtain at least four target corner points;
determining correction coordinate values of at least four absolute corner points according to the initial coordinate values of the at least four target corner points and the pixel values of the at least four target corner points in the reference brightness image;
and determining a perspective transformation matrix based on the initial coordinate values of the at least four target corner points and the corrected coordinate values of the at least four absolute corner points, and transforming the reference brightness image based on the perspective transformation matrix to obtain the reference brightness image subjected to perspective transformation.
3. The method according to claim 2, wherein the determining a perspective transformation matrix based on the initial coordinate values of the at least four target corner points and the corrected coordinate values of the at least four absolute corner points, and transforming the reference luminance image based on the perspective transformation matrix to obtain a reference luminance image subjected to perspective transformation processing comprises:
extracting a plurality of initial sub-regions from the reference luminance image based on the at least four target corner points;
extracting a plurality of basic sub-regions from a region surrounded by the at least four absolute corner points, wherein the basic sub-regions correspond to the initial sub-regions;
determining a first characteristic coordinate value of each initial subregion based on the initial coordinate values of at least four target corner points, and determining a second characteristic coordinate value of each basic subregion based on the correction coordinate values of at least four absolute corner points;
respectively substituting the first characteristic coordinate values and the second characteristic coordinate values corresponding to the first characteristic coordinate values into a perspective transformation formula to determine a plurality of partition perspective transformation matrixes;
and transforming the corresponding initial sub-regions based on the partition perspective transformation matrixes respectively, and obtaining the reference brightness image subjected to perspective transformation after the transformation of each initial sub-region is finished.
4. The method according to claim 2, wherein determining the corrected coordinate values of the at least four absolute corner points according to the initial coordinate values of the at least four target corner points and their pixel values in the reference luminance image comprises:
acquiring the pixel value of each target corner point in the reference luminance image, and recording the coordinate values of the target corner points whose pixel value equals a preset value as the initial coordinate values;
determining the maximum and minimum values in the x-axis direction and the maximum and minimum values in the y-axis direction from the initial coordinate values;
and determining the corrected coordinate values of the at least four absolute corner points according to the maximum and minimum values in the x-axis direction and in the y-axis direction.
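The axis-extrema step of claim 4 amounts to computing an axis-aligned bounding box of the marked corner points. A minimal sketch (the function name and the clockwise corner ordering are illustrative assumptions, not specified by the patent):

```python
def absolute_corner_corrections(initial_points):
    """Corrected coordinate values of the four absolute corner points:
    the extrema of the initial coordinates along the x- and y-axes,
    combined into an axis-aligned rectangle."""
    xs = [x for x, _ in initial_points]
    ys = [y for _, y in initial_points]
    x_min, x_max = min(xs), max(xs)
    y_min, y_max = min(ys), max(ys)
    # Corners listed clockwise from the top-left (image coordinates).
    return [(x_min, y_min), (x_max, y_min), (x_max, y_max), (x_min, y_max)]
```

These four absolute corners then serve as the destination points for the perspective transformation of claims 2 and 3.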
5. The method according to claim 1, wherein extracting the reference luminance image of the image to be detected comprises:
extracting, from the image to be detected, a reference luminance image displayed at a preset pixel level;
and after the reference luminance image of the image to be detected is extracted, the method further comprises:
marking the pixel value of each point in the reference luminance image as either a preset pixel value or 0, to obtain a binary image of the reference luminance image.
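The binarization in claim 5 maps every pixel to either the preset pixel value or 0. A toy sketch over a nested-list grayscale image (the threshold and default preset value are assumptions for illustration; the claim leaves the marking criterion unspecified):

```python
def binarize(luminance, preset=255, threshold=128):
    """Mark each pixel as the preset pixel value (at or above the assumed
    threshold) or 0, producing the binary image of the reference
    luminance image."""
    return [[preset if px >= threshold else 0 for px in row]
            for row in luminance]
```

The resulting binary image is what makes the later corner-point marking of claim 4 (pixel value equal to a preset value) well defined.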
6. The method according to any one of claims 1 to 5, wherein performing the correction processing on the perspective-transformed reference luminance image comprises:
detecting and acquiring the contour of the perspective-transformed reference luminance image;
and performing the correction processing based on the contour of the perspective-transformed reference luminance image.
7. The method according to claim 6, wherein performing the correction processing based on the contour of the perspective-transformed reference luminance image comprises:
performing distortion correction on the portion of the perspective-transformed reference luminance image falling within the minimum bounding rectangle, to obtain a corrected image for display in a target display area.
8. The method according to claim 6, wherein performing the correction processing based on the contour of the perspective-transformed reference luminance image comprises:
correcting the contour of the perspective-transformed reference luminance image by a correction factor.
9. A correction device for a display image, comprising a processor, a memory, and a correction program for a display image stored in the memory, wherein the correction program, when executed by the processor, implements the steps of the method for correcting a display image according to any one of claims 1 to 7.
10. A computer-readable storage medium storing a correction program for a display image, wherein the correction program, when executed by a processor, implements the steps of the method for correcting a display image according to any one of claims 1 to 7.
CN202011167154.1A 2020-10-27 2020-10-27 Method and apparatus for correcting display image, and computer-readable storage medium Pending CN112308794A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202011167154.1A CN112308794A (en) 2020-10-27 2020-10-27 Method and apparatus for correcting display image, and computer-readable storage medium
PCT/CN2021/124839 WO2022089263A1 (en) 2020-10-27 2021-10-20 Display image correction method and device, and computer-readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011167154.1A CN112308794A (en) 2020-10-27 2020-10-27 Method and apparatus for correcting display image, and computer-readable storage medium

Publications (1)

Publication Number Publication Date
CN112308794A 2021-02-02

Family

ID=74331105

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011167154.1A Pending CN112308794A (en) 2020-10-27 2020-10-27 Method and apparatus for correcting display image, and computer-readable storage medium

Country Status (2)

Country Link
CN (1) CN112308794A (en)
WO (1) WO2022089263A1 (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113539162A (en) * 2021-07-02 2021-10-22 深圳精智达技术股份有限公司 Image capturing method and device of display panel
WO2022089263A1 (en) * 2020-10-27 2022-05-05 深圳Tcl数字技术有限公司 Display image correction method and device, and computer-readable storage medium
CN114445825A (en) * 2022-02-07 2022-05-06 北京百度网讯科技有限公司 Character detection method and device, electronic equipment and storage medium
CN114927090A (en) * 2022-05-30 2022-08-19 卡莱特云科技股份有限公司 Method, device and system for sorting light points in special-shaped LED display screen

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117115276A (en) * 2023-01-12 2023-11-24 荣耀终端有限公司 Picture processing method, device and storage medium

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106203433A (en) * 2016-07-13 2016-12-07 西安电子科技大学 In a kind of vehicle monitoring image, car plate position automatically extracts and the method for perspective correction
US10573040B2 (en) * 2016-11-08 2020-02-25 Adobe Inc. Image modification using detected symmetry
CN107169494B (en) * 2017-06-01 2018-07-20 中国人民解放军国防科学技术大学 License plate image based on handheld terminal divides bearing calibration
CN110060200B (en) * 2019-03-18 2023-05-30 创新先进技术有限公司 Image perspective transformation method, device and equipment
CN110097054A (en) * 2019-04-29 2019-08-06 济南浪潮高新科技投资发展有限公司 A kind of text image method for correcting error based on image projection transformation
CN112308794A (en) * 2020-10-27 2021-02-02 深圳Tcl数字技术有限公司 Method and apparatus for correcting display image, and computer-readable storage medium


Also Published As

Publication number Publication date
WO2022089263A1 (en) 2022-05-05

Similar Documents

Publication Publication Date Title
CN112308794A (en) Method and apparatus for correcting display image, and computer-readable storage medium
CN108760767B (en) Large-size liquid crystal display defect detection method based on machine vision
US7792386B1 (en) Using difference kernels for image filtering
CN107507558B (en) Correction method of LED display screen
CN101102515B (en) Apparatus and method for correcting edge in an image
CN107993263B (en) Automatic calibration method for panoramic system, automobile, calibration device and storage medium
CN110223226B (en) Panoramic image splicing method and system
EP1638345A1 (en) Method for calculating display characteristic correction data, program for calculating display characteristic correction data, and device for calculating display characteristic correction data
JP2019008286A (en) Projection system and display image correction method
WO2001047285A1 (en) Method and apparatus for calibrating projector-camera system
CN110400278B (en) Full-automatic correction method, device and equipment for image color and geometric distortion
JP6750500B2 (en) Information processing apparatus and recognition support method
CN109803172B (en) Live video processing method and device and electronic equipment
CN110736610B (en) Method and device for measuring optical center deviation, storage medium and depth camera
WO2020013105A1 (en) Object detection device and object detection method for construcion machine
CN116912233B (en) Defect detection method, device, equipment and storage medium based on liquid crystal display screen
JP2019220887A (en) Image processing system, image processing method, and program
CN108629742B (en) True ortho image shadow detection and compensation method, device and storage medium
CN113658144B (en) Method, device, equipment and medium for determining geometric information of pavement diseases
CN115995208B (en) Lamp positioning method, correction method and device for spherical LED display screen
CN108734666B (en) Fisheye image correction method and device
CN112950485A (en) Color card, image color difference processing method and device, electronic equipment and storage medium
CN115953981A (en) Method for positioning special-shaped plane screen lamp points and method for acquiring brightness information
CN114727073B (en) Image projection method and device, readable storage medium and electronic equipment
CN108596981B (en) Aerial view angle re-projection method and device of image and portable terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination