WO2022089263A1 - Correction method and device for displayed image, and computer-readable storage medium - Google Patents

Correction method and device for displayed image, and computer-readable storage medium

Info

Publication number
WO2022089263A1
WO2022089263A1 · PCT/CN2021/124839 · CN2021124839W
Authority
WO
WIPO (PCT)
Prior art keywords
image
perspective transformation
corner points
correction
value
Prior art date
Application number
PCT/CN2021/124839
Other languages
English (en)
French (fr)
Inventor
杨剑锋
陈林
夏大学
Original Assignee
Shenzhen TCL Digital Technology Co., Ltd. (深圳TCL数字技术有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen TCL Digital Technology Co., Ltd. (深圳TCL数字技术有限公司)
Publication of WO2022089263A1

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T 5/92 — Dynamic range modification of images or parts thereof based on global image properties
    • G06N 3/045 — Combinations of neural networks
    • G06T 5/80 — Geometric correction
    • G06T 7/11 — Region-based segmentation
    • G06T 7/13 — Edge detection
    • G06T 7/136 — Segmentation; edge detection involving thresholding
    • G06T 2207/10024 — Color image
    • G06T 2207/20021 — Dividing image into blocks, subimages or windows
    • G06T 2207/20084 — Artificial neural networks [ANN]
    • G06T 2207/20112 — Image segmentation details
    • G06T 2207/20164 — Salient point detection; corner detection

Definitions

  • the present invention relates to the technical field of image processing, and in particular to a correction method and device for displayed images, and a computer-readable storage medium.
  • the display panel displays the display image captured by the camera
  • the display image shown on the display panel will be deformed to a certain extent due to the tilt of the display panel, deviation of the camera shooting angle, and distortion of the camera lens, which degrades the display effect.
  • the present invention provides a correction method, device and computer-readable storage medium for a displayed image, aiming at improving the display effect of the displayed image.
  • the present invention provides a correction method for a displayed image, the method comprising:
  • Correction processing is performed on the reference brightness image subjected to the perspective transformation processing to obtain a corresponding corrected image.
  • performing perspective transformation processing on the reference brightness image based on the feature points includes:
  • a perspective transformation matrix is determined based on the initial coordinate values of the at least four target corner points and the corrected coordinate values of the at least four absolute corner points, and the reference brightness image is transformed based on the perspective transformation matrix to obtain a perspective transformation processed image.
  • the perspective transformation matrix is determined based on the initial coordinate values of the at least four target corner points and the corrected coordinate values of the at least four absolute corner points, and the reference brightness image is transformed based on the perspective transformation matrix;
  • the steps of obtaining the perspective-transformed reference luminance image include:
  • the corresponding initial sub-regions are transformed based on the perspective transformation matrices of the respective partitions, and after the transformation of each initial sub-region is completed, a reference luminance image that has undergone perspective transformation processing is obtained.
  • the step of determining the corrected coordinate values of at least four absolute corner points according to the initial coordinate values of the at least four target corner points and their pixel values in the reference brightness image includes:
  • Correction coordinate values of at least four absolute corner points are determined according to the maximum value and the minimum value in the x-axis direction and the maximum value and the minimum value in the y-axis direction.
  • the extraction of the reference brightness image of the image to be detected further includes:
  • the pixel value of each point in the reference luminance image is marked as a preset pixel value or 0 to obtain a binarized image of the reference luminance image.
  • the performing correction processing on the reference brightness image subjected to the perspective transformation process includes:
  • Correction processing is performed based on the outline of the reference luminance image subjected to the perspective transformation process.
  • the performing correction processing based on the contour of the reference brightness image subjected to the perspective transformation process includes:
  • Distortion correction is performed on the reference luminance image that has undergone the perspective transformation process and falls within the minimum circumscribed rectangle to obtain a corrected image for display in the target display area.
  • the performing correction processing based on the contour of the reference brightness image subjected to the perspective transformation process includes:
  • the contour of the reference luminance image subjected to the perspective transformation process is corrected by a correction factor.
  • the present invention also provides a correction device for a displayed image, which includes a processor, a memory, and a correction program for the displayed image stored in the memory.
  • the present invention also provides a computer-readable storage medium on which a correction program for a displayed image is stored; when the program is run by the processor, the steps of the correction method for the displayed image described above are implemented.
  • the present invention provides a correction method, device and computer-readable storage medium for a displayed image.
  • the method includes: extracting a reference luminance image from an image to be detected; extracting feature points of the reference luminance image and performing perspective transformation processing on the reference brightness image based on the feature points; and performing correction processing on the perspective-transformed reference brightness image to obtain a corresponding corrected image. Perspective transformation and correction are thus performed on the image to be detected to obtain an image that completely fills the display panel, improving the display effect.
  • FIG. 1 is a schematic diagram of the hardware structure of a correction device for displaying images involved in various embodiments of the present invention
  • FIG. 2 is a schematic flowchart of a first embodiment of a method for correcting a displayed image according to the present invention
  • FIG. 3 is a schematic diagram of a first scene of the first embodiment of the method for correcting a displayed image according to the present invention
  • FIG. 4 is a schematic diagram of a second scene of the first embodiment of the method for correcting a displayed image according to the present invention.
  • FIG. 5 is a schematic flowchart of a second embodiment of a method for correcting a displayed image according to the present invention.
  • FIG. 6 is a schematic diagram of a first scene of a second embodiment of the method for correcting a displayed image according to the present invention.
  • FIG. 7 is a schematic diagram of functional modules of the first embodiment of the display image correction apparatus of the present invention.
  • the correction device for a displayed image mainly involved in the embodiments of the present invention refers to a device capable of establishing a network connection; it may be a server, a cloud platform, or the like.
  • FIG. 1 is a schematic diagram of a hardware structure of a correction device for displaying an image according to various embodiments of the present invention.
  • a correction device for displaying an image may include a processor 1001 (for example, a central processing unit, Central Processing Unit, CPU), a communication bus 1002, an input port 1003, an output port 1004, and a memory 1005.
  • the communication bus 1002 is used to realize connection and communication between these components; the input port 1003 is used for data input; the output port 1004 is used for data output; and the memory 1005 can be a high-speed RAM or a non-volatile memory.
  • the memory 1005 may optionally also be a storage device independent of the aforementioned processor 1001 .
  • the hardware structure shown in FIG. 1 does not constitute a limitation of the present invention; the device may include more or fewer components than shown, combine some components, or arrange the components differently.
  • the memory 1005 as a readable storage medium in FIG. 1 may include an operating system, a network communication module, an application program module, and a correction program for displaying images.
  • the network communication module is mainly used to connect to the server and perform data communication with it; the processor 1001 can call the correction program for the displayed image stored in the memory 1005 and execute the correction method for the displayed image provided by the embodiments of the present invention.
  • An embodiment of the present invention provides a correction method for a displayed image.
  • FIG. 2 is a schematic flowchart of a first embodiment of a method for correcting a displayed image according to the present invention.
  • the display image correction method is applied to a display image correction device, and the method includes:
  • Step S101: extracting a reference brightness image from the image to be detected;
  • Step S102: extracting the feature points of the reference brightness image;
  • Step S103: performing perspective transformation processing on the reference brightness image based on the feature points;
  • Step S104: performing correction processing on the reference brightness image subjected to the perspective transformation processing to obtain a corresponding corrected image.
  • the display image can be displayed on at least an LCD (Liquid Crystal Display) or a Mini LED (Light Emitting Diode) display screen.
  • the display image involved in step S101 may be a color image in the RGB (red, green, blue) color mode, and the display image has different gray scales.
  • grayscale refers to the range of brightness levels between the darkest black and the brightest white of a display; it expresses light-dark contrast and the transition between black and white. The finer the gray levels, the clearer the image and the more natural the transitions.
  • in practice, 32-level and 256-level grayscales are mainly used.
  • the steps include: extracting a reference brightness image displayed at a preset pixel level from the image to be detected;
  • the pixel value of each point in the reference luminance image is marked as a preset pixel value or 0 to obtain a binarized image of the reference luminance image.
  • the imaging data of the industrial camera is converted into an image to be detected in an image data format.
  • a reference luminance image displayed at a specified pixel grayscale is extracted from the image to be detected. The value of each pixel is obtained; pixels whose value is greater than the preset pixel value are marked with the specified pixel value, and pixels whose value is less than or equal to the preset pixel value are marked as 0.
  • the pixel value of each point in the reference luminance image is thus marked as the specified pixel value or 0, with the binarization threshold set to 0.25 × max(I), where max(I) is the maximum pixel value.
  • in this way, the binarized image of the reference luminance image can be obtained.
  • the specified pixel value may be a grayscale value such as 255 or 32, and the corresponding max(I) is 255 or 32.
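The adaptive binarization described above can be sketched as follows (a minimal NumPy sketch; the function name `binarize` and the `peak` parameter are illustrative, not from the patent):

```python
import numpy as np

def binarize(gray, peak=255):
    # threshold at 0.25 * max(I), as in the embodiment above;
    # pixels above the threshold get the specified pixel value, others 0
    thresh = 0.25 * gray.max()
    return np.where(gray > thresh, peak, 0).astype(np.uint8)
```

For a 32-level image, `peak` and `max(I)` would be 32 instead of 255, per the grayscales mentioned above.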
  • the display area of the reference luminance image is roughly positioned to obtain a more accurate binarized image of the reference luminance image after coarse positioning.
  • the binarized contour of the reference luminance image is detected, and the contour area of the binarized contour is calculated.
  • the binarized contour can be extracted based on a TensorFlow convolutional neural network.
  • binarized contours whose contour area exceeds a preset threshold are marked as valid binarized contours.
  • a minimum circumscribed rectangle is extracted from the effective binarization outline, and then expanded based on the coordinates of the minimum circumscribed rectangle to obtain a rough-positioned reference luminance image.
  • the minimum bounding rectangle refers to the smallest axis-aligned rectangle enclosing a two-dimensional shape (such as points, lines, or polygons) expressed in two-dimensional coordinates, i.e. the rectangle delimited by the maximum and minimum abscissas and the maximum and minimum ordinates of the vertices of the given shape.
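The axis-aligned minimum bounding rectangle just defined can be sketched in a few lines (the helper name `bounding_rect` and the tuple layout are assumptions for illustration):

```python
def bounding_rect(points):
    # axis-aligned minimum bounding rectangle of 2D points:
    # (min x, min y, max x, max y), origin at the image's top-left
    xs = [x for x, _ in points]
    ys = [y for _, y in points]
    return min(xs), min(ys), max(xs), max(ys)
```

The rough-positioned outline (dotted frame c in Figure 3) would then be obtained by expanding this rectangle outward by the preset value.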
  • FIG. 3 is a schematic diagram of the first scene of the first embodiment of the display image correction method according to the present invention.
  • the effective binarization contour is represented as a solid line frame a.
  • the minimum circumscribed rectangle extracted from it is the dotted rectangle frame b in Figure 3;
  • the outline after rough positioning is the dotted frame c in Figure 3
  • the size of the preset value is related to the minimum circumscribed rectangle and the outline after the rough positioning,
  • the size of the preset value may be d as shown in FIG. 3 .
  • a Mini LED display is composed of numerous independent, discrete Mini LED lamp beads.
  • unlike an LCD, whose entire area is lit uniformly, the Mini LED display consists of locally discrete lamp beads, and together these beads form a display that emits light as a whole. Therefore, morphological processing is required after the adaptive binarization.
  • specifically, a dilation convolution kernel is first constructed; its size can be set to (20, 20) and its structuring element can be elliptical. Dilation is then applied to expand the discrete lamp-bead regions into the continuous display area corresponding to the display image.
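A naive sketch of the dilation step follows. Note the assumptions: a rectangular structuring element is used here for brevity, whereas the patent specifies a (20, 20) elliptical one (in practice one would use OpenCV's `cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (20, 20))` with `cv2.dilate`):

```python
import numpy as np

def dilate(binary, kh, kw):
    # naive morphological dilation with a kh x kw rectangular
    # structuring element: each output pixel takes the maximum
    # of its neighborhood, merging discrete lamp-bead regions
    h, w = binary.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(binary, ((ph, ph), (pw, pw)))
    out = np.zeros_like(binary)
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + kh, j:j + kw].max()
    return out
```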
  • steps S102 to S103 are performed: extracting feature points of the reference luminance image; and performing perspective transformation processing on the reference luminance image based on the feature points.
  • the reference luminance image in the steps S102-S103 refers to a binarized image of the reference luminance image.
  • perspective transformation uses the condition that the perspective center, image point and target point are collinear; according to the law of perspective rotation, the bearing surface (perspective plane) is rotated about the trace line (perspective axis) by a certain angle, destroying the original projection beam while keeping the projected geometry on the bearing surface unchanged.
  • [x0, y0, z0] represents the initial coordinate value of the reference luminance image;
  • [x1, y1, z1] represents the corrected coordinate value of the corrected preliminary corrected image;
  • a perspective transformation matrix can be determined according to the coordinate values of a plurality of corresponding points before and after correction, and then the reference brightness image to be transformed can be transformed based on the perspective transformation matrix.
  • a corner detection method may be used to extract multiple corner points from the reference brightness image, and multiple feature points are then selected from these corner points; the feature points may be chosen from eight candidates, namely the four corner points of the reference brightness image and the midpoints of its four edges.
  • the coordinate values of the corrected preliminary corrected image are determined based on the multiple feature points. It can be understood that, ideally, the display image displayed on the display panel is a rectangular image of a certain size, and the rectangular image is basically the same size as the reference brightness image with only slight differences.
  • the coordinate values of the plurality of feature points of the luminance image determine the coordinate values of the corrected preliminary corrected image.
  • the maximum x value x_max, the minimum x value x_min, the maximum y value y_max, and the minimum y value y_min among the coordinate values of the feature points are obtained. The coordinate values of the rectified preliminary image are then determined as (x_min, y_max), (x_min, y_min), (x_max, y_min), (x_max, y_max).
  • a perspective transformation matrix can be determined based on the coordinate values of a plurality of feature points of the reference brightness image and the coordinate values of the corrected preliminary corrected image, and the reference brightness image to be transformed can be processed based on the perspective transformation matrix. Transform to obtain a corrected image.
  • a perspective transformation matrix can be determined based on the coordinate values of at least four corner points of the reference brightness image and the corresponding corrected coordinate values after perspective transformation; the reference brightness image to be transformed is then transformed based on this matrix to obtain the perspective-transformed reference brightness image.
  • the reference luminance image subjected to perspective transformation processing is a perspective projection image of a binarized image of the reference luminance image.
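Determining the perspective transformation matrix from four point correspondences amounts to solving the standard linear system with eight unknowns; a minimal NumPy sketch (function names are illustrative; in practice `cv2.getPerspectiveTransform` and `cv2.warpPerspective` do this):

```python
import numpy as np

def perspective_matrix(src, dst):
    # src, dst: four (x, y) correspondences; solves for the eight
    # homography unknowns (the ninth entry is fixed to 1)
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)

def warp_point(H, x, y):
    # apply the homography to one point (homogeneous divide)
    u, v, w = H @ np.array([x, y, 1.0])
    return u / w, v / w
```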
  • step S104 is performed: performing correction processing on the reference brightness image subjected to the perspective transformation process to obtain a corresponding corrected image.
  • the contour of the reference luminance image subjected to the perspective transformation processing is detected and acquired, and correction processing is performed based on the contour of the reference luminance image subjected to the perspective transformation processing.
  • the extraction of contours is implemented based on a TensorFlow convolutional neural network.
  • the contour area of the contour is calculated.
  • contours whose area exceeds a preset threshold are marked as valid contours; the minimum circumscribed rectangle of the valid contour is then extracted to obtain the perspective-transformed reference brightness image falling within that rectangle.
  • the contour of the perspective-transformed reference luminance image can also be corrected by a correction factor to obtain a perspective-transformed reference luminance image.
  • the upper left vertex and the lower right vertex of the outline are determined as (x1, y1) and (x2, y2), and it is determined whether the pixels on each edge line of the horizontally positioned rectangle are equal to a specific pixel value;
  • the specific pixel value in this embodiment may be 255.
  • the edge lines of the horizontally positioned rectangle include an upper edge line, a lower edge line, a left line and a right line.
  • the coordinate value of the upper edge line is expressed as I_up([x1, x2], d1), with initial value d1 = y1. It is determined whether any pixel on the upper edge line has the specific pixel value: if one or more such pixels exist, the correction factor of the upper edge line is determined as d1 - y1; otherwise d1 is updated to d1 + 1 (the line moves inward by one row) and the check repeats.
  • similarly, for the lower edge line (initial value d2 = y2): if one or more pixels with the specific pixel value exist on the lower edge line, the correction factor of the lower edge line is determined as y2 - d2; otherwise d2 is updated to d2 - 1.
  • the coordinate value of the left line is represented as I_left(d3, [y1, y2]), with initial value d3 = x1. If one or more pixels with the specific pixel value exist on the left line, the correction factor of the left line is determined as d3 - x1; otherwise d3 is updated to d3 + 1.
  • the coordinate value of the right line is represented as I_right(d4, [y1, y2]), with initial value d4 = x2. If one or more pixels with the specific pixel value exist on the right line, the correction factor of the right line is determined as x2 - d4; otherwise d4 is updated to d4 - 1.
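The scan of the upper edge line can be sketched as follows. This is an interpretation of the steps above (the line starts at y1 and moves inward one row at a time until a pixel with the specific value is found); the helper name and the `target` default of 255 are assumptions:

```python
def upper_edge_factor(img, x1, x2, y1, target=255):
    # I_up([x1, x2], d1) with initial d1 = y1: advance d1 inward until
    # some pixel on the line equals `target`, then return d1 - y1
    d1 = y1
    while d1 < len(img):
        if any(img[d1][x] == target for x in range(x1, x2 + 1)):
            return d1 - y1
        d1 += 1
    return None  # no lit pixel found on any candidate line
```

The lower, left and right lines would scan analogously in their respective directions.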
  • the captured image may exhibit radial distortion in the form of barrel distortion.
  • correction is performed, based on a division model, on the reference brightness image that has undergone perspective transformation and falls within the minimum circumscribed rectangle. Specifically, each edge contour is extracted using a fast arc extraction method to obtain the arc corresponding to each edge, and the parameters of each arc are calculated; a pre-selected region for the distortion center is then delineated, centered on the perspective-transformed reference luminance image.
  • taking each pixel in the pre-selected region of the distortion center as a candidate distortion center, the distortion coefficient of each arc is calculated.
  • the value-concentration intervals of the distortion coefficients obtained for each candidate distortion center are determined, and the number of distortion coefficients falling in each interval is counted; the candidate whose coefficients concentrate most tightly then determines the distortion center.
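The division model referred to above is the standard single-parameter radial model (with λ negative for barrel distortion); a sketch of undistorting one point under that assumption (names are illustrative):

```python
def undistort_point(x, y, cx, cy, lam):
    # single-parameter division model: a distorted point at radius r_d
    # from the center (cx, cy) maps to radius r_d / (1 + lam * r_d^2)
    dx, dy = x - cx, y - cy
    s = 1.0 / (1.0 + lam * (dx * dx + dy * dy))
    return cx + dx * s, cy + dy * s
```

With λ = 0 the mapping is the identity, and the distortion center itself is always a fixed point.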
  • FIG. 4 is a schematic diagram of the second scene of the first embodiment of the display image correction method of the present invention.
  • the reference brightness image obtained after the perspective transformation processing (right side of FIG. 4) is not a complete rectangle; after correction, a complete rectangle is obtained (left side of FIG. 4).
  • through the above solution, a reference luminance image is extracted from the image to be detected; feature points of the reference luminance image are extracted, and perspective transformation processing is performed on the reference luminance image based on the feature points; the perspective-transformed reference brightness image is then corrected to obtain a corresponding corrected image. Perspective transformation and correction are thus performed on the image to be detected to obtain an image that completely fills the display panel, improving the display effect.
  • a second embodiment of the present invention proposes a correction method for a displayed image.
  • in this embodiment, perspective transformation processing is performed on the reference brightness image based on at least four feature points, and the steps of obtaining the perspective-transformed reference brightness image include:
  • Step S201 Detect at least four corner points in the reference brightness image, filter the at least four corner points according to a preset process, and obtain at least four target corner points;
  • Step S202 Determine the corrected coordinate values of at least four absolute corner points according to the initial coordinate values of the at least four target corner points and their pixel values in the reference brightness image;
  • Step S203: Determine a perspective transformation matrix based on the initial coordinate values of the at least four target corner points and the corrected coordinate values of the at least four absolute corner points, and transform the reference brightness image based on the matrix to obtain the perspective-transformed reference brightness image.
  • the Harris corner extraction algorithm, also known as the Plessey algorithm, was developed by Chris Harris and Mike Stephens on the basis of the H. Moravec algorithm and extracts corners through an autocorrelation matrix. The operator is inspired by the autocorrelation function in signal processing and defines a matrix M associated with the autocorrelation function; the eigenvalues of M are the first-order curvatures of the autocorrelation function. If both curvature values are high, the point is considered a corner feature.
  • because the number of detected corner points is relatively large and perspective transformation generally requires only 4 points, the corner points must be filtered to obtain the target corner points. In this embodiment, filtering is performed based on the coordinate values of each corner point.
  • the distance calculation formula is as follows (note that in the image coordinate system, the upper left corner of the image is the origin):
  • the target corner includes a target vertex and a central corner.
  • the target vertices include the upper left vertex, the upper right vertex, the lower left vertex, and the lower right vertex;
  • the central corner points include the upper boundary central corner point, the lower boundary central corner point, the left boundary central corner point, and the right boundary central corner point.
  • P represents the upper right corner point of the reference luminance image
  • C represents the right border center point of the reference luminance image.
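One way to realize the distance-based filtering is to keep, for each reference position (an image corner or an edge midpoint), the detected corner nearest to it. This is a sketch under that assumption; the patent's exact distance formula is not reproduced in the text above, so plain Euclidean distance is used:

```python
import math

def nearest_corner(corners, ref):
    # pick the detected corner closest to a reference position ref,
    # e.g. an image corner or an edge midpoint (origin at top-left)
    return min(corners, key=lambda p: math.dist(p, ref))
```

For example, the target corner for P (the upper right corner) would be the detected corner nearest to the image's upper right corner (w, 0).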
  • in step S202, the corrected coordinate values of at least four absolute corner points are determined according to the initial coordinate values of the at least four target corner points and their pixel values in the reference luminance image.
  • the absolute corner points are characteristic points of the reference brightness image subjected to the perspective transformation process, and the absolute corner points correspond to each target corner point in the reference brightness image.
  • step S202 includes: acquiring the pixel value of each target corner point in the reference brightness image and marking the coordinate value of each target corner point whose pixel value is the preset value as an initial coordinate value; determining from the initial coordinate values the maximum and minimum values in the x-axis direction and the maximum and minimum values in the y-axis direction; and determining the corrected coordinate values of at least four absolute corner points from those maxima and minima.
  • the target pixel value of each target corner point is a specified pixel value, and the specified pixel value may be 255.
  • P1 represents the upper right absolute vertex of the reference luminance image;
  • C1 represents the absolute center point of the right boundary of the reference luminance image.
  • in step S203, a perspective transformation matrix is determined based on the initial coordinate values of the at least four target corner points and the corrected coordinate values of the at least four absolute corner points, and the reference brightness image is transformed based on the perspective transformation matrix to obtain the perspective-transformed reference brightness image.
  • FIG. 6 is a schematic diagram of a first scene of the second embodiment of the display image correction method of the present invention.
  • the reference luminance image is divided into four initial regions I, II, III, and IV based on the corner points; the union of initial regions I and III is determined as the first initial sub-region, the union of initial regions II and IV as the second initial sub-region, the union of initial regions I and II as the third initial sub-region, and the union of initial regions III and IV as the fourth initial sub-region.
  • the perspective-transformed reference luminance image has the same size as the reference luminance image; its width and height are denoted w and h, respectively.
  • the absolute vertex corner points of the perspective-transformed reference luminance image are constructed at the respective vertex corner points of the reference luminance image, where the upper-left, upper-right, lower-left, and lower-right absolute vertices can be expressed as (b, b), (w-b, b), (b, h-b), (w-b, h-b), respectively, with b a preset fixed constant.
  • the rectangle enclosed by the absolute vertex corner points represents the region corresponding to the display area of the reference luminance image after perspective transformation, so that the irregular display area in the reference luminance image can be mapped to a regular rectangular area.
  • the absolute corner points include absolute vertex corner points and absolute boundary center corner points.
  • the area enclosed by the absolute corner points is a quadrilateral. Based on the line connecting the absolute center points of the upper and lower boundaries and the line connecting the absolute center points of the left and right boundaries, the perspective-transformed reference luminance image is divided into four basic regions. Specifically, as shown in FIG. 6, the four basic regions are I', II', III', and IV'. Further, based on the divided basic regions, a plurality of sub-regions corresponding to the plurality of sub-images to be transformed are extracted. Continuing to refer to FIG. 6: the union of basic regions I' and III' is determined as the first basic sub-region; the union of basic regions II' and IV' as the second basic sub-region; the union of basic regions I' and IV' as the third basic sub-region; and the union of basic regions III' and IV' as the fourth basic sub-region. It will be understood that in other embodiments a greater or smaller number of sub-regions may be determined.
  • the first characteristic coordinate values of each initial sub-region are determined based on the initial coordinate values, and the second characteristic coordinate values of each basic sub-region are determined based on the corrected coordinate values; generally, the characteristic coordinate values of the four vertex corner points of each sub-region need to be determined. For example, the first characteristic coordinate values of the first initial sub-region are (x_min, y_min), (x_mid, y_min), (x_max, y_mid), (x_max, y_min), and the second characteristic coordinate values of the first basic sub-region are (min(S_1(x)), min(S_1(y))), (x_50, min(S_5(y))), (min(S_3(x)), max(S_3(y))), (x_60, max(S_6(y))).
  • a plurality of partition perspective transformation matrices are determined from each first characteristic coordinate value and its corresponding second characteristic coordinate value. It can be understood that, because the coordinate values of the initial sub-regions extracted from the reference luminance image differ slightly from those of the basic sub-regions extracted from the perspective-transformed reference luminance image, the corresponding partition perspective transformation matrices also differ slightly.
  • the perspective transformation matrix of the first partition of the first initial sub-region and the first basic sub-region can be represented as H 1 ;
  • the perspective transformation matrix of the second partition of the second initial sub-region and the second basic sub-region can be represented as H 2 ;
  • the third partition perspective transformation matrix of the third initial sub-region and the third basic sub-region is represented as H 3 ;
  • the fourth partition perspective transformation matrix of the fourth initial sub-region and the fourth basic sub-region is represented as H 4 .
  • the corresponding initial sub-regions are transformed based on the perspective transformation matrix of each partition, and after the transformation of each initial sub-region is completed, a reference brightness image that has undergone perspective transformation processing can be obtained.
  • the transformation order may be preset; for example, perspective transformation is performed sequentially according to H 1 , H 2 , H 3 , and H 4 , obtaining in turn the reference luminance image with each partition perspective-transformed.
  • the whole-area perspective transformation matrix H 5 between the reference luminance image and the perspective-transformed reference luminance image can also be obtained; after the partition perspective transformations are completed, a whole-area perspective transformation is performed to obtain the corresponding whole-area perspective-transformed reference luminance image. It can be understood that after every initial sub-region has been transformed, the perspective-transformed reference luminance image is obtained.
  • At least four corner points in the reference luminance image are detected and filtered according to a preset procedure to obtain at least four target corner points; the corrected coordinate values of at least four absolute corner points are determined from the initial coordinate values of the target corner points and their pixel values in the reference luminance image; a perspective transformation matrix is determined from the initial coordinate values of the at least four target corner points and the corrected coordinate values of the at least four absolute corner points, and the reference luminance image is transformed based on that matrix to obtain the perspective-transformed reference luminance image. Through partitioned perspective correction, the accuracy of the perspective-transformed reference luminance image is improved, which helps improve the display effect of the image.
  • FIG. 7 is a schematic diagram of functional modules of the first embodiment of the display image correction apparatus of the present invention.
  • the display-image correction apparatus is a virtual apparatus stored in the memory 1005 of the display-image correction device shown in FIG. 1, so as to realize all functions of the display-image correction program.
  • the correction device for the displayed image includes:
  • the first extraction module 10 is used for extracting the reference brightness image from the image to be detected
  • the second extraction module 20 is used for extracting the feature points of the reference luminance image
  • a perspective transformation module 30, configured to perform perspective transformation processing on the reference brightness image based on the feature points;
  • the correction module 40 is configured to perform correction processing on the reference brightness image subjected to the perspective transformation processing to obtain a corresponding corrected image.
  • perspective transformation module is also used for:
  • a perspective transformation matrix is determined based on the initial coordinate values of the at least four target corner points and the corrected coordinate values of the at least four absolute corner points, and the reference luminance image is transformed based on the perspective transformation matrix to obtain the perspective-transformed reference luminance image.
  • perspective transformation module is also used for:
  • the corresponding initial sub-regions are transformed based on the perspective transformation matrices of the respective partitions, and after the transformation of each initial sub-region is completed, a reference luminance image that has undergone perspective transformation processing is obtained.
  • perspective transformation module is also used for:
  • Correction coordinate values of at least four absolute corner points are determined according to the maximum value and the minimum value in the x-axis direction and the maximum value and the minimum value in the y-axis direction.
  • the first extraction module is also used for:
  • the pixel value of each point in the reference luminance image is marked as a preset pixel value or 0 to obtain a binarized image of the reference luminance image.
  • correction module is also used for:
  • Correction processing is performed based on the outline of the reference luminance image subjected to the perspective transformation processing.
  • correction module is also used for:
  • Distortion correction is performed on the reference luminance image that has undergone the perspective transformation process and falls within the minimum circumscribed rectangle to obtain a corrected image for display in the target display area.
  • correction module is also used for:
  • the contour of the reference luminance image subjected to the perspective transformation process is corrected by a correction factor.
  • an embodiment of the present invention further provides a computer-readable storage medium on which a display-image correction program is stored; when the program is run by a processor, the steps of the display-image correction method described above are implemented, and are not repeated here.
  • the present invention proposes a correction method, device, and computer-readable storage medium for displayed images. The method includes: extracting a reference luminance image from an image to be detected; extracting feature points of the reference luminance image and performing perspective transformation on it based on the feature points to obtain a perspective-transformed reference luminance image; and performing correction processing on the perspective-transformed reference luminance image to obtain a corrected image for display in the target display area. The image to be detected is thereby subjected to binarization, perspective transformation, and correction, yielding an image that can completely fill the display panel and improving the display effect.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Image Processing (AREA)

Abstract

Disclosed are a correction method and device for displayed images, and a computer-readable storage medium. The method includes: extracting a reference luminance image from an image to be detected; extracting feature points of the reference luminance image, and performing perspective transformation on the reference luminance image based on the feature points; and performing correction processing on the perspective-transformed reference luminance image to obtain a corresponding corrected image.

Description

Correction method and device for displayed images, and computer-readable storage medium
This application claims priority to Chinese patent application No. 202011167154.1, filed with the China National Intellectual Property Administration on October 27, 2020 and entitled "Correction method and device for displayed images, and computer-readable storage medium", the entire contents of which are incorporated herein by reference.
Technical Field
The present invention relates to the technical field of image processing, and in particular to a correction method and device for displayed images and a computer-readable storage medium.
Background
When a display panel displays an image captured by a camera, the displayed image may be deformed to some extent by tilted placement of the display panel, deviation of the camera shooting angle, camera lens distortion, and similar causes, which degrades the display effect.
Technical Problem
The present invention provides a correction method and device for displayed images and a computer-readable storage medium, aiming to improve the display effect of displayed images.
Technical Solution
To achieve the above objective, the present invention provides a correction method for displayed images, the method comprising:
extracting a reference luminance image from an image to be detected;
extracting feature points of the reference luminance image;
performing perspective transformation on the reference luminance image based on the feature points; and
performing correction processing on the perspective-transformed reference luminance image to obtain a corresponding corrected image.
Optionally, performing perspective transformation on the reference luminance image based on the feature points comprises:
detecting at least four corner points in the reference luminance image, and filtering the at least four corner points according to a preset procedure to obtain at least four target corner points;
determining corrected coordinate values of at least four absolute corner points according to the initial coordinate values of the at least four target corner points and their pixel values in the reference luminance image; and
determining a perspective transformation matrix based on the initial coordinate values of the at least four target corner points and the corrected coordinate values of the at least four absolute corner points, and transforming the reference luminance image based on the perspective transformation matrix to obtain a perspective-transformed reference luminance image.
Optionally, the step of determining a perspective transformation matrix based on the initial coordinate values of the at least four target corner points and the corrected coordinate values of the at least four absolute corner points, and transforming the reference luminance image based on the perspective transformation matrix to obtain a perspective-transformed reference luminance image, comprises:
extracting a plurality of initial sub-regions from the reference luminance image based on the at least four target corner points;
extracting a plurality of basic sub-regions from the region enclosed by the at least four absolute corner points, wherein the basic sub-regions correspond to the initial sub-regions;
determining first characteristic coordinate values of each initial sub-region based on the initial coordinate values of the at least four target corner points, and determining second characteristic coordinate values of each basic sub-region based on the corrected coordinate values of the at least four absolute corner points;
substituting each first characteristic coordinate value and its corresponding second characteristic coordinate value into the perspective transformation formula to determine a plurality of partition perspective transformation matrices; and
transforming the corresponding initial sub-regions based on the respective partition perspective transformation matrices, the perspective-transformed reference luminance image being obtained after all initial sub-regions have been transformed.
Optionally, the step of determining corrected coordinate values of at least four absolute corner points according to the initial coordinate values of the at least four target corner points and their pixel values in the reference luminance image comprises:
acquiring the pixel value of each target corner point in the reference luminance image, and marking the coordinate values of target corner points whose pixel value equals a preset value as initial coordinate values;
determining, from the initial coordinate values, the maximum and minimum values in the x-axis direction and the maximum and minimum values in the y-axis direction; and
determining the corrected coordinate values of at least four absolute corner points from the maximum and minimum values in the x-axis direction and the maximum and minimum values in the y-axis direction.
Optionally, after extracting the reference luminance image of the image to be detected, the method further comprises:
extracting, from the image to be detected, a reference luminance image displayed at a preset gray level; and
marking the pixel value of each point in the reference luminance image as a preset pixel value or 0 to obtain a binarized image of the reference luminance image.
Optionally, performing correction processing on the perspective-transformed reference luminance image comprises:
detecting and acquiring the contour of the perspective-transformed reference luminance image; and
performing correction processing based on the contour of the perspective-transformed reference luminance image.
Optionally, performing correction processing based on the contour of the perspective-transformed reference luminance image comprises:
performing distortion correction on the part of the perspective-transformed reference luminance image falling within the minimum bounding rectangle to obtain a corrected image for display in the target display area.
Optionally, performing correction processing based on the contour of the perspective-transformed reference luminance image comprises:
correcting the contour of the perspective-transformed reference luminance image by means of modification factors.
In addition, to achieve the above objective, the present invention further provides a correction device for displayed images, comprising a processor, a memory, and a display-image correction program stored in the memory; when the program is run by the processor, the steps of the correction method for displayed images described in any one of the above are implemented.
In addition, to achieve the above objective, the present invention further provides a computer-readable storage medium on which a display-image correction program is stored; when the program is run by a processor, the steps of the correction method for displayed images described above are implemented.
Beneficial Effects
Compared with the prior art, the present invention provides a correction method and device for displayed images and a computer-readable storage medium. The method includes: extracting a reference luminance image from an image to be detected; extracting feature points of the reference luminance image and performing perspective transformation on the reference luminance image based on the feature points; and performing correction processing on the perspective-transformed reference luminance image to obtain a corresponding corrected image. The image to be detected is thereby subjected to perspective transformation and correction, yielding an image that can completely fill the display panel and improving the display effect.
Brief Description of the Drawings
FIG. 1 is a schematic diagram of the hardware structure of the display-image correction device involved in the embodiments of the present invention;
FIG. 2 is a schematic flowchart of the first embodiment of the display-image correction method of the present invention;
FIG. 3 is a schematic diagram of a first scene of the first embodiment of the display-image correction method of the present invention;
FIG. 4 is a schematic diagram of a second scene of the first embodiment of the display-image correction method of the present invention;
FIG. 5 is a schematic flowchart of the second embodiment of the display-image correction method of the present invention;
FIG. 6 is a schematic diagram of a first scene of the second embodiment of the display-image correction method of the present invention;
FIG. 7 is a schematic diagram of the functional modules of the first embodiment of the display-image correction apparatus of the present invention.
The realization of the objectives, functional features, and advantages of the present invention will be further described with reference to the accompanying drawings in conjunction with the embodiments.
Embodiments of the Invention
It should be understood that the specific embodiments described here are only intended to explain the present invention and are not intended to limit it.
The display-image correction device mainly involved in the embodiments of the present invention is a network-connectable device; it may be a server, a cloud platform, or the like.
Referring to FIG. 1, FIG. 1 is a schematic diagram of the hardware structure of the display-image correction device involved in the embodiments of the present invention. In this embodiment, the device may include a processor 1001 (for example, a Central Processing Unit, CPU), a communication bus 1002, an input port 1003, an output port 1004, and a memory 1005. The communication bus 1002 realizes connection and communication among these components; the input port 1003 is used for data input; the output port 1004 is used for data output; the memory 1005 may be high-speed RAM or stable non-volatile memory, such as disk storage, and may optionally be a storage device independent of the processor 1001. Those skilled in the art will understand that the hardware structure shown in FIG. 1 does not limit the present invention; more or fewer components, combined components, or a different arrangement of components may be included.
Continuing to refer to FIG. 1, the memory 1005, as a readable storage medium, may include an operating system, a network communication module, an application module, and a display-image correction program. In FIG. 1, the network communication module is mainly used to connect to a server and communicate data with it, while the processor 1001 can invoke the display-image correction program stored in the memory 1005 and execute the display-image correction method provided by the embodiments of the present invention.
An embodiment of the present invention provides a correction method for displayed images.
Referring to FIG. 2, FIG. 2 is a schematic flowchart of the first embodiment of the display-image correction method of the present invention.
In this embodiment, the correction method is applied to a display-image correction device and comprises:
Step S101: extracting a reference luminance image from an image to be detected;
Step S102: extracting feature points of the reference luminance image;
Step S103: performing perspective transformation on the reference luminance image based on the feature points;
Step S104: performing correction processing on the perspective-transformed reference luminance image to obtain a corresponding corrected image.
In this embodiment, the displayed image can be displayed on at least an LCD (Liquid Crystal Display) or a Mini LED (Light Emitting Diode) display screen.
Specifically, the displayed image involved in step S101 may be a color image in the RGB (red, green, blue) color mode, and the displayed image has different gray levels. Gray level refers to the hierarchy of luminance between the darkest black and the brightest white of a display, i.e., the rendering of light-dark contrast and black-white transitions; the clearer the image and the more natural the transitions, the better. Generally, 32-level and 256-level gray scales are mainly used.
Specifically, after step S101, the method includes: extracting, from the image to be detected, a reference luminance image displayed at a preset gray level.
Further, the pixel value of each point in the reference luminance image is marked as a preset pixel value or 0 to obtain a binarized image of the reference luminance image.
Specifically, the imaging data of an industrial camera is converted into an image to be detected in an image data format. From the image to be detected, the reference luminance image displayed at the specified gray level is extracted. The pixel value of each pixel is acquired; pixels whose value is greater than a preset pixel value are marked with a designated pixel value, and pixels whose value is less than or equal to the preset pixel value are marked with 0. In this way, the pixel value of each point in the reference luminance image is marked as the designated pixel value or 0, and the binarization parameter is set to 0.25 × max(I), where max(I) is the maximum pixel value. The binarized image of the reference luminance image is thereby obtained. In this embodiment, the designated pixel value may be a gray-level value such as 255 or 32, with the corresponding max(I) being 255, 32, and so on.
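The binarization rule just described can be sketched in a few lines of Python. This is a minimal illustration only (the function name and the list-of-rows image format are our own, not from the patent), using the stated threshold of 0.25 × max(I) and a designated foreground value such as 255:

```python
def binarize(image, designated=255):
    """Binarize a grayscale image given as a list of rows of pixel values.

    Per the scheme above: the threshold is 0.25 * max(I); pixels strictly
    above it are marked with the designated value, all others with 0.
    """
    max_i = max(max(row) for row in image)
    threshold = 0.25 * max_i
    return [[designated if p > threshold else 0 for p in row] for row in image]

# Example: max(I) = 200, so the threshold is 50.0
print(binarize([[10, 200], [50, 80]]))  # -> [[0, 255], [0, 255]]
```

The same helper covers the 32-level case by passing `designated=32`.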
Further, after the binarized image is obtained, the display area of the reference luminance image is coarsely located to obtain a more accurate, coarsely located binarized image of the reference luminance image. Specifically, the binarized contour of the reference luminance image is detected, and its contour area is computed. In this embodiment, binarized-contour extraction can be implemented based on a TensorFlow convolutional neural network. During contour extraction, when the contour area is greater than a first preset contour-area threshold, the corresponding binarized contour is marked as a valid binarized contour. Further, the minimum bounding rectangle is extracted from the valid binarized contour, and expansion is then performed based on the coordinates of the minimum bounding rectangle to obtain the coarsely located reference luminance image. The minimum bounding rectangle (MBR) refers to the maximal extent of a set of two-dimensional shapes (e.g. points, lines, polygons) in two-dimensional coordinates, i.e., the rectangle bounded by the maximum and minimum x-coordinates and the maximum and minimum y-coordinates among the vertices of the given shapes.
The preset value is subtracted from both the x and y coordinates of the upper-left vertex of the minimum bounding rectangle; the preset value is subtracted from the x coordinate of its lower-left vertex; the preset value is added to both the x and y coordinates of its lower-right vertex; and the x and y coordinates of its upper-right vertex are kept unchanged, yielding the coarsely located reference luminance image. Specifically, referring to FIG. 3, a schematic diagram of a first scene of the first embodiment of the display-image correction method of the present invention: the valid binarized contour is represented by the solid-line box a, the minimum bounding rectangle extracted from it by the dashed rectangle b, and the coarsely located contour by the dashed box c; the magnitude of the preset value is related to the minimum bounding rectangle and the coarsely located contour, and may be the d shown in FIG. 3.
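As a sketch, the minimum bounding rectangle and the asymmetric expansion described above (upper-left shifted by −d in x and y, lower-left by −d in x, lower-right by +d in x and y, upper-right unchanged) can be written as follows; the function name and dictionary keys are illustrative only:

```python
def coarse_locate(points, d):
    """Axis-aligned bounding box of contour points, expanded per the text:
    upper-left shifted by (-d, -d), lower-left by (-d, 0),
    lower-right by (+d, +d), upper-right kept unchanged.
    Image coordinates: origin at the top-left, y grows downward.
    """
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    x_min, x_max = min(xs), max(xs)
    y_min, y_max = min(ys), max(ys)
    return {
        "top_left": (x_min - d, y_min - d),
        "top_right": (x_max, y_min),
        "bottom_left": (x_min - d, y_max),
        "bottom_right": (x_max + d, y_max + d),
    }

pts = [(10, 10), (40, 12), (12, 38), (42, 40)]
print(coarse_locate(pts, 5))
```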
In addition, a Mini LED display screen is composed of countless independent, discrete Mini LED lamp beads. Unlike an LCD, whose entire area emits light uniformly, a Mini LED display locally consists of discrete lamp beads, countless of which together make up a display that emits light as a whole. Therefore, morphological processing is required after the adaptive binarization. The specific procedure is: first construct a dilation kernel, whose size may be set to (20, 20) and whose type may be elliptical; then use dilation to expand the discrete lamp-bead regions one by one into the continuous display region corresponding to the displayed image.
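The dilation step can be illustrated without any imaging library. The following is a hedged sketch with a small square structuring element (the patent uses a 20×20 elliptical kernel via a standard morphology routine, which this plain-Python version only approximates); it shows how dilation merges discrete lamp-bead dots into one continuous region:

```python
def dilate(image, k):
    """Binary dilation with a (2k+1)x(2k+1) square structuring element.

    Every foreground pixel stamps its neighborhood into the output,
    merging nearby discrete dots into a continuous region.
    """
    h, w = len(image), len(image[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if image[y][x]:
                for dy in range(-k, k + 1):
                    for dx in range(-k, k + 1):
                        yy, xx = y + dy, x + dx
                        if 0 <= yy < h and 0 <= xx < w:
                            out[yy][xx] = 1
    return out

dots = [[0, 0, 0, 0], [0, 1, 0, 1], [0, 0, 0, 0]]  # two discrete "lamp beads"
print(dilate(dots, 1))  # the two dots merge into one filled region
```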
After the binarized image of the reference luminance image is obtained, steps S102-S103 are executed: extracting feature points of the reference luminance image, and performing perspective transformation on the reference luminance image based on the feature points.
It is worth noting that in steps S102-S103, the reference luminance image refers to the binarized image of the reference luminance image.
Perspective transformation exploits the collinearity of the perspective center, an image point, and a target point: following the law of perspective rotation, the bearing plane (perspective plane) is rotated by some angle around the trace line (perspective axis), destroying the original bundle of projecting rays while keeping the projected geometry on the bearing plane unchanged.
Generally, the perspective transformation formula can be expressed as:

[x_1, y_1, z_1]^T = A · [x_0, y_0, z_0]^T

where [x_0, y_0, z_0] denotes the initial coordinate values in the reference luminance image, [x_1, y_1, z_1] denotes the corrected coordinate values in the preliminarily corrected image, and

A = | a_11  a_12  a_13 |
    | a_21  a_22  a_23 |
    | a_31  a_32  a_33 |

denotes the perspective transformation matrix. Thus, the perspective transformation matrix can be determined from the coordinate values of several corresponding points before and after correction, and the reference luminance image to be transformed can then be transformed based on this matrix.
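The eight unknowns of such a matrix (with the bottom-right entry fixed to 1) are determined from four point correspondences. The following self-contained sketch (helper names are our own, not from the patent) solves the standard 8×8 linear system by Gauss-Jordan elimination and applies the resulting matrix to a point:

```python
def solve_homography(src, dst):
    """Solve the 3x3 perspective matrix (a33 fixed to 1) that maps the
    four src points onto the four dst points, via Gauss-Jordan
    elimination of the standard 8x8 linear system."""
    rows, rhs = [], []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); rhs.append(u)
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y]); rhs.append(v)
    n = 8
    M = [row + [b] for row, b in zip(rows, rhs)]  # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))  # partial pivoting
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col and M[r][col]:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    h = [M[i][n] / M[i][i] for i in range(n)] + [1.0]
    return [h[0:3], h[3:6], h[6:9]]

def apply_homography(H, p):
    """Map point p through H with the perspective divide."""
    x, y = p
    d = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / d,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / d)
```

For a pure translation of the four corners, the recovered matrix has zero perspective terms and interior points translate accordingly.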
Specifically, multiple corner points of the reference luminance image can be extracted by a corner detection method, and multiple feature points can then be extracted from these corner points; the feature points may be several of the following eight points: the four vertex corner points of the reference luminance image and the midpoints of its four edges. After the feature points of the reference luminance image are determined, the coordinate values of the corrected preliminary image are determined from them. Understandably, in the ideal case the image displayed on the display panel is a rectangular image of a certain size that is essentially the same size as the reference luminance image, with only slight differences, so the coordinate values of the corrected preliminary image can be determined from the coordinate values of the feature points of the reference luminance image. For example, the maximum x value x_max, minimum x value x_min, maximum y value y_max, and minimum y value y_min are obtained from the coordinate values of the feature points; the coordinate values of the corrected preliminary image are then determined as (x_min, y_max), (x_min, y_min), (x_max, y_min), (x_max, y_max).
In this way, the perspective transformation matrix can be determined from the coordinate values of the feature points of the reference luminance image and the coordinate values of the corrected preliminary image, and the reference luminance image to be transformed can be transformed based on this matrix to obtain the corrected image.
Likewise, the perspective transformation matrix can be determined from the coordinate values of at least four corner points of the reference luminance image and the coordinate values of the corrected, perspective-transformed reference luminance image, and the reference luminance image to be transformed can be transformed based on this matrix to obtain the perspective-transformed reference luminance image; generally, the perspective-transformed reference luminance image is the perspective projection of the binarized image of the reference luminance image.
After the perspective-transformed reference luminance image is obtained, step S104 is executed: performing correction processing on the perspective-transformed reference luminance image to obtain a corresponding corrected image.
Specifically, the contour of the perspective-transformed reference luminance image is detected and acquired, and correction processing is performed based on that contour.
Specifically, the minimum bounding rectangle of the contour is extracted to obtain the part of the perspective-transformed reference luminance image falling within it; distortion correction is performed on this part to obtain a corrected image for display in the target display area.
Specifically, contour extraction is implemented based on a TensorFlow convolutional neural network. During contour extraction, the contour area is computed; when it is greater than a second preset contour-area threshold, the corresponding contour is marked as a valid contour. The minimum bounding rectangle of the contour is further extracted, obtaining the part of the perspective-transformed reference luminance image falling within it.
In addition, the contour of the perspective-transformed reference luminance image can also be corrected by modification factors to obtain the perspective-transformed reference luminance image. Specifically, several vertex coordinates of the contour are first taken; modification factors for the respective boundaries are determined from these vertex coordinates; the contour is then corrected based on the modification factors of the boundaries, yielding the perspective-transformed reference luminance image. Specifically, the upper-left and lower-right vertices of the contour are determined as (x_1, y_1), (x_2, y_2), and it is judged whether the pixel value of each pixel on each edge line of the horizontally located rectangle equals a specific pixel value, which in this embodiment may be 255. The edge lines of the horizontally located rectangle include the top, bottom, left, and right edge lines.
In this embodiment, the coordinates of the top edge line are expressed as I_up([x_1, x_2], d_1), with d_1 initialized to y_1. It is judged whether any pixel on the top edge line has the specific pixel value: if one or more such pixels exist, the modification factor of the top edge line is determined as d_1 - y_1; otherwise it is determined as d_1 - 1.
The coordinates of the bottom edge line are expressed as I_down([x_1, x_2], d_2), with d_2 initialized to y_2. It is judged whether any pixel on the bottom edge line has the specific pixel value: if one or more such pixels exist, the modification factor of the bottom edge line is determined as y_2 - d_2; otherwise it is determined as d_2 - 1.
The coordinates of the left edge line are expressed as I_left(d_3, [y_1, y_2]), with d_3 initialized to x_1. It is judged whether any pixel on the left edge line has the specific pixel value: if one or more such pixels exist, the modification factor of the left edge line is determined as d_3 - x_1; otherwise it is determined as d_3 + 1.
The coordinates of the right edge line are expressed as I_right(d_4, [y_1, y_2]), with d_4 initialized to x_2. It is judged whether any pixel on the right edge line has the specific pixel value: if one or more such pixels exist, the modification factor of the right edge line is determined as x_2 - d_4; otherwise it is determined as d_4 - 1.
Based on the above modification factors, the upper-left and lower-right vertices (x'_1, y'_1), (x'_2, y'_2) of the perspective-transformed reference luminance image are determined as (x'_1, y'_1) = (d_3, d_1) and (x'_2, y'_2) = (d_4, d_2), respectively. From these vertex coordinates, the region of the perspective-transformed reference luminance image can be determined.
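The per-edge adjustment can be condensed into a sketch that shrinks each side of a located rectangle until its edge line touches a foreground pixel. This deliberately simplifies the modification-factor bookkeeping (d_1-d_4) above into equivalent loops; the function name and box format are illustrative only:

```python
def refine_box(image, box, fg=255):
    """Shrink an axis-aligned box (x1, y1, x2, y2) until each edge line
    contains at least one foreground pixel -- a simplified rendering of
    the per-edge modification factors described above."""
    x1, y1, x2, y2 = box
    while y1 < y2 and fg not in image[y1][x1:x2 + 1]:   # top edge inward
        y1 += 1
    while y2 > y1 and fg not in image[y2][x1:x2 + 1]:   # bottom edge inward
        y2 -= 1
    while x1 < x2 and not any(image[y][x1] == fg for y in range(y1, y2 + 1)):
        x1 += 1                                          # left edge inward
    while x2 > x1 and not any(image[y][x2] == fg for y in range(y1, y2 + 1)):
        x2 -= 1                                          # right edge inward
    return x1, y1, x2, y2
```

Running it on a 5×5 image whose foreground occupies rows 1-3 and columns 1-3 tightens the full-image box (0, 0, 4, 4) to (1, 1, 3, 3).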
Further, since shooting-angle deviation, lens distortion, and other image-degrading factors may be present during capture, the captured image may exhibit barrel-type radial distortion. In this embodiment, the part of the perspective-transformed reference luminance image falling within the minimum bounding rectangle is corrected based on the division model. Specifically, arcs are extracted from each edge contour using a fast arc-extraction method, giving the arc corresponding to each edge, and the parameters of each arc are computed. A distortion-center candidate region is delineated, centered on the perspective-transformed reference luminance image within the minimum bounding rectangle. Based on the general equation of a circle and the parameters of each arc, the distortion coefficient of each arc is computed with each pixel of the candidate region taken as the distortion center; the concentration intervals of these distortion coefficients and the number of coefficients in each interval are tallied per candidate pixel, and the mean of all coefficients in each candidate pixel's concentration interval is computed. The pixel whose concentration interval contains the most distortion coefficients is taken as the actual distortion center, and the mean of all coefficients in that interval as the actual distortion coefficient. The distorted image is then automatically corrected according to the actual distortion center and coefficient, giving a corrected image for display in the target display area. The corrected image can thus completely fill the display panel. Referring to FIG. 4, a schematic diagram of a second scene of the first embodiment of the display-image correction method of the present invention: the perspective-transformed reference luminance image obtained after perspective transformation (right side of FIG. 4) is not a complete rectangle; after the secondary correction, a complete rectangle is obtained (left side of FIG. 4).
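The division model referenced here maps a distorted point back toward the undistorted one as x_u = c + (x_d − c)/(1 + λr²), with r the distance of the distorted point from the distortion center c. The sketch below shows only the model itself (the patent's contribution, estimating the center c and coefficient λ from fitted arcs, is not reproduced; the function name is our own):

```python
def undistort_division(p, center, lam):
    """Division-model radial undistortion of a single point.

    p      -- distorted point (x, y)
    center -- distortion center (cx, cy)
    lam    -- distortion coefficient lambda (0 means no distortion)
    """
    (x, y), (cx, cy) = p, center
    dx, dy = x - cx, y - cy
    r2 = dx * dx + dy * dy
    d = 1.0 + lam * r2
    return (cx + dx / d, cy + dy / d)

# lambda = 0 leaves the point unchanged; a positive lambda pulls it
# toward the center, counteracting barrel distortion.
print(undistort_division((3, 4), (0, 0), 0.0))
```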
Through the above scheme, this embodiment extracts a reference luminance image from an image to be detected; extracts feature points of the reference luminance image and performs perspective transformation on it based on those feature points; and performs correction processing on the perspective-transformed reference luminance image to obtain a corresponding corrected image. The image to be detected is thereby subjected to perspective transformation and correction, yielding an image that can completely fill the display panel and improving the display effect.
As shown in FIG. 5, the second embodiment of the present invention proposes a correction method for displayed images. Based on the first embodiment shown in FIG. 2, the step of performing perspective transformation on the reference luminance image based on the at least four feature points to obtain a perspective-transformed reference luminance image comprises:
Step S201: detecting at least four corner points in the reference luminance image, and filtering the at least four corner points according to a preset procedure to obtain at least four target corner points;
Step S202: determining corrected coordinate values of at least four absolute corner points according to the initial coordinate values of the at least four target corner points and their pixel values in the reference luminance image;
Step S203: determining a perspective transformation matrix based on the initial coordinate values of the at least four target corner points and the corrected coordinate values of the at least four absolute corner points, and transforming the reference luminance image based on the perspective transformation matrix to obtain a perspective-transformed reference luminance image.
In this embodiment, at least four corner points in the reference luminance image are detected by the Harris corner detection method. The Harris corner extraction algorithm, also called the Plessey algorithm, was developed by Chris Harris and Mike Stephens from H. Moravec's algorithm and extracts corners through the autocorrelation matrix. Inspired by the autocorrelation function in signal processing, this operator gives a matrix M associated with the autocorrelation function. The eigenvalues of M are the first-order curvatures of the autocorrelation function; if both curvature values are high, the point is regarded as a corner feature. First, each pixel of the image is filtered with horizontal and vertical difference operators to obtain filtered pixels, and a filter matrix is determined from them. Gaussian smoothing is applied to the values in the filter matrix to remove unnecessary isolated points and bumps, giving a new filter matrix, from which the corner response function of each pixel is obtained. Pixels in the reference luminance image whose corner response exceeds a corner threshold are determined as corner points; at least four corner points are thus obtained.
Since the number of corner points is relatively large, and perspective transformation generally needs only four points, the corner points must be filtered to obtain the target corner points. In this embodiment, filtering is based on the coordinate values of the corner points.
Specifically, the maximum and minimum coordinates of all corner points in the x-axis and y-axis directions are first determined and denoted x_min, x_max, y_min, y_max. The midpoint values of all corner-point coordinates in the x-axis and y-axis directions are then computed as

x_mid = (x_min + x_max) / 2,  y_mid = (y_min + y_max) / 2.

Then, for each corner point (x, y), the distances to the four extreme points (upper-left, upper-right, lower-left, lower-right) and to the four boundary center points are computed as follows (note that in image coordinates the upper-left corner of the image is the origin):

distance to the upper-left extreme point: d_1 = sqrt((x - x_min)^2 + (y - y_min)^2);
distance to the upper-right extreme point: d_2 = sqrt((x - x_max)^2 + (y - y_min)^2);
distance to the lower-left extreme point: d_3 = sqrt((x - x_min)^2 + (y - y_max)^2);
distance to the lower-right extreme point: d_4 = sqrt((x - x_max)^2 + (y - y_max)^2);
distance to the upper boundary center point: d_5 = sqrt((x - x_mid)^2 + (y - y_min)^2);
distance to the lower boundary center point: d_6 = sqrt((x - x_mid)^2 + (y - y_max)^2);
distance to the left boundary center point: d_7 = sqrt((x - x_min)^2 + (y - y_mid)^2);
distance to the right boundary center point: d_8 = sqrt((x - x_max)^2 + (y - y_mid)^2).
After the distances are computed, the point with the smallest distance value is determined as the corresponding target corner point. In this embodiment, the target corner points include target vertices and center corner points. The target vertices include the upper-left, upper-right, lower-left, and lower-right vertex corner points; the center corner points include the upper-boundary, lower-boundary, left-boundary, and right-boundary center corner points. Specifically, since the minimum distance values approach 0, the target corner points are determined as:
upper-left vertex corner point: P_left_up = Min(d_1(p_i));
upper-right vertex corner point: P_right_up = Min(d_2(p_i));
lower-left vertex corner point: P_left_down = Min(d_3(p_i));
lower-right vertex corner point: P_right_down = Min(d_4(p_i));
upper-boundary center corner point: C_up = Min(d_5(p_i));
lower-boundary center corner point: C_down = Min(d_6(p_i));
left-boundary center corner point: C_left = Min(d_7(p_i));
right-boundary center corner point: C_right = Min(d_8(p_i)).
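The filtering rule above (for each extreme point and boundary midpoint, keep the candidate corner nearest to it) can be sketched as follows; the dictionary keys are our own labels for P_left_up through C_right:

```python
from math import hypot

def filter_corners(corners):
    """Pick eight target corners from detected candidates: for each of
    the four extreme points and four boundary midpoints, keep the
    candidate corner at minimum Euclidean distance (d_1 .. d_8)."""
    xs = [x for x, _ in corners]
    ys = [y for _, y in corners]
    x_min, x_max, y_min, y_max = min(xs), max(xs), min(ys), max(ys)
    x_mid, y_mid = (x_min + x_max) / 2, (y_min + y_max) / 2
    anchors = {
        "left_up": (x_min, y_min), "right_up": (x_max, y_min),
        "left_down": (x_min, y_max), "right_down": (x_max, y_max),
        "up": (x_mid, y_min), "down": (x_mid, y_max),
        "left": (x_min, y_mid), "right": (x_max, y_mid),
    }
    return {name: min(corners, key=lambda p: hypot(p[0] - ax, p[1] - ay))
            for name, (ax, ay) in anchors.items()}
```

With candidates that already sit exactly on the vertices and midpoints, each anchor simply selects itself; a stray interior corner such as (4, 4) is filtered out.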
Referring further to FIG. 3, P denotes the upper-right corner point of the reference luminance image and C denotes its right-boundary center point.
After the target corner points are obtained, step S202 is executed: determining corrected coordinate values of at least four absolute corner points according to the initial coordinate values of the at least four target corner points and their pixel values in the reference luminance image. The absolute corner points are feature points of the perspective-transformed reference luminance image and correspond to the respective target corner points in the reference luminance image.
Step S202 includes: acquiring the pixel value of each target corner point in the reference luminance image, and marking the coordinate values of target corner points whose pixel value equals a preset value as initial coordinate values; determining, from the initial coordinate values, the maximum and minimum values in the x-axis direction and the maximum and minimum values in the y-axis direction; and determining the corrected coordinate values of at least four absolute corner points from those maxima and minima.
Specifically, the initial coordinate values of the target corner points are first expressed as (x_i0, y_i0), where i = 1, 2, ..., 8, i corresponding to the respective vertex coordinates, with 1-8 corresponding respectively to P_left_up, P_right_up, P_left_down, P_right_down, C_up, C_down, C_left, C_right.
Then, in the reference luminance image, it is judged whether the target pixel value of each target corner point equals the designated pixel value, which may be 255. The target pixel value is expressed as I_i(x, y); the initial coordinate values of target corner points whose target pixel value equals the designated value are added to a preset coordinate set, expressed in this embodiment as S_i(x), S_i(y), where i = 1, 2, ..., 8, with x ∈ [max(x_i0 - a, 0), min(x_i0 + a, w)] and y ∈ [max(y_i0 - a, 0), min(y_i0 + a, h)], a being a set fixed constant in the range 0-255, and w, h being the numbers of horizontal and vertical pixels of the reference luminance image.
Further, the position coordinates (x_i1, y_i1) of the absolute vertices and boundary midpoints are computed based on S_i(x), S_i(y), where i = 1, 2, ..., 8, corresponding respectively to the absolute vertex corner points and boundary center corner points P'_left_up, P'_right_up, P'_left_down, P'_right_down, C'_up, C'_down, C'_left, C'_right.
The maximum and minimum values in the x- and y-axis directions are filtered out from the initial coordinate values in the preset coordinate set; the corresponding corrected coordinate values of at least four absolute corner points are determined from these maxima and minima. In this embodiment, the corner coordinates and their corrected coordinate values are determined as:
upper-left absolute vertex corner point: (x_11, y_11) = (min(S_1(x)), min(S_1(y)));
upper-right absolute vertex corner point: (x_21, y_21) = (max(S_2(x)), min(S_2(y)));
lower-left absolute vertex corner point: (x_31, y_31) = (min(S_3(x)), max(S_3(y)));
lower-right absolute vertex corner point: (x_41, y_41) = (max(S_4(x)), max(S_4(y)));
upper-boundary absolute center point: (x_51, y_51) = (x_50, min(S_5(y)));
lower-boundary absolute center point: (x_61, y_61) = (x_60, max(S_6(y)));
left-boundary absolute center point: (x_71, y_71) = (min(S_7(x)), y_70);
right-boundary absolute center point: (x_81, y_81) = (max(S_8(x)), y_80).
Processing according to the above procedure yields the coordinates of each absolute vertex. Continuing to refer to FIG. 3, P1 denotes the upper-right absolute corner point of the reference luminance image and C1 denotes its right-boundary absolute center point.
Further, step S203 is executed: determining a perspective transformation matrix based on the initial coordinate values of the at least four target corner points and the corrected coordinate values of the at least four absolute corner points, and transforming the reference luminance image based on the perspective transformation matrix to obtain a perspective-transformed reference luminance image.
Specifically, a plurality of initial sub-regions are extracted from the reference luminance image based on the at least four target corner points. Referring to FIG. 6, a schematic diagram of a first scene of the second embodiment of the display-image correction method of the present invention: the reference luminance image is divided into four initial regions I, II, III, IV based on the corner points; the union of initial regions I and III is determined as the first initial sub-region, the union of initial regions II and IV as the second initial sub-region, the union of initial regions I and IV as the third initial sub-region, and the union of initial regions III and IV as the fourth initial sub-region.
Further, a plurality of basic sub-regions corresponding to the initial sub-regions are extracted from the region enclosed by the at least four absolute corner points, and four basic regions of the perspective-transformed reference luminance image are constructed based on the absolute corner points. In this embodiment, the perspective-transformed reference luminance image has the same size as the reference luminance image; its width and height are denoted w and h, respectively. At each vertex corner point of the reference luminance image, the corresponding absolute vertex corner point of the perspective-transformed reference luminance image is constructed, where the upper-left, upper-right, lower-left, and lower-right absolute vertices can be expressed as (b, b), (w - b, b), (b, h - b), (w - b, h - b), respectively, with b a preset fixed constant. The rectangle enclosed by the absolute vertex corner points represents the region corresponding to the display area of the reference luminance image after perspective transformation; the irregular display area in the reference luminance image can thus be mapped to a regular rectangular area.
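The construction of the four absolute vertex corner points is a one-liner, sketched here with illustrative names (w and h are the image width and height, b the preset inset constant):

```python
def absolute_vertices(w, h, b):
    """Corrected positions of the four absolute vertex corner points:
    a rectangle inset by b pixels inside a w x h perspective-transformed
    image, per the (b, b) .. (w - b, h - b) construction above."""
    return {"left_up": (b, b), "right_up": (w - b, b),
            "left_down": (b, h - b), "right_down": (w - b, h - b)}

print(absolute_vertices(100, 80, 10))
```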
In this embodiment, the absolute corner points include absolute vertex corner points and absolute boundary center corner points. The region enclosed by the absolute corner points is a quadrilateral; based on the line connecting the upper-boundary and lower-boundary absolute center points and the line connecting the left-boundary and right-boundary absolute center points, the perspective-transformed reference luminance image is divided into four basic regions. Specifically, as shown in FIG. 6, the four basic regions are I', II', III', IV'. Further, based on the divided basic regions, a plurality of sub-regions corresponding to the plurality of sub-images to be transformed are extracted. Continuing to refer to FIG. 6, based on the four basic regions I', II', III', IV', the union of basic regions I' and III' is determined as the first basic sub-region, the union of basic regions II' and IV' as the second basic sub-region, the union of basic regions I' and IV' as the third basic sub-region, and the union of basic regions III' and IV' as the fourth basic sub-region. Understandably, in other embodiments a greater or smaller number of sub-regions may be determined.
Further, the first characteristic coordinate values of each initial sub-region are determined based on the initial coordinate values, and the second characteristic coordinate values of each basic sub-region are determined based on the corrected coordinate values. Generally, the characteristic coordinate values of the four vertex corner points of each sub-region need to be determined. For example, the first characteristic coordinate values of the first initial sub-region are (x_min, y_min), (x_mid, y_min), (x_max, y_mid), (x_max, y_min); similarly, the second characteristic coordinate values of the first basic sub-region are (min(S_1(x)), min(S_1(y))), (x_50, min(S_5(y))), (min(S_3(x)), max(S_3(y))), (x_60, max(S_6(y))).
A plurality of partition perspective transformation matrices are determined from each first characteristic coordinate value and its corresponding second characteristic coordinate value. Understandably, because the coordinate values of the initial sub-regions extracted from the reference luminance image differ slightly from those of the basic sub-regions extracted from the constructed perspective-transformed reference luminance image, the corresponding partition perspective transformation matrices also differ slightly. In this embodiment, the first partition perspective transformation matrix of the first initial sub-region and the first basic sub-region may be denoted H_1; the second partition perspective transformation matrix of the second initial sub-region and the second basic sub-region H_2; the third partition perspective transformation matrix of the third initial sub-region and the third basic sub-region H_3; and the fourth partition perspective transformation matrix of the fourth initial sub-region and the fourth basic sub-region H_4.
The corresponding initial sub-regions are transformed based on the respective partition perspective transformation matrices; after all initial sub-regions have been transformed, the perspective-transformed reference luminance image is obtained. In this embodiment, the transformation order may be preset, for example performing perspective transformation sequentially according to H_1, H_2, H_3, H_4 and obtaining in turn the partition-wise perspective-transformed reference luminance image. In addition, the whole-area perspective transformation matrix H_5 between the reference luminance image and the perspective-transformed reference luminance image can be obtained; after the partition perspective transformations are completed, a whole-area perspective transformation is performed to obtain the corresponding whole-area perspective-transformed reference luminance image. Understandably, after every initial sub-region has been transformed, the perspective-transformed reference luminance image is obtained.
Through the above scheme, this embodiment detects at least four corner points in the reference luminance image and filters them according to a preset procedure to obtain at least four target corner points; determines corrected coordinate values of at least four absolute corner points according to the initial coordinate values of the target corner points and their pixel values in the reference luminance image; determines a perspective transformation matrix based on the initial coordinate values of the at least four target corner points and the corrected coordinate values of the at least four absolute corner points; and transforms the reference luminance image based on the perspective transformation matrix to obtain a perspective-transformed reference luminance image. Partitioned perspective correction thus improves the accuracy of the perspective-transformed reference luminance image and helps improve the display effect of the image.
In addition, this embodiment further provides a correction apparatus for displayed images. Referring to FIG. 7, FIG. 7 is a schematic diagram of the functional modules of the first embodiment of the display-image correction apparatus of the present invention.
In this embodiment, the display-image correction apparatus is a virtual apparatus stored in the memory 1005 of the display-image correction device shown in FIG. 1, so as to realize all functions of the display-image correction program: extracting a reference luminance image from an image to be detected; extracting feature points of the reference luminance image; performing perspective transformation on the reference luminance image based on the feature points; and performing correction processing on the perspective-transformed reference luminance image to obtain a corresponding corrected image.
Specifically, the display-image correction apparatus comprises:
a first extraction module 10 for extracting a reference luminance image from an image to be detected;
a second extraction module 20 for extracting feature points of the reference luminance image;
a perspective transformation module 30 for performing perspective transformation on the reference luminance image based on the feature points; and
a correction module 40 for performing correction processing on the perspective-transformed reference luminance image to obtain a corresponding corrected image.
Further, the perspective transformation module is also used for:
detecting at least four corner points in the reference luminance image, and filtering the at least four corner points according to a preset procedure to obtain at least four target corner points;
determining corrected coordinate values of at least four absolute corner points according to the initial coordinate values of the at least four target corner points and their pixel values in the reference luminance image; and
determining a perspective transformation matrix based on the initial coordinate values of the at least four target corner points and the corrected coordinate values of the at least four absolute corner points, and transforming the reference luminance image based on the perspective transformation matrix to obtain a perspective-transformed reference luminance image.
Further, the perspective transformation module is also used for:
extracting a plurality of initial sub-regions from the reference luminance image based on the at least four target corner points;
extracting a plurality of basic sub-regions from the region enclosed by the at least four absolute corner points, wherein the basic sub-regions correspond to the initial sub-regions;
determining first characteristic coordinate values of each initial sub-region based on the initial coordinate values of the at least four target corner points, and determining second characteristic coordinate values of each basic sub-region based on the corrected coordinate values of the at least four absolute corner points;
substituting each first characteristic coordinate value and its corresponding second characteristic coordinate value into the perspective transformation formula to determine a plurality of partition perspective transformation matrices; and
transforming the corresponding initial sub-regions based on the respective partition perspective transformation matrices, the perspective-transformed reference luminance image being obtained after all initial sub-regions have been transformed.
Further, the perspective transformation module is also used for:
acquiring the pixel value of each target corner point in the reference luminance image, and marking the coordinate values of target corner points whose pixel value equals a preset value as initial coordinate values;
determining, from the initial coordinate values, the maximum and minimum values in the x-axis direction and the maximum and minimum values in the y-axis direction; and
determining the corrected coordinate values of at least four absolute corner points from the maximum and minimum values in the x-axis direction and the maximum and minimum values in the y-axis direction.
Further, the first extraction module is also used for:
extracting, from the image to be detected, a reference luminance image displayed at a preset gray level; and
marking the pixel value of each point in the reference luminance image as a preset pixel value or 0 to obtain a binarized image of the reference luminance image.
Further, the correction module is also used for:
detecting and acquiring the contour of the perspective-transformed reference luminance image; and
performing correction processing based on the contour of the perspective-transformed reference luminance image.
Further, the correction module is also used for:
performing distortion correction on the part of the perspective-transformed reference luminance image falling within the minimum bounding rectangle to obtain a corrected image for display in the target display area.
Further, the correction module is also used for:
correcting the contour of the perspective-transformed reference luminance image by means of modification factors.
In addition, an embodiment of the present invention further provides a computer-readable storage medium on which a display-image correction program is stored; when the program is run by a processor, the steps of the display-image correction method described above are implemented, and are not repeated here.
Compared with the prior art, the present invention proposes a correction method and device for displayed images and a computer-readable storage medium. The method includes: extracting a reference luminance image of an image to be detected and obtaining its binarized image; extracting feature points of the reference luminance image and performing perspective transformation on it based on the feature points to obtain a perspective-transformed reference luminance image; and performing correction processing on the perspective-transformed reference luminance image to obtain a corrected image for display in the target display area. The image to be detected is thereby subjected to binarization, perspective transformation, and correction, yielding an image that can completely fill the display panel and improving the display effect.
It should be noted that, as used herein, the terms "comprise", "include", or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or system that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or system. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article, or system that includes it.
The serial numbers of the above embodiments of the present invention are for description only and do not represent the merits of the embodiments.
From the description of the above embodiments, those skilled in the art can clearly understand that the methods of the above embodiments can be implemented by software plus a necessary general-purpose hardware platform, and of course also by hardware, but in many cases the former is the better implementation. Based on this understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, can be embodied in the form of a software product; the computer software product is stored in a readable storage medium as described above (such as ROM/RAM, magnetic disk, or optical disc) and includes several instructions for causing a terminal device to execute the methods described in the various embodiments of the present invention.
The above are only preferred embodiments of the present invention and do not thereby limit its patent scope; any equivalent structural or process transformation made using the contents of the description and drawings of the present invention, applied directly or indirectly in other related technical fields, is likewise included within the patent protection scope of the present invention.

Claims (20)

  1. A correction method for displayed images, wherein the method comprises:
    extracting a reference luminance image from an image to be detected;
    extracting feature points of the reference luminance image;
    performing perspective transformation on the reference luminance image based on the feature points; and
    performing correction processing on the perspective-transformed reference luminance image to obtain a corresponding corrected image.
  2. The method according to claim 1, wherein performing perspective transformation on the reference luminance image based on the feature points comprises:
    detecting at least four corner points in the reference luminance image, and filtering the at least four corner points according to a preset procedure to obtain at least four target corner points;
    determining corrected coordinate values of at least four absolute corner points according to the initial coordinate values of the at least four target corner points and their pixel values in the reference luminance image; and
    determining a perspective transformation matrix based on the initial coordinate values of the at least four target corner points and the corrected coordinate values of the at least four absolute corner points, and transforming the reference luminance image based on the perspective transformation matrix to obtain a perspective-transformed reference luminance image.
  3. The method according to claim 2, wherein determining a perspective transformation matrix based on the initial coordinate values of the at least four target corner points and the corrected coordinate values of the at least four absolute corner points, and transforming the reference luminance image based on the perspective transformation matrix to obtain a perspective-transformed reference luminance image, comprises:
    extracting a plurality of initial sub-regions from the reference luminance image based on the at least four target corner points;
    extracting a plurality of basic sub-regions from the region enclosed by the at least four absolute corner points, wherein the basic sub-regions correspond to the initial sub-regions;
    determining first characteristic coordinate values of each initial sub-region based on the initial coordinate values of the at least four target corner points, and determining second characteristic coordinate values of each basic sub-region based on the corrected coordinate values of the at least four absolute corner points;
    substituting each first characteristic coordinate value and its corresponding second characteristic coordinate value into the perspective transformation formula to determine a plurality of partition perspective transformation matrices; and
    transforming the corresponding initial sub-regions based on the respective partition perspective transformation matrices, the perspective-transformed reference luminance image being obtained after all initial sub-regions have been transformed.
  4. The method according to claim 2, wherein determining corrected coordinate values of at least four absolute corner points according to the initial coordinate values of the at least four target corner points and their pixel values in the reference luminance image comprises:
    acquiring the pixel value of each target corner point in the reference luminance image, and marking the coordinate values of target corner points whose pixel value equals a preset value as initial coordinate values;
    determining, from the initial coordinate values, the maximum and minimum values in the x-axis direction and the maximum and minimum values in the y-axis direction; and
    determining the corrected coordinate values of at least four absolute corner points from the maximum and minimum values in the x-axis direction and the maximum and minimum values in the y-axis direction.
  5. The method according to claim 1, wherein extracting the reference luminance image of the image to be detected comprises:
    extracting, from the image to be detected, a reference luminance image displayed at a preset gray level.
  6. The method according to claim 5, wherein after extracting the reference luminance image of the image to be detected, the method further comprises:
    marking the pixel value of each point in the reference luminance image as a preset pixel value or 0 to obtain a binarized image of the reference luminance image.
  7. The method according to any one of claims 1-6, wherein performing correction processing on the perspective-transformed reference luminance image comprises:
    detecting and acquiring the contour of the perspective-transformed reference luminance image; and
    performing correction processing based on the contour of the perspective-transformed reference luminance image.
  8. The method according to claim 7, wherein performing correction processing based on the contour of the perspective-transformed reference luminance image comprises:
    performing distortion correction on the part of the perspective-transformed reference luminance image falling within the minimum bounding rectangle to obtain a corrected image for display in the target display area.
  9. The method according to claim 7, wherein performing correction processing based on the contour of the perspective-transformed reference luminance image comprises:
    correcting the contour of the perspective-transformed reference luminance image by means of modification factors.
  10. A correction device for displayed images, wherein the correction device comprises a processor, a memory, and a display-image correction program stored in the memory, the program being run by the processor to execute:
    extracting a reference luminance image from an image to be detected;
    extracting feature points of the reference luminance image;
    performing perspective transformation on the reference luminance image based on the feature points; and
    performing correction processing on the perspective-transformed reference luminance image to obtain a corresponding corrected image.
  11. The correction device according to claim 10, wherein, when performing perspective transformation on the reference luminance image based on the feature points, the processor is configured to execute:
    detecting at least four corner points in the reference luminance image, and filtering the at least four corner points according to a preset procedure to obtain at least four target corner points;
    determining corrected coordinate values of at least four absolute corner points according to the initial coordinate values of the at least four target corner points and their pixel values in the reference luminance image; and
    determining a perspective transformation matrix based on the initial coordinate values of the at least four target corner points and the corrected coordinate values of the at least four absolute corner points, and transforming the reference luminance image based on the perspective transformation matrix to obtain a perspective-transformed reference luminance image.
  12. The correction device according to claim 11, wherein, when determining a perspective transformation matrix based on the initial coordinate values of the at least four target corner points and the corrected coordinate values of the at least four absolute corner points and transforming the reference luminance image based on the perspective transformation matrix to obtain a perspective-transformed reference luminance image, the processor is configured to execute:
    extracting a plurality of initial sub-regions from the reference luminance image based on the at least four target corner points;
    extracting a plurality of basic sub-regions from the region enclosed by the at least four absolute corner points, wherein the basic sub-regions correspond to the initial sub-regions;
    determining first characteristic coordinate values of each initial sub-region based on the initial coordinate values of the at least four target corner points, and determining second characteristic coordinate values of each basic sub-region based on the corrected coordinate values of the at least four absolute corner points;
    substituting each first characteristic coordinate value and its corresponding second characteristic coordinate value into the perspective transformation formula to determine a plurality of partition perspective transformation matrices; and
    transforming the corresponding initial sub-regions based on the respective partition perspective transformation matrices, the perspective-transformed reference luminance image being obtained after all initial sub-regions have been transformed.
  13. The correction device according to claim 11, wherein, when determining corrected coordinate values of at least four absolute corner points according to the initial coordinate values of the at least four target corner points and their pixel values in the reference luminance image, the processor is configured to execute:
    acquiring the pixel value of each target corner point in the reference luminance image, and marking the coordinate values of target corner points whose pixel value equals a preset value as initial coordinate values;
    determining, from the initial coordinate values, the maximum and minimum values in the x-axis direction and the maximum and minimum values in the y-axis direction; and
    determining the corrected coordinate values of at least four absolute corner points from the maximum and minimum values in the x-axis direction and the maximum and minimum values in the y-axis direction.
  14. The correction device according to claim 10, wherein, when extracting the reference luminance image of the image to be detected, the processor is configured to execute:
    extracting, from the image to be detected, a reference luminance image displayed at a preset gray level.
  15. The correction device according to claim 14, wherein, after extracting the reference luminance image of the image to be detected, the processor is further configured to execute:
    marking the pixel value of each point in the reference luminance image as a preset pixel value or 0 to obtain a binarized image of the reference luminance image.
  16. The correction device according to any one of claims 10-15, wherein, when performing correction processing on the perspective-transformed reference luminance image, the processor is configured to execute:
    detecting and acquiring the contour of the perspective-transformed reference luminance image; and
    performing correction processing based on the contour of the perspective-transformed reference luminance image.
  17. The correction device according to claim 16, wherein, when performing correction processing based on the contour of the perspective-transformed reference luminance image, the processor is configured to execute:
    performing distortion correction on the part of the perspective-transformed reference luminance image falling within the minimum bounding rectangle to obtain a corrected image for display in the target display area.
  18. The correction device according to claim 16, wherein, when performing correction processing based on the contour of the perspective-transformed reference luminance image, the processor is configured to execute:
    correcting the contour of the perspective-transformed reference luminance image by means of modification factors.
  19. A computer-readable storage medium, wherein a display-image correction program is stored on the computer-readable storage medium, the program being run by a processor to execute:
    extracting a reference luminance image from an image to be detected;
    extracting feature points of the reference luminance image;
    performing perspective transformation on the reference luminance image based on the feature points; and
    performing correction processing on the perspective-transformed reference luminance image to obtain a corresponding corrected image.
  20. The computer-readable storage medium according to claim 19, wherein executing the step of performing perspective transformation on the reference luminance image based on the feature points comprises:
    detecting at least four corner points in the reference luminance image, and filtering the at least four corner points according to a preset procedure to obtain at least four target corner points;
    determining corrected coordinate values of at least four absolute corner points according to the initial coordinate values of the at least four target corner points and their pixel values in the reference luminance image; and
    determining a perspective transformation matrix based on the initial coordinate values of the at least four target corner points and the corrected coordinate values of the at least four absolute corner points, and transforming the reference luminance image based on the perspective transformation matrix to obtain a perspective-transformed reference luminance image.
PCT/CN2021/124839 2020-10-27 2021-10-20 Correction method and device for displayed images, and computer-readable storage medium WO2022089263A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011167154.1 2020-10-27
CN202011167154.1A CN112308794A (zh) 2020-10-27 Correction method and device for displayed images, and computer-readable storage medium

Publications (1)

Publication Number Publication Date
WO2022089263A1 true WO2022089263A1 (zh) 2022-05-05

Family

ID=74331105

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/124839 WO2022089263A1 (zh) 2021-10-20 Correction method and device for displayed images, and computer-readable storage medium

Country Status (2)

Country Link
CN (1) CN112308794A (zh)
WO (1) WO2022089263A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117115276A (zh) * 2023-01-12 2023-11-24 荣耀终端有限公司 Picture processing method, device, and storage medium

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112308794A (zh) * 2020-10-27 2021-02-02 深圳Tcl数字技术有限公司 Correction method and device for displayed images, and computer-readable storage medium
CN113539162B (zh) * 2021-07-02 2024-05-10 深圳精智达技术股份有限公司 Image capturing method and apparatus for a display panel
CN114445825A (zh) * 2022-02-07 2022-05-06 北京百度网讯科技有限公司 Text detection method, apparatus, electronic device, and storage medium
CN114927090B (zh) * 2022-05-30 2023-11-28 卡莱特云科技股份有限公司 Method, apparatus, and system for ordering lamp points in a specially-shaped LED display screen

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106203433A (zh) * 2016-07-13 2016-12-07 西安电子科技大学 Method for automatically extracting and perspective-correcting the license plate position in vehicle surveillance images
CN107169494A (zh) * 2017-06-01 2017-09-15 中国人民解放军国防科学技术大学 License plate image segmentation and correction method based on a handheld terminal
US20180130241A1 (en) * 2016-11-08 2018-05-10 Adobe Systems Incorporated Image Modification Using Detected Symmetry
CN110060200A (zh) * 2019-03-18 2019-07-26 阿里巴巴集团控股有限公司 Image perspective transformation method, apparatus, and device
CN110097054A (zh) * 2019-04-29 2019-08-06 济南浪潮高新科技投资发展有限公司 Text image deskewing method based on image projective transformation
CN112308794A (zh) * 2020-10-27 2021-02-02 深圳Tcl数字技术有限公司 Display image correction method, device, and computer-readable storage medium



Also Published As

Publication number Publication date
CN112308794A (zh) 2021-02-02

Similar Documents

Publication Publication Date Title
WO2022089263A1 (zh) Display image correction method, device, and computer-readable storage medium
CN107507558B (zh) Correction method for an LED display screen
CN111563889B (zh) Computer-vision-based Mura defect detection method for liquid crystal screens
CN108760767B (zh) Machine-vision-based defect detection method for large-size liquid crystal screens
EP1638345A1 (en) Method for calculating display characteristic correction data, program for calculating display characteristic correction data, and device for calculating display characteristic correction data
WO2022089082A1 (zh) Display image adjustment method, terminal device, and computer-readable storage medium
US8310499B2 (en) Balancing luminance disparity in a display by multiple projectors
JP2006121713A (ja) Contrast enhancement
CN112669394A (zh) Automatic calibration method for a visual inspection system
US20160343143A1 (en) Edge detection apparatus, edge detection method, and computer readable medium
CN115170669A (zh) Recognition and positioning method and system based on edge feature point set registration, and storage medium
CN113286135A (zh) Image correction method and device
CN114820417A (zh) Image anomaly detection method and apparatus, terminal device, and readable storage medium
CN112801947A (zh) Visual detection method for dead pixels of an LED display terminal
CN109191516B (zh) Rotation correction method and device for a structured light module, and readable storage medium
CN116912233B (zh) Defect detection method, apparatus, device, and storage medium based on a liquid crystal display
CN115423821A (zh) Segmentation method for LED screen seam-area images, and LED screen bright/dark line correction method
CN111256950A (zh) Unevenness correction data generation method and unevenness correction data generation system
CN114674826A (zh) Cloth-based visual inspection method and inspection system
JP2002328096A (ja) Program, method, and system for detecting crack defects formed in a structure
JP2003167529A (ja) Screen defect detection method and device, and program for screen defect detection
CN115953981A (zh) Lamp-point positioning method for special-shaped flat screens, and method for obtaining luminance information
CN113554672B (zh) Machine-vision-based camera pose detection method and system for air-tightness testing
CN115147389A (zh) Image processing method, device, and computer-readable storage medium
CN114359414A (zh) Lens contamination recognition method and apparatus, computer device, and storage medium

Legal Events

Date Code Title Description
121 EP: the EPO has been informed by WIPO that EP was designated in this application (Ref document number: 21884991; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
32PN EP: public notification in the EP bulletin as address of the addressee cannot be established (Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 16.08.2023))
122 EP: PCT application non-entry in European phase (Ref document number: 21884991; Country of ref document: EP; Kind code of ref document: A1)