CN116758163A - Optical information extraction method and device and spherical display screen correction method and device - Google Patents


Info

Publication number
CN116758163A
CN116758163A (application CN202311020202.8A; granted as CN116758163B)
Authority
CN
China
Prior art keywords
point
coordinates
row
module
screen
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202311020202.8A
Other languages
Chinese (zh)
Other versions
CN116758163B (en)
Inventor
苗静
郑喜凤
张曦
毛新越
曹慧
汪洋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Changchun Cedar Electronics Technology Co Ltd
Original Assignee
Changchun Cedar Electronics Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Changchun Cedar Electronics Technology Co Ltd filed Critical Changchun Cedar Electronics Technology Co Ltd
Priority to CN202311020202.8A priority Critical patent/CN116758163B/en
Publication of CN116758163A publication Critical patent/CN116758163A/en
Application granted granted Critical
Publication of CN116758163B publication Critical patent/CN116758163B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Control Of Indicators Other Than Cathode Ray Tubes (AREA)

Abstract

An optical information extraction method and device and a spherical display screen correction method and device, relating to LED display screen correction. Addressing the difficulty that prior-art correction techniques have with spherical LED screens, the invention provides the following scheme. The optical information extraction method comprises: removing dark noise from the picture; filtering; binarizing; extracting the contours of the light spots and obtaining the centroid coordinate of each contour; generating an image of the same resolution with all gray values 0, in which the pixel at each centroid coordinate is set to 255; obtaining the spacing between the light points in the box; acquiring the preset spherical-screen radius, pixel resolution, and number of rows of spliced boxes; searching for and arranging the centroid coordinates of all light points in the current box; and obtaining a centroid sequence in one-to-one correspondence with the row-column information of the light points in the box. Pixel values within a set range around each centroid in the picture are then extracted and summed to compute a correction coefficient matrix. The method is suitable for correcting spherical display screens.

Description

Optical information extraction method and device and spherical display screen correction method and device
Technical Field
The invention relates to LED display screen correction, and in particular to a correction method for spherical LED display screens.
Background
Correcting an LED display screen is important work: it addresses the consistency problems caused by brightness differences arising in LED production. LED display screens are widely used in commercial media, cultural performances, information delivery, news release, and many other settings thanks to their vivid color, wide dynamic range, high brightness, and long service life. In LED manufacturing, however, process factors can leave different LED elements with different brightness. When such elements are assembled into a display screen, the brightness uniformity of the whole screen suffers and display quality drops. The screen therefore needs correction to make the brightness and color of the individual LED elements consistent.
Correction technology for conventional LED display screens is relatively mature. The main approach is to photograph specific test images shown on the screen with a camera and then process and analyze the pictures. First, the luminance and chromaticity of every pixel on the screen are obtained by photographing one or more specific images. A light-point positioning algorithm then determines the position of each LED element and extracts its optical information, from which a compensation (correction) coefficient for each element is computed. Finally, the coefficients are sent to the screen's control system, which differentially adjusts the brightness and chromaticity of each LED element so that they are consistent across the whole screen.
However, as individual demands on LED displays continue to grow, an increasing variety of special-shaped displays is emerging, including spherical LED screens. Spherical LED screens are widely used in museums, science and technology centers, enterprise exhibition halls, and similar venues thanks to their unique appearance and visual effect. A spherical LED screen is typically composed of multiple rows of isosceles-trapezoid boxes, as shown in fig. 1, each row containing several boxes of the same size. The modules inside each box are likewise isosceles trapezoids, and several such modules combine into one isosceles-trapezoid box. The layout is mirrored about the equator line, and the topmost (and bottommost) layer is a circular spherical cap formed from isosceles triangles.
However, the shape of the spherical LED screen complicates the positioning and optical-information extraction of the LED pixels. Because an isosceles-trapezoid box has different numbers of LED light points in the lateral and longitudinal directions, conventional LED display correction algorithms apply poorly and position inaccurately, which degrades the correction result. A general, simple, and practical light-point positioning method for spherical LED screens built from isosceles trapezoids is therefore urgently needed.
For the correction of spherical LED screens, a new lamp positioning method can be considered. The method can accurately position the position of each pixel point according to the geometric characteristics of the spherical LED screen and the distribution rule of the optical information of the pixel points. By arranging specific position mark points on the spherical LED screen, the position information of the LED pixel points can be accurately determined by combining camera shooting and image processing algorithms. Meanwhile, the number and arrangement rule of the LED lamp points in each row can be calculated according to the geometric relationship of the spherical surface and the size parameters of each isosceles trapezoid box body, and then the positions of the LED pixel points are determined. Based on the positioning information, the spherical LED screen can be corrected, and consistency of brightness and chromaticity of each pixel point on the whole screen is realized.
In short, the LED display screen correction technology plays an important role in solving the problem of display consistency caused by the brightness difference of the LED. The traditional LED display screen correction technology is relatively mature, but certain difficulties exist for special-shaped display screens, particularly spherical LED screens. At present, a general, simple and easy lamp point positioning method suitable for a spherical LED screen needs to be developed so as to improve the correction effect and accuracy of the LED display screen. This will help to meet the needs of people for personalized LED displays and drive further developments in the LED display industry.
Disclosure of Invention
Aiming at the problem that, while conventional LED display screen correction is relatively mature, it struggles with special-shaped screens and spherical LED screens in particular, so that a general, simple, and feasible light-point positioning method for spherical LED screens is needed to improve correction quality and accuracy, the invention provides the following technical scheme:
the optical information extraction method is based on a three-primary-color picture P1 of a point by point or a point separated point of one box body in a first row of box bodies of a half-screen on an LED spherical screen, and comprises the following steps:
removing dark noise from the picture to obtain P2;
filtering the P2 to obtain P3;
a step of binarizing the P3 to obtain P4;
extracting the outlines of the light spots in the P4, and obtaining the barycenter coordinates of each outline;
a step of creating an image P5 whose gray values are all 0 and whose resolution equals that of P1, and setting to 255 the gray value of each pixel located at a centroid coordinate in P5, to obtain P6;
obtaining the distance between the lamp points in the box body according to the P6;
collecting preset data, wherein the preset data comprise the radius of a spherical screen, pixel resolution and the number of rows of spliced boxes;
a step of searching for and arranging the estimated coordinates of all light points in the current box according to the spacing and the preset data;
and obtaining a mass center sequence corresponding to the row and column information of the lamp points in the box body one by one.
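As a rough end-to-end illustration of these steps, the following numpy-only toy pipeline goes from a raw single-color picture to an ordered list of spot centroids. The detailed embodiments use opencv; here the thresholds, the 3x3 mean filter, and the synthetic image are invented purely for the sketch:

```python
import numpy as np

def extract_centroids(img, th1=8, th2=30):
    # Toy sketch of the claimed pipeline. The patent uses opencv's
    # threshold/blur/findContours; here everything is plain numpy, and
    # th1/th2 are illustrative values, not taken from the claims.
    p2 = np.where(img < th1, 0, img).astype(float)      # remove dark noise -> P2
    pad = np.pad(p2, 1)
    p3 = np.zeros_like(p2)                              # 3x3 mean filter -> P3
    for dy in range(3):
        for dx in range(3):
            p3 += pad[dy:dy + p2.shape[0], dx:dx + p2.shape[1]] / 9.0
    p4 = (p3 > th2).astype(np.uint8) * 255              # binarize -> P4
    seen = np.zeros(p4.shape, bool)                     # flood-fill the spots
    centroids = []
    for y0, x0 in zip(*np.nonzero(p4)):
        if seen[y0, x0]:
            continue
        stack, pts = [(y0, x0)], []
        seen[y0, x0] = True
        while stack:
            y, x = stack.pop()
            pts.append((y, x))
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if (0 <= ny < p4.shape[0] and 0 <= nx < p4.shape[1]
                        and p4[ny, nx] and not seen[ny, nx]):
                    seen[ny, nx] = True
                    stack.append((ny, nx))
        ys, xs = zip(*pts)
        centroids.append((sum(xs) / len(xs), sum(ys) / len(ys)))  # (x, y)
    return sorted(centroids)

# two bright 3x3 spots standing in for LED light points
img = np.zeros((20, 20), np.uint8)
img[4:7, 4:7] = 200
img[12:15, 13:16] = 200
cents = extract_centroids(img)
```

The centroid list here is only sorted by coordinate; the claimed method goes further and orders the centroids by the row-column layout of the box, as detailed in the embodiments.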
Further, in a preferred embodiment, in the estimating step, the coordinates of the light point in the first row and first column are collected as the estimation reference coordinates.
Further, in a preferred embodiment, all estimated coordinates on P6 are obtained by progressive scanning, with the estimation reference coordinates as the starting point.
Further, in a preferred embodiment, the centroid coordinates of all light points in the current box are searched for and arranged, and the centroid sequence in one-to-one correspondence with the light-point row-column information in the box is obtained as follows:
the pixel point with gray value 255 within a preset range around each estimated coordinate is taken as the centroid coordinate of the light point corresponding to that estimated coordinate.
Based on the same inventive concept, the invention also provides an optical information extraction device, based on a tri-chromatic picture P1, captured point by point or at point intervals, of one box in the first row of boxes of the upper half of an LED spherical screen, the device comprising:
a module for removing dark noise from the picture to obtain P2;
a module for filtering P2 to obtain P3;
a module for binarizing P3 to obtain P4;
a module for extracting the contours of the light spots in P4 and obtaining the centroid coordinates of each contour;
a module for creating an image P5 whose gray values are all 0 and whose resolution equals that of P1, and setting to 255 the gray value of each pixel located at a centroid coordinate in P5, to obtain P6;
a module for obtaining the light-point spacing in the box according to P6;
a module for acquiring preset data, the preset data comprising the spherical-screen radius, the pixel resolution, and the number of rows of spliced boxes;
an estimation module for searching for and arranging the estimated coordinates of all light points in the current box according to the spacing and the preset data;
and a module for obtaining a centroid sequence in one-to-one correspondence with the light-point row-column information in the box.
Based on the same inventive concept, the invention also provides a spherical display screen correction method, which comprises the following steps:
a collecting step of capturing, point by point or at point intervals, a tri-chromatic picture P1 of one box in the first row of boxes of the upper half of the LED spherical screen;
an extracting step of extracting, with the above optical information extraction method, the centroid position coordinates of the light point corresponding to each estimated coordinate in P1;
an integration step of obtaining the brightness at each centroid position coordinate;
a completing step of filling in, for positions left blank during the scan, the light points corresponding to the estimated coordinates;
a correcting step of repeating the collecting, extracting, integration, and completing steps to correct all boxes in the row where the box is located;
and repeating the correcting step to correct all boxes on the spherical screen.
Further, in a preferred embodiment, the brightness is obtained by summing all pixels within a preset range around the centroid position coordinate of the light point corresponding to each estimated coordinate.
Based on the same inventive concept, the invention also provides a spherical display screen correction device, which comprises:
the acquisition module, for capturing, point by point or at point intervals, a tri-chromatic picture P1 of one box in the first row of boxes of the upper half of the LED spherical screen;
the extraction module, for extracting, with the above optical information extraction device, the centroid position coordinates of the light point corresponding to each estimated coordinate in P1;
the integration module, for obtaining the brightness at each centroid position coordinate;
the completing module, for filling in, for positions left blank during the scan, the light points corresponding to the estimated coordinates;
the correction module, for correcting all boxes in the row where the box is located by repeating the functions of the acquisition, extraction, integration, and completing modules;
and the function of the correction module is repeated to correct all boxes on the spherical screen.
Based on the same inventive concept, the present invention also provides a computer storage medium storing a computer program, wherein when the program is read by a computer, the computer performs the optical information extraction method or the spherical display screen correction method.
Based on the same inventive concept, the invention also provides a computer, comprising a processor and a storage medium, wherein when the processor reads a computer program stored in the storage medium, the computer executes the optical information extraction method or the spherical display screen correction method.
Compared with the prior art, the technical scheme provided by the invention has the following advantages:
by combining the design thought of the isosceles trapezoid of the spherical screen, the lamp points in the spherical screen can be accurately positioned, and the brightness and chromaticity information of each lamp point can be accurately extracted, so that the correction of the spherical screen is realized, and the picture display uniformity and the human eye watching comfort level are improved.
The method is suitable for correcting the spherical display screen.
Drawings
FIG. 1 is a schematic view of a spherical screen formed by isosceles trapezoid boxes mentioned in the background art;
fig. 2 is a partial schematic view of the isosceles trapezoid in P6 mentioned in the eleventh embodiment;
FIG. 3 is a partial view of a completed two-dimensional matrix according to the eleventh embodiment;
fig. 4 is a partial schematic view of P1 captured for the single primary color blue (4×4 light points), according to the eleventh embodiment.
Detailed Description
In order to make the advantages and benefits of the technical solution provided by the present invention more apparent, the technical solution provided by the present invention will now be described in further detail with reference to the accompanying drawings, in which:
in a first embodiment, the present embodiment provides an optical information extraction method, based on a three primary color picture P1 of a point-by-point or point-separated type in a first row of cases on an LED spherical screen, where the method includes:
removing dark noise from the picture to obtain P2;
filtering the P2 to obtain P3;
a step of binarizing the P3 to obtain P4;
extracting the outlines of the light spots in the P4, and obtaining the barycenter coordinates of each outline;
a step of creating an image P5 whose gray values are all 0 and whose resolution equals that of P1, and setting to 255 the gray value of each pixel located at a centroid coordinate in P5, to obtain P6;
obtaining the distance between the lamp points in the box body according to the P6;
collecting preset data, wherein the preset data comprise the radius of a spherical screen, pixel resolution and the number of rows of spliced boxes;
a step of searching for and arranging the estimated coordinates of all light points in the current box according to the spacing and the preset data;
and obtaining a mass center sequence corresponding to the row and column information of the lamp points in the box body one by one.
In the second embodiment, the optical information extraction method according to the first embodiment is further limited, and in the estimating step, the lamp coordinates of the first row and the first column are collected as estimated reference coordinates.
In the third embodiment, the optical information extraction method provided in the first embodiment is further limited, and all the estimated coordinates on the P6 are scanned by a progressive scanning method with the estimated reference coordinates as a starting point.
In a fourth embodiment, further defining the optical information extraction method of the first embodiment, the centroid coordinates of all light points in the current box are searched for and arranged, and the centroid sequence in one-to-one correspondence with the light-point row-column information in the box is obtained as follows:
the pixel point with gray value 255 within a preset range around each estimated coordinate is taken as the centroid coordinate of the light point corresponding to that estimated coordinate.
In a fifth embodiment, the present embodiment provides an optical information extraction device, based on a tri-chromatic picture P1, captured point by point or at point intervals, of one box in the first row of boxes of the upper half of an LED spherical screen, the device comprising:
a module for removing dark noise from the picture to obtain P2;
a module for filtering P2 to obtain P3;
a module for binarizing P3 to obtain P4;
a module for extracting the contours of the light spots in P4 and obtaining the centroid coordinates of each contour;
a module for creating an image P5 whose gray values are all 0 and whose resolution equals that of P1, and setting to 255 the gray value of each pixel located at a centroid coordinate in P5, to obtain P6;
a module for obtaining the light-point spacing in the box according to P6;
a module for acquiring preset data, the preset data comprising the spherical-screen radius, the pixel resolution, and the number of rows of spliced boxes;
an estimation module for searching for and arranging the estimated coordinates of all light points in the current box according to the spacing and the preset data;
and a module for obtaining a centroid sequence in one-to-one correspondence with the light-point row-column information in the box.
The sixth embodiment provides a spherical display screen correction method, which includes:
a collecting step of capturing, point by point or at point intervals, a tri-chromatic picture P1 of one box in the first row of boxes of the upper half of the LED spherical screen;
an extracting step of extracting, with the optical information extraction method of the first embodiment, the centroid position coordinates of the light point corresponding to each estimated coordinate in P1;
an integration step of obtaining the brightness at each centroid position coordinate;
a completing step of filling in, for positions left blank during the scan, the light points corresponding to the estimated coordinates;
a correcting step of repeating the collecting, extracting, integration, and completing steps to correct all boxes in the row where the box is located;
and repeating the correcting step to correct all boxes on the spherical screen.
Specifically, the method comprises the following steps:
the camera is used for shooting three primary colors of pictures (BMP or PNG formats) on one box body on the first row of the half screen on the spherical screen in a point-by-point (or point-by-point mode, a point-by-point method is shown in patent number ZL201010613815.9, LED display screen pixel lighting chromaticity information acquisition method, and the like). It is necessary here to ensure that the box imaging is all in the camera viewfinder. The number of dots is mode, and when mode=1, the dots are lighted one by one.
LED pixel optical information is then extracted from the captured pictures.
The optical information extraction method comprises the following steps:
s1: and (3) denoising the picture shot in the step (1) by using a threshold function in opencv for one picture P1 with a certain color, thereby removing the influence of dark noise. At this time, the picture is P2. All numbers smaller than the denoising threshold th1 in P2 are 0, and the rest values remain unchanged.
S2: p2 is filtered by using a blur function (smooth filter function) in opencv, so that the imaging 'decentration' problem caused by reasons such as lens distortion, angle distortion and the like is eliminated. The picture is P3.
S3: and (4) performing binarization processing on the P3 again by using a threshold function in opencv to obtain P4. At this time, all pixel values larger than the denoising threshold th2 in P4 are 255, and the rest are 0.
S4: spot contours in P4 were extracted using the findContours function in opencv and the centroid and radius of each contour were obtained. The centroid coordinates are stored into a centroid sequence Pos. Pos is a one-dimensional sequence and is arranged out of order.
S5: a pure black image P5 (gray values are all 0) is newly created, and the resolution of the image is the same as that of the picture format P1. In P5, the gray value of the pixel point at the coordinate position in the Pos sequence is set to 255, and an image P6 is obtained.
S6: man-made observation of the sitting position of the first row and first column pixels in the upper left corner of P6A scalar value denoted pointl0= [ x0, y0]]. The coordinate values of the first row and the second column pixels in the upper left corner in P6 are artificially observed and are marked as pointLt1= [ x1, y1 ]]. Artificially observing the coordinate value of the first column pixel of the second row at the upper left corner in P6, and recording as pointLt2= [ x2, y2]. Since the LEDs are all arranged at equal intervals, the theoretical lateral spacing dis_x=x1-x 0 between any two light points and the theoretical longitudinal spacing dis_y=y2-y 0 between any two light points. If the above three points have blind spots, the horizontal and vertical distances can be calculated according to the distances between the adjacent two points at the rest positions, and the position of the point LT0 of the upper left corner point can be calculated. The radius of the spherical screen is determined, the pixel resolution is determined, and after the number of the spliced lines is determined, the number of pixels in each line in each box is determined and known. Herein denoted as N ij Where i represents the ith bin, typically 4.ltoreq.i, and j represents the jth row in the bin. J is more than or equal to 1 and less than or equal to M. Where M is the total number of LED pixels in the box. Isosceles trapezoid base angle degree>45°
S7: and obtaining the barycenter coordinates of each lamp point in a progressive scanning mode. The horizontal scan is started with pointlt0= [ x0, y0] in P6. After scanning to the tail end, returning to the starting point of the head end of the next row.
S7.1: the first pixel point coordinate in the first row C1 of the LED spot coordinate arrangement sequence is c1_1= [ c1_1.X, c1_1.Y ] = [ x0, y0], m in the transverse direction c1_1.x-dis_x/2-1 to c1_1.x+dis_x/2+1, n traverses the pixel value Vnm in the P6 picture in the longitudinal direction c1_1.y-dis_y/2-1 to c1.y+dis_y/2+1, if Vnm is equal to 255, the current c1_1 is updated to [ m, n ], vnm is equal to 0 and then the cycle is exited, if all pixel values Vnm in the above range are not equal to 255, the point is a point of care, default coordinates c1_1=point0.
S7.2: the x direction moves to the right for scanning, and the moving step length is dis_x. Starting from the new center c1_2= [ c1_1.x+dis_x, c1_1.y ] after the movement, m traversing the pixel values Vnm in the P6 picture in the longitudinal c1_ 1.y-dis_y/2-1-c1_y+dis_y/2+1+dis_y/2+1 in the lateral c1_1.x+dis_x-dis_x/2-c1_1_x+dis_x/2+1, if Vnm equals 255, updating the current c1_2 to [ m, n ], and letting Vnm equal 0 and then exiting the loop, if all pixel values Vnm in the above ranges do not equal 255, this is the point of No. C, the coordinates c1_2= [ c1_1.x+dis_x, c1_y ], where coordinates C1_1 y ] are found by default, the second LED point centroid coordinates. And circularly executing N11/mode times according to the transverse moving dis_x method, and finding all lamp point centroid coordinates including blind points in the first row C1.
S7.3: due to the isosceles trapezoid design, after all coordinates of the first row are found, the coordinates are shifted to the left by delta x transversely by taking C1_1 as a starting point, dis_y is shifted downwards longitudinally, namely m and N return to the [ C1_1. X-delta x, C1_1.Y+dis_y ] coordinates, and the coordinates of the centroid of all the lamp points of the C2 row are found at the moment by traversing the coordinates of N12/mode in an S7.1 way.
S7.4: the starting point of each row extends to the left of the starting point of the upper row by a displacement delta x in the transverse direction, the longitudinal direction is displaced from the lower of the starting point of the upper row by dis_y, and the Nij/mode is traversed for times according to the transverse displacement dis_x in the row. Together M/mode is performed. All rows in the box, and all spot centroids of each row, are found. Denoted as C1-CM/mode. Wherein the number of coordinates on each row may not be equal. Wherein Deltax < dis_y
And calculating a point-by-point correction coefficient matrix Coe according to the extracted pixel optical information.
S1: according to the extracted C1 centroid coordinate sequence, the original image is subjected to brightness extraction in P1, and the method comprises the following steps: and taking each centroid coordinate C1_t in C1 as a center, calculating the sum of all pixels in the coordinate range of the transverse C1_ t.x-radius to C1_ t.x +radius and the longitudinal C1_ t.y-daradius to C1_ t.y +daradius in P1 to obtain the brightness value of the LED lamp point. The C1-CM/mode centroid sequences are all calculated according to the method, so that M/mode one-dimensional relative brightness sequences LP 1-LPM/mode are obtained. radius is typically between 4 and 8.
S2: and supplementing the number of the lamps in each row according to the rows to form a two-dimensional brightness matrix with the same number of the lamps in each row. The number of the bottom edge lamp points under the isosceles trapezoid is known to be at most N 1M Number of compensation points per line is (N) 1M -N 1j ) The number of the mode is M/mode row in total. According to the hardware control mode, the brightness value of the pixel point position of each row of the built-in space is compensated to be a fixed value (which can be 0). This two-dimensional luminance matrix is now denoted as Lum, which has M/mode rows, N 1M Column/mode.
S3: In point-separation mode mode, each single primary color requires mode pictures; repeating the S1 and S2 methods above extracts mode two-dimensional matrices Lum in total. These are combined into the point-by-point luminance matrix Lumz, which has M rows with the same number of columns, N_1M, in each row.
S4: calculating a correction coefficient matrix according to Lumz, wherein calculating a point-by-point correction matrix Coe through a brightness matrix is a common method in the industry, and will not be described herein.
And uploading the point-by-point correction coefficient matrix Coe to a control system to finish the correction of the box body.
The positioning information from step 2 is reused to position and correct the remaining boxes in the same row, completing the correction of all boxes in that row.
Steps 1 to 5 are repeated to correct the boxes of the remaining rows.
In the above scheme, in step 2 the centroid coordinates of each light point can either be computed independently for every box, or the centroid coordinate sequences C1 to CM/mode can be reused to position the boxes of the same row. Reusing the centroid coordinates across same-sized boxes in one row presupposes that the position of every box in that row is fixed during photographing and that the camera position is fixed.
In the above scheme, correction can be performed box by box, or several boxes in an area can be captured at once; owing to camera-resolution limits, multi-box capture generally uses the point-separation mode, with mode typically between 3 and 8.
In the above scheme, th1 < th2; typically, th1 is between 5 and 10, and th2 between 10 and 50.
The above scheme assumes that the horizontal stitching error of different modules is at most dis_x/2 and the vertical stitching error at most dis_y/2; if the errors exceed these bounds, the traversal ranges of m and n in step S7 must be enlarged accordingly.
In the above scheme, zeroing each found Vnm equal to 255 (where n is the n-th row and m the m-th column in picture P6) removes centroid coordinates that have already been matched, avoiding the cross-row misidentification that module-stitching height differences would otherwise cause when positioning the light points of the next row.
For the circular spherical cap, a regular polyhedron formed by stitched triangles is often used; each triangle can be corrected independently, for example by the method described in patent application 202310171276.5 ("special-shaped flat screen lamp point positioning method and brightness information obtaining method"), which is not repeated here.
A seventh embodiment further defines the spherical display screen correction method of the sixth embodiment: the brightness is obtained by summing all pixels within a preset range around the centroid position coordinate of the lamp point corresponding to each estimated coordinate.
An eighth embodiment provides a spherical display screen correction device, comprising:
an acquisition module for acquiring a point-by-point or point-skipping three-primary-color picture P1 of one box in the first row of boxes of the upper half of the LED spherical screen;
an extraction module for extracting the centroid position coordinates of the lamp point corresponding to each estimated coordinate in P1, using the optical information extraction device provided in the fifth embodiment;
an integration module for obtaining the brightness at each centroid position coordinate;
a completion module for filling in the lamp point corresponding to an estimated coordinate when a blank is encountered during scanning;
a correction module for correcting all the boxes in the row where the box is located by repeating the functions of the acquisition module, the extraction module, the integration module, and the completion module;
and the function of the correction module is repeated to correct all the boxes on the spherical screen.
A ninth embodiment provides a computer storage medium storing a computer program which, when read by a computer, performs the method provided in any one of embodiments one to four and six to seven.
A tenth embodiment provides a computer comprising a processor and a storage medium; the computer executes the method provided in any one of embodiments one to four and six to seven when the processor reads a computer program stored in the storage medium.
An eleventh embodiment, described with reference to figs. 2 to 4, provides a specific example of the method provided in the sixth embodiment, as follows:
the spherical screen with the radius of 5 meters adopts a design of 4 layers of isosceles trapezoids, and the upper hemisphere and the lower hemisphere are axisymmetric by taking the equator line. The pixel pitch was 1.50mm. After the radius of the sphere and the distance between the pixel points are determined, the resolution of the pixels in each row of the box body can be determined when the lamp panel is designed.
1. A camera is used to take a point-by-point three-primary-color picture of one of the boxes in the first row on the equator of the spherical screen, i.e. mode = 1 (BMP, PNG, or similar format). The box must image entirely within the camera viewfinder. The trapezoid box has 424 pixels on its upper base (first row), 436 pixels on its lower base (last row), and 408 rows in total. Rows 1-68 have 424 LED pixels per row, rows 69-170 have 428, rows 171-272 have 432, and rows 273-408 have 436.
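The row counts quoted above can be tabulated and cross-checked directly (values taken from this embodiment; the variable name is illustrative):

```python
# Per-row LED pixel counts for the trapezoid box described above.
row_widths = ([424] * 68             # rows 1-68
              + [428] * (170 - 68)   # rows 69-170
              + [432] * (272 - 170)  # rows 171-272
              + [436] * (408 - 272)) # rows 273-408
```

Summing the list gives the total LED pixel count of the box, and its length matches the 408 rows stated above.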
2. LED pixel optical information extraction is performed on the captured picture.
The optical information extraction method comprises the following steps:
s1: and (3) denoising the picture shot in the step (1) by using a threshold function in opencv aiming at the picture P1 acquired by a certain primary color correction device, so as to remove the influence of dark noise. At this time, the picture is P2. All numbers smaller than the denoising threshold th1 in P2 are 0, and the rest values remain unchanged. Here thr1=5.
S2: p2 is filtered by using a blur function (smooth filter function) in opencv, so that the imaging 'decentration' problem caused by reasons such as lens distortion, angle distortion and the like is eliminated. The picture is P3.
S3: and (4) performing binarization processing on the P3 again by using a threshold function in opencv to obtain P4. At this time, all pixel values larger than the denoising threshold th2 in P4 are 255, and the rest are 0. Here th2=20.
S4: spot contours in P4 were extracted using the findContours function in opencv and the centroid and radius of each contour were obtained. The centroid coordinates are stored into a centroid sequence Pos. Pos is a one-dimensional sequence and is arranged out of order.
S5: a pure black image P5 (gray values are all 0) is newly created, and the resolution of the image is the same as that of the picture format P1. In P5, the gray value of the pixel point at the coordinate position in the Pos sequence is set to 255, and an image P6 is obtained.
S6: artificially observing the coordinate value of the first row and first column pixels in the upper left corner in P6, and marking as pointLt0= [ x0, y0]. The coordinate values of the first row and the second column pixels in the upper left corner in P6 are artificially observed and are marked as pointLt1= [ x1, y1 ]]. Artificially observing the coordinate value of the first column pixel of the second row at the upper left corner in P6, and recording as pointLt2= [ x2, y2]. Since the LEDs are all arranged at equal intervals, the theoretical lateral spacing dis_x=x1-x0=13 between any two lamps and the theoretical longitudinal spacing dis_y=y2-y0=14 between any two lamps. If the above three points have blind spots, the horizontal and vertical distances can be calculated according to the distances between the adjacent two points at the rest positions, and the position of the point LT0 of the upper left corner point can be calculated. The radius of the spherical screen is determined, the pixel resolution is determined, and after the number of the spliced lines is determined, the number of pixels in each line in each box is determined and known. Herein denoted as N ij Where i represents the ith bin, typically 4.ltoreq.i, and j represents the jth row in the bin. J is more than or equal to 1 and less than or equal to M. Where M is the total number of LED pixels in the box. Isosceles trapezoid base angle degree>45 °, m=408, i=4 in this embodiment.
S7: and obtaining the barycenter coordinates of each lamp point in a progressive scanning mode. The horizontal scan is started with pointlt0= [ x0, y0] in P6. After scanning to the tail end, returning to the starting point of the head end of the next row.
S7.1: the first pixel point coordinate in the first row C1 of the LED spot coordinate arrangement sequence is c1_1= [ c1_1.X, c1_1.Y ] = [ x0, y0], m in the transverse direction c1_1.x-dis_x/2-1 to c1_1.x+dis_x/2+1, n traverses the pixel value Vnm in the P6 picture in the longitudinal direction c1_1.y-dis_y/2-1 to c1.y+dis_y/2+1, if Vnm is equal to 255, the current c1_1 is updated to [ m, n ], vnm is equal to 0 and then the cycle is exited, if all pixel values Vnm in the above range are not equal to 255, the point is a point of care, default coordinates c1_1=point0.
S7.2: the x direction moves to the right for scanning, and the moving step length is dis_x. To move the new center c1_2= [ c1_1.x+di ]s_x,C1_1.y]As a starting point, m traverses pixel values Vnm in P6 picture in the range of horizontal c1_1.x+dis_x-dis_x/2-1 to c1_1.x+dis_x+dis_x/2+1, n in the longitudinal c1_1.y-dis_y/2-1 to c1_1.y+dis_y/2+1, if Vnm equals 255, the current c1_2 is updated to [ m, n]Let Vnm equal to 0 and then exit the loop, if all pixel values Vnm are not equal to 255 in the above range, then this is the blind spot, defaulting to this location coordinate C1_2= [ C1_1.x+dis_x, C1_1.y]At this point, the second LED point centroid coordinates are found. Loop execution of N according to the lateral shift dis_x method 11 Once, 424 times, then find all lamp spot centroid coordinates for the first row C1 including the blind spot.
S7.3: due to the isosceles trapezoid design, when the first row coordinates are all found, the first row coordinates are shifted laterally to the left by Δx with C1_1 as the starting point, and the longitudinal downward shift dis_y is performed, i.e. m, n returns to [ C1_1.X- Δx, C1_1y+dis_y]At the coordinates, traversing N in S7.1 mode 12 Once, 424 times, at this point all lamp point centroid coordinates of the C2 row are found.
3. S7.4: each row of starting points continues to deviate by delta x from the left of the starting point of the upper row in the transverse direction, deviates by dis_y from the lower of the starting point of the upper row in the longitudinal direction, and traverses N according to the transverse deviation dis_x in the row ij And twice. A total of 408 times. All rows in the box, and all spot centroids of each row, are found. Represented as C1 to C408. Lines C1-C68 with 424 centroid coordinates, lines C69-170 with 428 centroid coordinates, lines C171-C272 with 432 centroid coordinates, and lines C273-C408 with 436 centroid coordinates. Including the number of blind spots in the row. Wherein 0 is<=Δx<dis_y。
4. Calculate the point-by-point correction coefficient matrix Coe from the extracted pixel optical information.
S1: according to the extracted C1 centroid coordinate sequence, the original image is subjected to brightness extraction in P1, and the method comprises the following steps: and taking each centroid coordinate C1_t in C1 as a center, calculating the sum of all pixels in the coordinate range of the transverse C1_ t.x-radius to C1_ t.x +radius and the longitudinal C1_ t.y-daradius to C1_ t.y +daradius in P1 to obtain the brightness value of the LED lamp point. The centroid sequences C1-C408 are all calculated according to this method, thus obtaining 408 one-dimensional relative luminance sequences LP 1-LP 408. radius=4.
S2: line-by-line complement of each line light pointAnd the number of the light spots is equal to the number of the light spots in each row to form a two-dimensional brightness matrix. The number of the bottom edge lamp points under the isosceles trapezoid is 436 at most, and the number of the compensation points in each row is 436-N 1j According to the hardware control mode, the brightness value of the empty pixel point position is compensated to be a fixed value (which can be 0). This two-dimensional luminance matrix is now denoted Lum, which has 408 rows and 436 columns. Extracting the Lum matrix of the rest color components, calculating a correction coefficient matrix of each lamp point according to the Lum matrix of red, green and blue, wherein the calculation of the point-by-point correction matrix Coe through the brightness matrix is a common method in the industry, and is not repeated here.
5. Upload the point-by-point correction coefficient matrix Coe to the control system to complete the correction of the box.
6. Position and correct the remaining boxes in the same row using the positioning information from step 2, completing the correction of all boxes in that row.
7. Repeat steps 1-5 to correct the boxes of the remaining rows. For the other rows of boxes, M is different and N_ij is different.
In the above scheme, in step 2, the centroid coordinates of each lamp point may be computed independently each time, or the boxes in the same row may be located using the centroid coordinate sequences C1-CM already obtained. The premise of reusing the centroid coordinates for boxes of the same size in the same row is that the position of each box in that row is fixed during photographing and the camera position is fixed.
In the above scheme, th1 < th2; typically, th1 is between 5 and 10 and th2 is between 10 and 50.
The above scheme assumes that the horizontal stitching error between different modules is <= dis_x/2 and the vertical stitching error is <= dis_y/2. If the errors exceed these bounds, the m and n traversal ranges in step S7 must be enlarged accordingly.
In the scheme, setting each found Vnm equal to 255 to zero (where n is the row and m is the column in the P6 picture) removes a centroid coordinate that has already been located, avoiding misidentification across rows, caused by height differences in module stitching, when locating the lamp points of the next row.
For the cap at the top of the sphere, a regular polyhedron formed by stitched triangles is often used; each triangle can be corrected independently. The correction method may follow the method described in the patent document with application number 202310171276.5 (a method for locating lamp points of a special-shaped flat screen and obtaining brightness information), and is not repeated here.
The correction device can collect different color components for chromaticity correction, not only brightness correction; the lamp-point positioning method is the same.
The technical solution provided by the present invention has been described in further detail through several specific embodiments to highlight its advantages and benefits. The specific embodiments above are not intended to be limiting: any reasonable modification and improvement, reasonable combination of embodiments, equivalent substitution, and the like based on the spirit and principle of the present invention shall fall within the scope of protection of the present invention.

Claims (10)

1. An optical information extraction method based on a point-by-point or point-skipping three-primary-color picture P1 of one box in the first row of boxes of the upper half of an LED spherical screen, characterized by comprising the steps of:
removing dark noise from the picture to obtain P2;
filtering P2 to obtain P3;
binarizing P3 to obtain P4;
extracting the contours of the light spots in P4 and obtaining the centroid coordinates of each contour;
generating an image P5 with gray value 0 and the same resolution as P1, and setting the gray value of the pixels located at the centroid coordinates in P5 to 255 to obtain an image P6;
obtaining the lamp-point spacing in the box according to P6;
acquiring preset data, the preset data comprising the spherical screen radius, the pixel resolution, and the number of rows of stitched boxes;
finding and arranging the estimated coordinates of all lamp points in the current box according to the spacing and the preset data;
and obtaining a centroid sequence in one-to-one correspondence with the lamp-point row and column information in the box.
2. The optical information extraction method according to claim 1, characterized in that, in the estimation step, the lamp-point coordinates of the first row and first column are collected as the estimated reference coordinates.
3. The optical information extraction method according to claim 1, characterized in that all the estimated coordinates on P6 are scanned progressively, taking the estimated reference coordinates as the starting point.
4. The optical information extraction method according to claim 1, characterized in that the centroid coordinates of all lamp points in the current box are found and arranged, and the centroid sequence in one-to-one correspondence with the lamp-point row and column information in the box is obtained as follows:
taking the pixel with gray value 255 within a preset range around each estimated coordinate as the centroid coordinate of the lamp point corresponding to the current estimated coordinate.
5. An optical information extraction device based on a point-by-point or point-skipping three-primary-color picture P1 of one box in the first row of boxes of the upper half of an LED spherical screen, characterized in that the device comprises:
a module for removing dark noise from the picture to obtain P2;
a module for filtering P2 to obtain P3;
a module for binarizing P3 to obtain P4;
a module for extracting the contours of the light spots in P4 and obtaining the centroid coordinates of each contour;
a module for generating an image P5 with gray value 0 and the same resolution as P1, and setting the gray value of the pixels located at the centroid coordinates in P5 to 255 to obtain P6;
a module for obtaining the lamp-point spacing in the box according to P6;
a module for acquiring preset data, the preset data comprising the spherical screen radius, the pixel resolution, and the number of rows of stitched boxes;
an estimation module for finding and arranging the estimated coordinates of all lamp points in the current box according to the spacing and the preset data;
and a module for obtaining a centroid sequence in one-to-one correspondence with the lamp-point row and column information in the box.
6. A spherical display screen correction method, characterized in that the method comprises:
a collecting step of collecting a point-by-point or point-skipping three-primary-color picture P1 of one box in the first row of boxes of the upper half of the LED spherical screen;
an extraction step of extracting the centroid position coordinates of the lamp point corresponding to each estimated coordinate in P1, using the optical information extraction method of claim 1;
an integration step of obtaining the brightness at each centroid position coordinate;
a completion step of filling in the lamp point corresponding to an estimated coordinate when a blank is encountered during scanning;
a correction step of repeating the collecting step, the extraction step, the integration step, and the completion step to correct all the boxes in the row where the box is located;
and repeating the correcting step to correct all the boxes on the spherical screen.
7. The method of claim 6, wherein the brightness is obtained by summing all pixels within a preset range around the centroid position coordinate of each lamp point.
8. A spherical display screen correction device, characterized in that the device comprises:
an acquisition module for acquiring a point-by-point or point-skipping three-primary-color picture P1 of one box in the first row of boxes of the upper half of the LED spherical screen;
an extraction module for extracting the centroid position coordinates of the lamp point corresponding to each estimated coordinate in P1, using the optical information extraction device of claim 5;
an integration module for obtaining the brightness at each centroid position coordinate;
a completion module for filling in the lamp point corresponding to an estimated coordinate when a blank is encountered during scanning;
a correction module for correcting all the boxes in the row where the box is located by repeating the functions of the acquisition module, the extraction module, the integration module, and the completion module;
and the function of the correction module is repeated to correct all the boxes on the spherical screen.
9. A computer storage medium storing a computer program, characterized in that a computer performs the method according to any one of claims 1-4 and 6-7 when the program is read by the computer.
10. A computer comprising a processor and a storage medium, characterized in that the computer performs the method according to any one of claims 1-4 and 6-7 when the processor reads a computer program stored in the storage medium.
CN202311020202.8A 2023-08-15 2023-08-15 Optical information extraction method and device and spherical display screen correction method and device Active CN116758163B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311020202.8A CN116758163B (en) 2023-08-15 2023-08-15 Optical information extraction method and device and spherical display screen correction method and device


Publications (2)

Publication Number Publication Date
CN116758163A true CN116758163A (en) 2023-09-15
CN116758163B (en) 2023-11-14

Family

ID=87948009


Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
RU2011101531A (en) * 2011-01-18 2012-07-27 Федеральное Государственное Унитарное Предприятие "Государственный Рязанский Приборный Завод" (Ru) SPHEROPERIMETER
CN102723054A (en) * 2012-06-18 2012-10-10 西安电子科技大学 Online calibration system and online calibration method for ununiformity of LED (light-emitting diode) display screen
CN112185301A (en) * 2020-10-14 2021-01-05 西安诺瓦星云科技股份有限公司 Display device correction method and device and processor
CN112801947A (en) * 2021-01-14 2021-05-14 唐山学院 Visual detection method for dead pixel of LED display terminal
CN114927090A (en) * 2022-05-30 2022-08-19 卡莱特云科技股份有限公司 Method, device and system for sorting light points in special-shaped LED display screen
CN115035847A (en) * 2022-07-12 2022-09-09 上海旌璟信息科技有限公司 LED screen coefficient correction method, device and equipment
CN115862530A (en) * 2023-03-02 2023-03-28 长春希达电子技术有限公司 Correction method and device for special-shaped LED screen, electronic equipment and storage medium
CN115953981A (en) * 2023-02-28 2023-04-11 长春希达电子技术有限公司 Method for positioning special-shaped plane screen lamp points and method for acquiring brightness information
CN115995208A (en) * 2023-03-23 2023-04-21 长春希达电子技术有限公司 Lamp positioning method, correction method and device for spherical LED display screen
CN116312343A (en) * 2022-12-14 2023-06-23 电子科技大学 Point-to-point correction method for LED lamps in special-shaped curved surface screen


Non-Patent Citations (3)

Title
XINYUE MAO et al.: "Variation of LED display color affected by chromaticity and luminance of LED display primary colors", Mathematical Problems in Engineering
MAO Xinyue: "Research on pixel-level precise acquisition and correction technology for ultra-high-density LED display screens", China Doctoral Dissertations Full-text Database, Information Science and Technology Series
WANG Lin et al.: "A brief introduction to automatic correction methods for LED display screens", Electronic Engineering & Product World



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant