CN109544519B - Picture synthesis method based on detection device - Google Patents

Picture synthesis method based on detection device

Info

Publication number
CN109544519B
Authority
CN
China
Prior art keywords
detection
actual
image
edge
images
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811326692.3A
Other languages
Chinese (zh)
Other versions
CN109544519A (en)
Inventor
成伟华
张斌
柯美元
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shunde Polytechnic
Original Assignee
Shunde Polytechnic
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shunde Polytechnic
Priority to CN201811326692.3A
Publication of CN109544519A
Application granted
Publication of CN109544519B
Legal status: Active


Classifications

    • G PHYSICS; G06 COMPUTING; CALCULATING OR COUNTING; G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/0004 Industrial image inspection (under G06T7/00 Image analysis; G06T7/0002 Inspection of images, e.g. flaw detection)
    • G06T3/4038 Image mosaicing, e.g. composing plane images from plane sub-images (under G06T3/40 Scaling of whole images or parts thereof)
    • G06T5/70 Denoising; Smoothing (under G06T5/00 Image enhancement or restoration)
    • G06T7/13 Edge detection (under G06T7/10 Segmentation; Edge detection)
    • G06T7/90 Determination of colour characteristics
    • G06T2207/10004 Still image; Photographic image (under G06T2207/10 Image acquisition modality)
    • G06T2207/20024 Filtering details (under G06T2207/20 Special algorithmic details)
    • G06T2207/20221 Image fusion; Image merging (under G06T2207/20212 Image combination)
    • G06T2207/30108 Industrial image inspection (under G06T2207/30 Subject of image; Context of image processing)
    • G06T2207/30121 CRT, LCD or plasma display

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a picture synthesis method based on a detection device, comprising the following steps: constructing a plurality of detection images based on the size of the shooting area of a detection camera module; driving the OLED television to play the plurality of detection images in sequence through an image output module; traversing the OLED display screen with a camera detection module and sequentially acquiring a plurality of actual detection images; caching the plurality of actual detection images in a picture buffer according to a preset rule; and synthesizing the plurality of actual detection images to form a target detection image for identifying dead pixels of the OLED television screen. The method offers high picture synthesis speed and good synthesis quality.

Description

Picture synthesis method based on detection device
Technical Field
The invention relates to the field of visual detection, in particular to a picture synthesis method based on a detection device.
Background
In factory inspection of OLED televisions, the number of dead pixels on each television must be counted. To count dead pixels accurately, the OLED television needs to be inspected pixel by pixel.
Owing to cost constraints, a camera that covers every pixel of the OLED television in a single shot generally cannot be used for dead-pixel detection; instead, a lower-resolution camera traverses the screen to cover all of its pixels.
This camera traversal raises a picture synthesis (stitching) problem.
Disclosure of Invention
Accordingly, the invention provides a picture synthesis method based on a detection device, comprising the following steps:
constructing a plurality of detection images based on the size of the shooting area of a detection camera module;
driving the OLED television to play the plurality of detection images in sequence through an image output module;
traversing the OLED display screen with a camera detection module and sequentially acquiring a plurality of actual detection images;
caching the plurality of actual detection images in a picture buffer according to a preset rule;
and synthesizing the plurality of actual detection images to form a target detection image for identifying dead pixels of the OLED television screen.
The plurality of detection images includes three main body region detection images and three edge region detection images.
The plurality of actual detection images includes three main body region actual images and three edge region actual images, corresponding respectively to the three main body region detection images and the three edge region detection images.
Each main body region actual image is synthesized from a group of main body actual detection sub-images obtained by the camera detection module in a single traversal of the OLED display screen;
each edge region actual image is synthesized from a group of edge actual detection sub-images obtained likewise in a single traversal of the OLED display screen.
Synthesizing a main body region actual image from a group of main body actual detection sub-images obtained in a single traversal of the OLED display screen comprises the following steps:
performing grayscale processing on each main body actual detection sub-image;
binarizing (black-and-white processing) the grayscale-processed sub-image;
removing dead pixels from the binarized sub-image by image filtering;
performing edge detection on the filtered sub-image to obtain edge contour information;
synthesizing two adjacent main body actual detection sub-images based on the contour information;
and synthesizing the main body region actual image from the whole group of sub-images.
The three main body region actual images and the three edge region actual images are respectively:
first main body region actual detection image: the sub-region pixel colors are the values measured under a red background picture, and the edge pixel colors are the values measured in the non-emitting state;
second main body region actual detection image: the sub-region pixel colors are the values measured under a green background picture, and the edge pixel colors are the values measured in the non-emitting state;
third main body region actual detection image: the sub-region pixel colors are the values measured under a blue background picture, and the edge pixel colors are the values measured in the non-emitting state;
first edge region actual detection image: the sub-region pixel colors are the values measured in the non-emitting state, and the edge pixel colors are the values measured under a red background picture;
second edge region actual detection image: the sub-region pixel colors are the values measured in the non-emitting state, and the edge pixel colors are the values measured under a green background picture;
third edge region actual detection image: the sub-region pixel colors are the values measured in the non-emitting state, and the edge pixel colors are the values measured under a blue background picture.
Caching the plurality of actual detection images in the picture buffer according to a preset rule comprises:
storing the first main body region actual detection image, the first edge region actual detection image, the second main body region actual detection image, the second edge region actual detection image, the third main body region actual detection image and the third edge region actual detection image in the picture buffer in that order,
such that the memory address difference between the first bytes of any two adjacent actual detection images is the same.
Synthesizing the plurality of actual detection images into a target detection image for identifying dead pixels of the OLED television screen comprises:
using the fixed memory address difference between the first bytes of the actual detection images, reading the bytes of the edge pixel points from each edge region actual detection image and writing them to the byte positions of the corresponding pixel points in the matching main body region actual detection image.
Correspondingly, the invention also provides a picture synthesis device, comprising:
an image output module, for driving the OLED television to play the plurality of detection images in sequence;
a detection camera module, for traversing the OLED display screen to acquire the plurality of actual detection images;
a picture buffer, for caching the plurality of actual detection images;
and a picture synthesizer, for synthesizing the plurality of actual detection images to generate the target detection image.
The picture synthesis method based on the detection device provided by the invention combines the characteristics of the images captured by the camera module with the principles of visual recognition. Using the sub-region and edge partitioning rule, pictures can be synthesized efficiently, on which basis dead-pixel detection of the OLED television screen is realized; the method therefore has good practicability.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
FIG. 1 is a flow chart of a picture synthesis method based on a detection device according to an embodiment of the present invention;
fig. 2 shows a configuration diagram of a picture composition apparatus according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Fig. 1 shows a flowchart of the picture synthesis method based on a detection device according to an embodiment of the present invention. The method comprises the following steps:
s101: constructing a plurality of detection images based on the size of a shooting area of a detection camera module;
in order to enable the multiple images shot by the camera module to be spliced, a certain overlapping area needs to be ensured between the adjacent images, and in order to distinguish the overlapping area, a specific detection image is needed, so that the multiple images shot by the camera module can be conveniently spliced.
Specifically, since the OLED television screen needs to detect R, G, BOLED of each pixel point thereon, in order to increase the speed of computer data processing, the embodiment of the present invention includes six detection images, which are three main area detection images and three edge area detection images, respectively;
the main body area detection graph is mainly used for detecting pixel points in a large area on the OLED television screen, and specifically, the OLED television screen is divided into a plurality of sub-areas based on the size of a shooting area of the detection camera module; and detecting that the outline of the shooting area of the camera module is positioned at the periphery of the sub-area, namely detecting that the area of the shooting area of the camera module is larger than the sub-area. Further, the area of the shooting area of the detection camera module is smaller than the total area of the two adjacent sub-areas.
And making an edge on the outer circle of each sub-area according to the required edge width, wherein the edge width is the number a of pixel points in the edge width direction and is at least 1. Since each sub-region needs to be edged, i.e. in practice, the edge width b between two sub-regions is twice the edge width a of a single sub-region, for the convenience of handling, in general, if a certain side of a sub-region is in contact with the bezel of the OLED television, the edge width of this side should be twice the width of the rest of this sub-region, i.e. set to b. From the whole OLED screen, the main body area detection graph divides the OLED screen into a plurality of sub-areas, the edge width between adjacent sub-areas is b, the edge width of the sub-area in contact with the OLED television frame on one side of the OLED television frame is b, and b is 2 a.
In the three main body region detection images, the sub-regions display a red, a green and a blue background picture respectively. Under the common grayscale conversion formula Y = 0.299R + 0.587G + 0.114B, the full-intensity green background has the largest gray value, about 150, and the full-intensity blue background the smallest, about 29; since even the smallest differs markedly from the black gray value of 0, the edge pixel points can simply be kept non-emitting, forming edges that are easy to distinguish.
Besides the main body regions, the edge pixel points must also be detected, so three edge region detection images are constructed as well. Symmetrically, the sub-regions of the three edge region detection images do not emit light, while their edges use red, green and blue background pictures respectively.
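The grayscale arithmetic above can be checked with a short sketch; the full-intensity RGB triples for the three background pictures are an assumption for illustration, not stated in the patent:

```python
# Sketch: gray values of the three background pictures under the luma
# formula Y = 0.299R + 0.587G + 0.114B quoted above. The (255, 0, 0)
# style full-intensity triples are assumed.

def gray(r: int, g: int, b: int) -> int:
    """Gray value of an RGB triple, rounded to the nearest integer."""
    return round(0.299 * r + 0.587 * g + 0.114 * b)

backgrounds = {"red": (255, 0, 0), "green": (0, 255, 0), "blue": (0, 0, 255)}
gray_values = {name: gray(*rgb) for name, rgb in backgrounds.items()}
print(gray_values)  # green is brightest (150), blue darkest (29), both far from black (0)
```

Even the darkest background stays well above the gray threshold of 25 used later, which is why non-emitting edges remain separable after binarization.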
S102: the OLED television is driven to play the plurality of detection images in sequence based on the image output module;
and after the detection images are generated, the screen of the OLED television is driven to sequentially display the plurality of detection images based on the image output module, so that the camera detection module can conveniently read the image information of the pixel points.
S103: traversing the OLED display screen based on a camera detection module, and sequentially acquiring a plurality of actual detection images;
as can be seen from step S101, the embodiment of the present invention has six detection images, and therefore, the camera detection module needs to traverse the OLED display screen at least six times to obtain the actual detection image corresponding to each detection image.
It should be noted that, in order to ensure the efficiency of traversal by the camera inspection module, the traversal trajectory of the camera inspection module is usually planned along two orthogonal axes of the screen.
Each traversal for a detection image produces a group of actual detection sub-images; once synthesized, the sub-images in the group form the actual detection image corresponding to that detection image.
Because the single detection area of the camera detection module is larger than a sub-region, every actual detection sub-image acquired by the module contains both sub-region pixels and edge pixels. When the shutter speed of the module is matched to its traversal speed, two adjacent actual detection sub-images always share an overlapping area, and processing this overlap allows the two sub-images to be synthesized.
Since the edges lie outside the sub-regions in this embodiment, they can be exploited to accelerate picture synthesis; to this end, the actual detection sub-images must be processed first.
The following takes one main body region detection image and one edge region detection image as examples. The composite image corresponding to a main body region detection image is a main body region actual detection image, composed of a group of main body actual detection sub-images; the composite image corresponding to an edge region detection image is an edge region actual detection image, composed of a group of edge actual detection sub-images.
For the main body region actual detection image, processing a main body actual detection sub-image proceeds as follows:
first, grayscale processing is applied; the sub-region pixels take values according to the grayscale formula Y = 0.299R + 0.587G + 0.114B for the respective background color, while the edges do not emit light and have gray value 0;
then, because the gray values of sub-region and edge differ greatly, the two can be separated by aggressive black-and-white processing: the gray threshold is set to 25, pixel points with gray value above 25 are set to 255, and pixel points with gray value below 25 are set to 0. In a concrete implementation the values 0 and 255 are replaced by 0 and 1, which processes faster.
Then, since picture synthesis does not itself involve detecting the pixel points, possible dead pixels are removed by filtering the picture so that they cannot disturb the synthesis. A two-dimensional zero-mean discrete Gaussian function is commonly used as the smoothing filter:
G(x, y) = A · exp( -( (x - u_x)² / (2σ_x²) + (y - u_y)² / (2σ_y²) ) )
where A is a normalization coefficient, u_x and u_y are the means of the Gaussian in the x and y directions, and σ characterizes the smoothness of the Gaussian curve.
Although the filtered main body actual detection sub-image is blurred, its edge features become more prominent, and any dead pixels in it are filtered out as noise, which is exactly the result this processing step requires.
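A minimal sketch of this Gaussian smoothing, assuming a zero-mean isotropic kernel (u_x = u_y = 0, σ_x = σ_y = σ) with an illustrative size; the naive loop stands in for whatever optimized convolution a real implementation would use:

```python
import numpy as np

def gaussian_kernel(size: int = 5, sigma: float = 1.0) -> np.ndarray:
    """Discrete 2-D Gaussian; A is chosen so the weights sum to 1."""
    ax = np.arange(size) - size // 2            # centred coordinates (zero mean)
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()

def smooth(img: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Naive same-size convolution with edge padding."""
    pad = kernel.shape[0] // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    out = np.zeros(img.shape)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = (padded[i:i + kernel.shape[0], j:j + kernel.shape[1]] * kernel).sum()
    return out

k = gaussian_kernel()
img = np.zeros((9, 9)); img[4, 4] = 1.0         # an isolated 'dead pixel' as impulse noise
print(smooth(img, k)[4, 4])                     # the spike is spread over its neighbourhood
```

After smoothing, the isolated spike carries only the kernel's centre weight, which is how a dead pixel is suppressed as noise.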
then, detecting the television edge of the OLED by using a Canny operator edge detection method, wherein firstly, a convolution array is applied, and the gradient value of a pixel point in the x direction and the y direction is expressed as
Figure GDA0002607781620000062
The magnitude and direction of the magnitude of the gradient is expressed as
Figure GDA0002607781620000071
The hysteresis threshold consists of a high threshold and a low threshold, and the part of the gradient value larger than the high threshold is reserved as a pixel edge; and directly deleting the part of pixels with gradient values smaller than the low threshold value.
After edge processing, the pixel point coordinate information of the inner and outer contours of the edge of the main body actual detection sub-image is obtained. Note that black-and-white processing and noise reduction were already applied before Canny edge detection, so many steps of the Canny pipeline can be skipped, which increases picture processing speed.
Specifically, only the edge pixel points and the pixel points inside them are retained in this embodiment; the resulting main body actual detection sub-image thus contains only a single sub-region and the information of its outer edge.
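The gradient step of the Canny detection described above can be sketched as follows; the Sobel kernels are assumed as the convolution kernels (the patent does not name a specific kernel), and the step image is illustrative:

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T  # transpose gives the vertical-gradient kernel

def gradients(img: np.ndarray):
    """Gradient magnitude and direction via naive 3x3 convolution."""
    padded = np.pad(img.astype(float), 1, mode="edge")
    gx = np.zeros(img.shape); gy = np.zeros(img.shape)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            win = padded[i:i + 3, j:j + 3]
            gx[i, j] = (win * SOBEL_X).sum()
            gy[i, j] = (win * SOBEL_Y).sum()
    return np.hypot(gx, gy), np.arctan2(gy, gx)

# A vertical step between edge pixels (0) and sub-region pixels (1):
img = np.zeros((5, 6)); img[:, 3:] = 1.0
mag, ang = gradients(img)
print(mag[2])  # the magnitude peaks on the two columns flanking the step
```

Hysteresis thresholding would then keep only pixel points whose magnitude exceeds the high threshold, which here picks out exactly the two columns at the step.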
After two adjacent main body actual detection sub-images have been processed in this way, they can be synthesized by matching their shared edge. Applicable matching methods include the mean absolute differences algorithm (MAD), the sum of absolute differences (SAD), the sum of squared differences (SSD), the mean squared differences (MSD), normalized cross-correlation (NCC), the sequential similarity detection algorithm (SSDA) and the sum of absolute transformed differences (SATD, based on the Hadamard transform); these are not described further here.
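Among the matching methods listed, SAD is the simplest to sketch; the toy image, the patch location and the brute-force search are assumptions for illustration:

```python
import numpy as np

def sad_match(image: np.ndarray, template: np.ndarray) -> tuple:
    """Return (row, col) of the window with the smallest sum of absolute differences."""
    h, w = template.shape
    best, best_pos = float("inf"), (0, 0)
    for i in range(image.shape[0] - h + 1):
        for j in range(image.shape[1] - w + 1):
            score = np.abs(image[i:i + h, j:j + w] - template).sum()
            if score < best:
                best, best_pos = score, (i, j)
    return best_pos

image = np.arange(100, dtype=float).reshape(10, 10)  # toy image with all-distinct values
template = image[3:6, 4:8]                           # the 'shared edge' patch to relocate
print(sad_match(image, template))                    # → (3, 4), a zero-SAD match
```

Relocating the shared edge strip of one sub-image inside its neighbour in this way fixes the offset at which the two sub-images are joined.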
Depending on the relation between the traversal speed and the shutter speed of the detection camera module, the composition of the main body actual detection sub-images is handled in several different ways.
When the traversal speed of the detection camera module is slow, three or more consecutive main body actual detection sub-images may cover the same shooting area; in that case, to increase synthesis speed, only the first and last of the consecutive sub-images are retained.
In another case, two adjacent main body actual detection sub-images share no common edge; the composition position of the adjacent sub-images must then be determined from pixel counts.
From the first acquired sub-image, the length (or width) of a sub-region, i.e. its number of pixel points in that direction, is known; when two adjacent sub-images share no common edge, they are synthesized by setting the distance between the edges visible in the two sub-images to that sub-region length (or width).
Proceeding in this way, the main body actual detection sub-images of a group are combined in order to form the main body region actual detection image. The pixel data of the synthesized image are the RGB data of the pixel points in each main body actual detection sub-image, the edge pixel points having RGB data (0, 0, 0).
For the edge region actual detection image, the synthesis of the edge actual detection sub-images parallels that of the main body actual detection sub-images; the specific steps are as follows:
first, grayscale processing is applied to each edge actual detection sub-image; the edge pixels take values according to the grayscale formula Y = 0.299R + 0.587G + 0.114B for the respective background color, while the sub-regions have gray value 0;
then, because the gray values of sub-region and edge differ greatly, the two can be separated by aggressive black-and-white processing: the gray threshold is set to 25, pixel points above 25 are set to 255, and pixel points below 25 are set to 0; in a concrete implementation the values 0 and 255 are replaced by 0 and 1, which processes faster.
Then, since picture synthesis does not itself involve detecting the pixel points, possible dead pixels are again removed by filtering the picture. A two-dimensional zero-mean discrete Gaussian function serves as the smoothing filter:
G(x, y) = A · exp( -( (x - u_x)² / (2σ_x²) + (y - u_y)² / (2σ_y²) ) )
where A is a normalization coefficient, u_x and u_y are the means in the x and y directions, and σ characterizes the smoothness of the Gaussian curve.
Although the filtered edge actual detection sub-image is blurred, its edge features become more prominent, and any dead pixels in it are filtered out as noise, as this processing step requires.
then, detecting the edge of the sub-area by using a Canny operator edge detection method, and firstly, applying a convolution array to express gradient values of pixel points in the x and y directions as
Figure GDA0002607781620000091
The magnitude and direction of the magnitude of the gradient is expressed as
Figure GDA0002607781620000092
The hysteresis threshold consists of a high threshold and a low threshold, and the part of the gradient value larger than the high threshold is reserved as a pixel edge; and directly deleting the part of pixels with gradient values smaller than the low threshold value.
After edge processing, the pixel point coordinate information of the inner and outer contours of the edge of the edge actual detection sub-image is obtained. As before, black-and-white processing was already applied before Canny edge detection, so many Canny steps can be skipped, which increases picture processing speed.
Because the edge actual detection sub-images serve to obtain the color information of the edge pixel points for later image synthesis, only the edge coordinate information needs to be retained.
After two adjacent edge actual detection sub-images have been processed in this way, they can be synthesized by matching their common edge.
Synthesizing two adjacent edge actual detection sub-images also involves color synthesis. In general, the same pixel point shows the same color under the same background picture, so to avoid program errors the color of an edge pixel point can be recorded by copying: for a pixel point appearing in both sub-images, the color data from either of the two may be used.
Depending on the relation between the traversal speed and the shutter speed of the detection camera module, the composition of the edge actual detection sub-images is likewise handled in several ways.
When the traversal speed is slow, three or more consecutive edge actual detection sub-images may cover the same shooting area; to increase synthesis speed, only the first and last of them are retained.
In other cases, two adjacent edge actual detection sub-images share no common edge; their composition position is then determined from pixel counts: from the first acquired sub-image, the length (or width) of a sub-region, i.e. its number of pixel points in that direction, is known, and the two sub-images are synthesized by setting the distance between their visible edges to that length (or width).
Proceeding by analogy, the edge actual detection sub-images of a group are combined in order to form the edge region actual detection image. The pixel data of the synthesized image are the RGB data of the pixel points in each edge actual detection sub-image, the sub-region pixel points having RGB data (0, 0, 0).
The above processing yields three main body region actual detection images and three edge region actual detection images: the sub-regions of the three main body region images are red, green and blue respectively, and the edges of the three edge region images are red, green and blue respectively. Any of the six actual detection images may contain dead pixel points.
S104: caching the plurality of actual detection images in a picture buffer according to a preset rule;
Specifically, the actual detection images are stored in the picture buffer uncompressed, in the same pixel order, with each pixel point occupying three bytes of memory, and every actual detection image containing the same number of pixel points. The pixel points of the three main body region and three edge region actual detection images are as follows:
first subject region actual detection image: the color of the pixel point of the subregion is a measured value under a red background picture, and the color of the edge pixel point is a measured value under a non-color-development state;
second subject region actual detection image: the color of the pixel point of the subregion is a measured value under a green background picture, and the color of the edge pixel point is a measured value under a non-color-development state;
third subject region actual detection image: the color of the pixel point of the subregion is a measured value under a blue background picture, and the color of the edge pixel point is a measured value under a non-color-development state;
first edge area actual detection image: the color of the pixel point of the subregion is a measured value under the non-color development state, and the color of the edge pixel point is a measured value under a red background picture;
second edge area actual detection image: the color of the pixel point of the subregion is a measured value in the non-color-development state, and the color of the edge pixel point is a measured value under a green background picture;
third edge area actual detection image: the color of the pixel point of the subregion is a measured value in the non-color-development state, and the color of the edge pixel point is a measured value under a blue background picture.
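The buffer layout described in S104 (uncompressed storage, three bytes per pixel, equal pixel counts per image) implies simple address arithmetic, sketched below. The function name and the row-major pixel order are illustrative assumptions, not taken from the patent.

```python
def pixel_byte_offset(image_index, x, y, width, height):
    """Byte offset, inside the picture buffer, of pixel (x, y) of the
    image_index-th actual detection image.

    All images are stored uncompressed with 3 bytes (RGB) per pixel and
    the same pixel count, so the difference between the first bytes of
    adjacent images is the constant width * height * 3.
    Assumes pixels are laid out row by row (row-major order).
    """
    image_stride = width * height * 3          # constant first-byte difference
    return image_index * image_stride + (y * width + x) * 3
```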
Further, in order to facilitate byte operations, the actual detection images are arranged in the picture buffer, in the direction of increasing memory address, in the order: first main body area actual detection image, first edge area actual detection image, second main body area actual detection image, second edge area actual detection image, third main body area actual detection image, third edge area actual detection image; the memory address difference between the first bytes of adjacent actual detection images is the same.
S105: and synthesizing the plurality of actual detection images to form a target detection image for identifying the dead pixel of the OLED television screen.
To reduce the number of image-processing operations during detection, the six actual detection images may be synthesized in pairs: the first main body area actual detection image with the first edge area actual detection image, the second main body area actual detection image with the second edge area actual detection image, and the third main body area actual detection image with the third edge area actual detection image.
Since the edge area actual detection image contains less color data than the main body area actual detection image, the edge area actual detection image can be superimposed onto the corresponding main body area actual detection image.
Specifically, according to the memory address difference between the first bytes of the actual detection images, the byte information of the edge pixel points is taken out of the edge area actual detection image and written into the byte positions of the corresponding pixel points in the corresponding main body area actual detection image.
Finally, three target detection images for identifying dead pixels of the OLED television screen are formed.
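The superposition described above can be sketched as follows. It relies on the property stated earlier for the edge area actual detection image, namely that sub-area pixels are exactly RGB (0, 0, 0), so any non-black pixel is treated as edge data; the function name is an illustrative assumption.

```python
import numpy as np

def overlay_edge_image(body_img, edge_img):
    """Write the edge-pixel bytes of an edge area actual detection image
    into the corresponding positions of the main body area actual
    detection image, yielding one target detection image.

    Assumption: in the edge image the sub-area pixels are exactly
    (0, 0, 0), so any non-black pixel is an edge pixel to be copied.
    """
    target = body_img.copy()                 # leave the input images intact
    mask = edge_img.any(axis=-1)             # True where the edge image carries data
    target[mask] = edge_img[mask]
    return target
```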
Correspondingly, an embodiment of the invention also provides a picture synthesis device, comprising:
an image output module, used for driving the OLED television to play a plurality of detection images in sequence;
a detection camera module, used for traversing the OLED display screen to acquire a plurality of actual detection images;
a picture buffer, used for caching the plurality of actual detection images; and
a picture synthesizer, used for synthesizing the plurality of actual detection images to generate a target detection image.
The picture synthesis method based on a detection device provided by the embodiment of the invention combines the characteristics of the images captured by the camera module with the principle of visual identification and uses the partition of an image into sub-areas and edges; it can realize picture synthesis efficiently, enables dead-pixel detection of the OLED television screen on this basis, and has good practicability.
The above embodiments describe in detail a picture synthesis method based on a detection device; specific examples are used herein to explain the principle and implementation of the invention, and the description of the embodiments is only intended to help understand the method of the invention and its core idea. Meanwhile, a person skilled in the art may, according to the idea of the invention, vary the specific embodiments and the application scope; in summary, the content of this specification should not be construed as limiting the invention.

Claims (4)

1. A picture synthesis method based on a detection device is characterized by comprising the following steps:
constructing a plurality of detection images based on the size of a shooting area of a detection camera module;
the OLED television is driven to play the plurality of detection images in sequence based on the image output module;
traversing the OLED display screen based on a camera detection module, and sequentially acquiring a plurality of actual detection images;
caching the actual detection images in a picture buffer according to a preset rule;
synthesizing the plurality of actual detection images to form a target detection image for identifying the dead pixel of the OLED television screen;
the plurality of detection images comprise three main body region detection images and three edge region detection images;
the plurality of actual detection images comprise three main body region actual images and three edge region actual images which respectively correspond to the three main body region detection images and the three edge region detection images;
synthesizing the main body area actual image based on a group of main body actual sub-images obtained by the camera detection module traversing the OLED display screen at the same time;
synthesizing the edge area actual image based on a group of edge actual detection sub-images obtained by the camera detection module traversing the OLED display screen at the same time;
the synthesizing of the main body area actual image based on a group of main body actual sub-images obtained by the camera detection module traversing the OLED display screen at the same time comprises the following steps:
performing grayscale processing on the main body actual sub-images;
performing black-and-white (binarization) processing on the grayscale-processed main body actual sub-images;
removing dead pixels in the black-white processed actual sub-image of the main body based on an image filtering mode taking a two-dimensional zero-mean discrete Gaussian function as a smoothing filter;
performing edge detection on the actual sub-image of the main body after image filtering based on a Canny operator edge detection method to obtain edge contour information;
synthesizing two adjacent main body actual sub-images based on the contour information;
and synthesizing the subject region actual image based on a group of subject actual sub-images.
2. The picture synthesis method based on the detection device as claimed in claim 1, wherein the three main body area actual images and the three edge area actual images are respectively:
First subject region actual detection image: the color of the pixel point of the subregion is a measured value under a red background picture, and the color of the edge pixel point is a measured value under a non-color-development state;
second subject region actual detection image: the color of the pixel point of the subregion is a measured value under a green background picture, and the color of the edge pixel point is a measured value under a non-color-development state;
third subject region actual detection image: the color of the pixel point of the subregion is a measured value under a blue background picture, and the color of the edge pixel point is a measured value under a non-color-development state;
first edge area actual detection image: the color of the pixel point of the subregion is a measured value under the non-color development state, and the color of the edge pixel point is a measured value under a red background picture;
second edge area actual detection image: the color of the pixel point of the subregion is a measured value under the non-color development state, and the color of the pixel point of the edge is a measured value under a green background picture;
the third edge area actual detection image: the color of the pixel points of the sub-areas is a measured value in a non-color-development state, and the color of the edge pixel points is a measured value under a blue background picture.
3. The method as claimed in claim 2, wherein the buffering the plurality of actual detection images in the image buffer according to the predetermined rule comprises the following steps:
the first main body area actual detection image, the first edge area actual detection image, the second main body area actual detection image, the second edge area actual detection image, the third main body area actual detection image and the third edge area actual detection image are sequentially stored in the picture buffer;
and the memory address difference values between the first bytes of the two adjacent actual detection images are the same.
4. The picture synthesis method based on the detection device as claimed in claim 3, wherein synthesizing the plurality of actual detection images to form the target detection image for identifying the dead pixels of the OLED television screen comprises the following steps:
based on the memory address difference between the first bytes of each actual detection image, the byte information of the edge pixel points is taken out from the actual detection image of the edge area and written into the byte positions of the pixel points corresponding to the actual detection image of the corresponding main area.
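The smoothing filter named in claim 1, a two-dimensional zero-mean discrete Gaussian, can be sketched in plain numpy as follows. The kernel size, sigma, zero padding and the direct convolution loop are illustrative choices, not taken from the patent.

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    """Two-dimensional zero-mean discrete Gaussian, normalized to sum
    to 1, usable as the smoothing filter that suppresses dead-pixel
    noise before Canny edge detection."""
    ax = np.arange(size) - size // 2          # coordinates centered on zero
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    return k / k.sum()

def smooth(gray, kernel):
    """Direct 2-D filtering with zero padding; output has the same
    shape as the input grayscale image."""
    pad = kernel.shape[0] // 2
    padded = np.pad(gray, pad)
    out = np.empty_like(gray, dtype=float)
    h, w = gray.shape
    for y in range(h):
        for x in range(w):
            window = padded[y:y + kernel.shape[0], x:x + kernel.shape[1]]
            out[y, x] = (window * kernel).sum()
    return out
```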
CN201811326692.3A 2018-11-08 2018-11-08 Picture synthesis method based on detection device Active CN109544519B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811326692.3A CN109544519B (en) 2018-11-08 2018-11-08 Picture synthesis method based on detection device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811326692.3A CN109544519B (en) 2018-11-08 2018-11-08 Picture synthesis method based on detection device

Publications (2)

Publication Number Publication Date
CN109544519A CN109544519A (en) 2019-03-29
CN109544519B true CN109544519B (en) 2020-09-25

Family

ID=65845308

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811326692.3A Active CN109544519B (en) 2018-11-08 2018-11-08 Picture synthesis method based on detection device

Country Status (1)

Country Link
CN (1) CN109544519B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104010190A (en) * 2014-05-29 2014-08-27 东莞市信太通讯设备有限公司 Method for automatically detecting dead pixels of mobile phone screen before assembling
CN105163114A (en) * 2015-08-21 2015-12-16 深圳创维-Rgb电子有限公司 Method and system for detecting screen dead pixel based on camera

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130342672A1 (en) * 2012-06-25 2013-12-26 Amazon Technologies, Inc. Using gaze determination with device input
CN103234476B (en) * 2013-04-01 2014-04-02 廖怀宝 Method for identifying object two-dimensional outlines
CN104200775A (en) * 2014-09-22 2014-12-10 西安电子科技大学 LED defective pixel treatment method
CN105488756B (en) * 2015-11-26 2019-03-29 努比亚技术有限公司 Picture synthetic method and device
CN107272234B (en) * 2017-07-31 2020-12-18 台州市吉吉知识产权运营有限公司 Detection method and system based on liquid crystal display test picture
CN107800980A (en) * 2017-10-19 2018-03-13 浙江大华技术股份有限公司 A kind of dead pixel points of images bearing calibration and device


Also Published As

Publication number Publication date
CN109544519A (en) 2019-03-29

Similar Documents

Publication Publication Date Title
CN107409166B (en) Automatic generation of panning shots
US10540806B2 (en) Systems and methods for depth-assisted perspective distortion correction
KR102574141B1 (en) Image display method and device
EP2005387B1 (en) Constructing image panorama using frame selection
WO2019233264A1 (en) Image processing method, computer readable storage medium, and electronic device
WO2016101883A1 (en) Method for face beautification in real-time video and electronic equipment
CN110909750B (en) Image difference detection method and device, storage medium and terminal
AU2017246715A1 (en) Efficient canvas view generation from intermediate views
US8189944B1 (en) Fast edge-preserving smoothing of images
US20060152603A1 (en) White balance correction in digital camera images
US20090161982A1 (en) Restoring images
CN109190617B (en) Image rectangle detection method and device and storage medium
CN107945105A (en) Background blurring processing method, device and equipment
CN103841298A (en) Video image stabilization method based on color constant and geometry invariant features
WO2016005242A1 (en) Method and apparatus for up-scaling an image
US9330447B2 (en) Image evaluation device, image selection device, image evaluation method, recording medium, and program
TW201337835A (en) Method and apparatus for constructing image blur pyramid, and image feature extracting circuit
CN109544519B (en) Picture synthesis method based on detection device
US20210281742A1 (en) Document detections from video images
CN113160082B (en) Vignetting correction method, system, device and medium based on reference image
CN111242087B (en) Object identification method and device
EP3605450B1 (en) Image processing apparatus, image pickup apparatus, control method of image processing apparatus, and computer-program
CN114663299A (en) Training method and device suitable for image defogging model of underground coal mine
EP3998575B1 (en) Image correction device
CN112241640B (en) Graphic code determining method and device and industrial camera

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant