CN111627009B - Screen detection method and device and head-mounted display equipment


Info

Publication number
CN111627009B
CN111627009B CN202010463900.5A
Authority
CN
China
Prior art keywords
detection
line segment
value
edge
area
Prior art date
Legal status
Active
Application number
CN202010463900.5A
Other languages
Chinese (zh)
Other versions
CN111627009A (en)
Inventor
宋秀峰
张一凡
田继锋
张文超
Current Assignee
Goertek Optical Technology Co Ltd
Original Assignee
Goertek Optical Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Goertek Optical Technology Co Ltd
Priority to CN202010463900.5A
Publication of CN111627009A
Application granted
Publication of CN111627009B
Legal status: Active
Anticipated expiration


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0004 Industrial image inspection
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N 21/00 Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N 21/84 Systems specially adapted for particular applications
    • G01N 21/88 Investigating the presence of flaws or contamination
    • G01N 21/8851 Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/13 Edge detection
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N 21/00 Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N 21/84 Systems specially adapted for particular applications
    • G01N 21/88 Investigating the presence of flaws or contamination
    • G01N 21/8851 Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges
    • G01N 2021/8887 Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges based on image processing techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10004 Still image; Photographic image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30108 Industrial image inspection
    • G06T 2207/30121 CRT, LCD or plasma display

Abstract

The application discloses a screen detection method, a screen detection device and a head-mounted display device. The screen detection method comprises the following steps: acquiring an original image of a screen to be detected, and removing interference information in the original image to obtain a detection image; extracting an edge region of the detection image according to the screen contour; detecting the lines and the positions of the line center points in the edge region by using a line extraction algorithm, taking the screen contour as a boundary, and, when the position of a line center point is within the screen contour, selecting the area block where the line center point is located as a suspected defect area; selecting key pixel points from the suspected defect area, binarizing the key pixel points based on a binarization threshold value, and obtaining a defect index detection value according to the binarization result of the key pixel points; and comparing the defect index detection value with the corresponding defect index to obtain a detection result. The embodiment of the application solves the technical problem of low edge defect detection accuracy caused by adhesion of the edge defect to the boundary background, and meets practical application requirements.

Description

Screen detection method and device and head-mounted display equipment
Technical Field
The application relates to the technical field of head-mounted display equipment, in particular to a screen detection method and device and head-mounted display equipment.
Background
In recent years, head-mounted display devices such as VR (Virtual Reality) products have seen explosive growth, and according to market forecasts, next-generation VR headsets will employ OLED (Organic Light-Emitting Diode) screens. To ensure that VR products can be widely adopted and deliver a good user experience, the VR screen must achieve good resolution and be free of dead pixels and dirt, so product detection is an indispensable procedure.
At present, in the detection of the OLED display screen of a VR head-mounted display device, edge defects such as edge black lines and edge scratches are difficult to distinguish because the edge defects adhere to the boundary background, and the detection accuracy is therefore low.
Disclosure of Invention
The present application has been made in view of the above problems, and provides a screen detection method, apparatus, and head-mounted display device that overcome or at least partially solve the above problems.
According to an aspect of the present application, there is provided a screen detection method including:
acquiring an original image of a screen to be detected, and removing interference information in the original image to obtain a detection image containing the outline of the screen;
extracting one or more edge areas of the detection image according to the screen contour;
Detecting and determining the positions of lines and line center points in the edge area by using a line extraction algorithm, taking a screen contour as a boundary, and selecting an area block containing a plurality of pixel points where the line center points are located as a suspected defect area when the positions of the line center points are within the screen contour;
selecting a key pixel point from the suspected defect area, binarizing the key pixel point based on a binarization threshold value, and obtaining a detection value of a defect index according to a binarization result of the key pixel point;
and comparing the detection value of the defect index with the corresponding defect index to obtain a detection result.
According to another aspect of the present application, there is provided a screen detection apparatus including:
the image acquisition unit is used for acquiring an original image of a screen to be detected, removing interference information in the original image and obtaining a detection image containing the outline of the screen;
a region determining unit for extracting one or more edge regions of the detection image according to the screen contour; determining the positions of lines and line center points in an edge area by using a line extraction algorithm, taking a screen contour as a boundary, and selecting an area block containing a plurality of pixel points where the line center points are located as a suspected defect area when the position of the line center points is within the screen contour;
The binarization unit is used for selecting key pixel points from the suspected defect area, binarizing the key pixel points based on a binarization threshold value, and obtaining a detection value of the defect index according to a binarization result of the key pixel points;
and the judging unit is used for comparing the detection value of the defect index with the corresponding defect index to obtain a detection result.
According to still another aspect of the present application, there is provided a head-mounted display apparatus including the screen detecting device as another aspect of the present application.
From the above, according to the technical scheme of the application, the original image of the screen to be detected is obtained, the interference information in the original image is removed, and the detection image containing the screen outline is obtained, and the efficiency and the accuracy of the screen detection scheme of the application are improved due to the removal of the interference information in the image. In addition, the embodiment of the application detects the line and the line center point position in the edge area by using a line extraction algorithm, extracts and screens the defect points based on binarization, calculates the detection value of defect indexes (such as contrast, area size and the like) of edge defects (such as edge black lines and edge scratches), compares the detection value with corresponding defect indexes to obtain a detection result, thus solving the technical problem that the edge defects cannot be accurately detected due to adhesion between the edge defects and the boundary background, realizing accurate and rapid detection of the screen defects, meeting the practical application requirements and improving the market competitiveness of head-mounted display equipment.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the application. Also, like reference numerals are used to designate like parts throughout the figures. In the drawings:
FIG. 1 is a flow chart of a screen detection method according to an embodiment of the present application;
FIG. 2 shows a schematic representation of an original image of one embodiment of the present application;
FIG. 3 is a schematic diagram of the binarized binary image of the original image shown in FIG. 2;
FIG. 4 is a schematic diagram showing the original image of FIG. 2 after drawing a screen outline;
FIG. 5 shows a schematic representation of a detected image of an embodiment of the present application;
FIG. 6 is a schematic diagram showing the result of edge region extraction of the detected image shown in FIG. 5 according to one embodiment of the present application;
FIG. 7a shows a schematic view of an edge region in one embodiment of the application;
FIG. 7b is a schematic view showing an edge defect contained in the edge region shown in FIG. 7 a;
FIG. 7c is an enlarged view of the location of the edge defect shown in FIG. 7 b;
FIG. 8a is a schematic diagram of suspected defect areas extracted according to an embodiment of the present application;
FIG. 8b is a schematic diagram showing the binarization of the suspected defect region shown in FIG. 8 a;
FIG. 9 shows a block diagram of a screen detection apparatus according to one embodiment of the present application;
fig. 10 shows a schematic structural diagram of a head-mounted display device according to an embodiment of the present application.
Detailed Description
Exemplary embodiments of the present application will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present application are shown in the drawings, it should be understood that the present application may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the application to those skilled in the art.
Currently, for assembled VR (Virtual Reality) products, the human eye is generally used to check, through the lenses, whether the VR screen is defective, specifically as follows: the VR screen is controlled to output test charts of different colors, and the human eye checks through the lenses whether defects exist under the different test charts, for example defects appearing as green-screen dark spots, green-screen bright spots, red-screen dark spots, black-screen bright spots, white-screen dark spots, scratches, edge black lines and the like.
There is also a scheme in which detection and distinction are performed by a Halcon machine vision algorithm. However, because the edge defect is located at the edge of the effective area of the image, where distortion is larger and the edge defect adheres to the background, the Halcon vision algorithm has difficulty distinguishing edge defects such as edge black lines and edge scratches; that is, the accuracy of edge defect detection is low.
In this regard, the embodiment of the application proposes to use the screen contour as a boundary line and the line center points detected by a line extraction algorithm as a reference, extract the defect points in a suspected defect area by means of binarization, then calculate defect index detection values such as the area and the contrast of the suspected defect area, and judge whether the defect meets the defect standard according to the comparison result of the defect index detection values and the standard thresholds.
When detecting the line center points, a Halcon vision algorithm can be used as the line extraction algorithm, and the lines_gauss function is called to detect the line center points.
Fig. 1 is a schematic flow chart of a screen detection method according to an embodiment of the present application, referring to fig. 1, the screen detection method according to the embodiment of the present application includes:
step S110, an original image of a screen to be detected is obtained, interference information in the original image is removed, and a detection image containing the screen outline is obtained.
The original image here is, for example, an image acquired by photographing an image output by a screen to be detected. The interference information not only affects the accuracy of detection, but also affects the detection efficiency, so that the embodiment of the application removes the interference information in the original image.
Step S120, extracting one or more edge regions of the detected image according to the screen contour.
Step S130, detecting and determining the positions of the lines and the line center points in the edge area by using a line extraction algorithm, taking the screen contour as a boundary, and selecting an area block containing a plurality of pixel points where the line center points are located as a suspected defect area when the position of the line center points is within the screen contour.
Step S140, selecting a key pixel point from the suspected defect area, binarizing the key pixel point based on a binarization threshold value, and obtaining a detection value of the defect index according to a binarization result of the key pixel point.
Step S150, comparing the detection value of the defect index with the corresponding defect index to obtain a detection result.
As can be seen from fig. 1, in the screen detection method according to the embodiment of the present application, a screen contour is used as a boundary, a line extraction algorithm is used to detect and determine a line of an edge area and a position of a line center point, on the basis, defect points in a suspected defect area are extracted in a binarization manner, further a detection value of a defect index is calculated, and the detection value of the defect index is compared with a corresponding defect index to obtain a detection result of whether the suspected edge defect point meets a defect standard. The embodiment of the application solves the technical problem that edge defects cannot be accurately detected due to adhesion of edge defects and boundary backgrounds, is convenient to accurately calculate the defect index detection values such as contrast, area size and the like of the edge defects through a Halcon vision algorithm, further rapidly and accurately realizes screen detection, and meets the actual application requirements.
In order to better understand the implementation steps of the screen detection method according to the embodiment of the present application, the following description is provided with reference to specific embodiments.
Fig. 2 is a schematic diagram of an original image of an embodiment of the present application, and the screen detection method of the present application first obtains the original image of a screen to be detected, as shown in fig. 2.
Before screen detection is performed, the detected effective area is extracted to exclude interference. That is, the interference information in the original image is removed, resulting in a detection image. The method specifically comprises the following steps: performing binarization processing on the original image and converting the original image into a binary image; searching a screen contour on the binary image, correspondingly drawing the screen contour on the original image according to the searched screen contour, and determining the minimum circumscribed rectangle of the screen contour; and determining an effective area on the original image based on the minimum circumscribed rectangle, and extracting the effective area to obtain a detection image.
It should be understood that the process of removing the interference information in the original image is a process of extracting the effective area in the image. The manner of extracting the effective area includes, for example, the following steps:
Step one, the adaptiveThreshold() function is called in the processor to perform binarization processing on the original image shown in fig. 2, and the resulting binary image is shown in fig. 3.
Step two, on the basis of the binary image shown in fig. 3, the screen contour is found through contour searching, and the screen contour is correspondingly drawn on the original image shown in fig. 2 to obtain fig. 4. The position indicated by the white arrow in fig. 4 is part of the found contour. Note: contour searching can be implemented using the prior art, for example using the OpenCV (open source computer vision library) platform.
Step three, the minimum circumscribed rectangle of the screen contour is determined from the screen contour, and the effective area is cropped to obtain the effective area image, namely the detection image. The width of the detection image is denoted by width and the height by height; the detection image is shown in fig. 5.
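As an illustrative sketch only (the patent text itself gives no source code), the effective-area extraction in steps one to three could be written with OpenCV in Python roughly as follows; the threshold parameters, the use of the largest contour as the screen contour, and the axis-aligned bounding rectangle are assumptions made for the example.

    import cv2
    import numpy as np

    def extract_detection_image(original_bgr):
        """Sketch of steps one to three: binarize, find the screen contour,
        take its bounding rectangle and crop the effective area."""
        gray = cv2.cvtColor(original_bgr, cv2.COLOR_BGR2GRAY)
        # Step one: adaptive thresholding -> binary image (block size / offset are assumed values)
        binary = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                       cv2.THRESH_BINARY, 51, 5)
        # Step two: contour search on the binary image (OpenCV 4.x return convention);
        # the largest contour is assumed to be the screen contour
        contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        screen_contour = max(contours, key=cv2.contourArea)
        # Step three: bounding rectangle of the screen contour, crop the effective area
        x, y, w, h = cv2.boundingRect(screen_contour)
        detection_image = gray[y:y + h, x:x + w]
        # shift the contour into the coordinate system of the cropped detection image
        screen_contour = screen_contour - np.array([x, y])
        return detection_image, screen_contour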
Thus, a detection image including the screen contour is obtained. After obtaining a detection image containing the screen contour, embodiments of the present application utilize the Halcon algorithm for edge area defect detection.
Since the resolution of the image is high, detecting defects on the whole image with Halcon would take a long time. Therefore, in order to improve algorithm efficiency, the embodiment of the present application adopts the method of extracting edge regions, that is, extracting one or more edge regions of the detection image according to the screen contour.
By way of example, the embodiment of the application selects a first line segment, a second line segment, a third line segment and a fourth line segment in the detection image according to the screen contour. The length of the first line segment and the length of the second line segment are equal to the width value of the detection image; the distance from the first line segment to the upper edge of the detection image and the distance from the second line segment to the upper edge of the detection image are determined according to the height value of the detection image, and the distance from the second line segment to the upper edge is larger than the distance from the first line segment to the upper edge. The length of the third line segment and the length of the fourth line segment are equal to the height value of the detection image; the distance from the third line segment to the left edge of the detection image and the distance from the fourth line segment to the left edge of the detection image are determined according to the width value of the detection image, and the distance from the fourth line segment to the left edge is larger than the distance from the third line segment to the left edge.
Specifically, the region above the first line segment and below the upper edge of the detection image is extracted as the first edge region, the region below the second line segment and above the lower edge of the detection image is extracted as the second edge region, the region to the left of the third line segment and to the right of the left edge of the detection image is extracted as the third edge region, and the region to the right of the fourth line segment and to the left of the right edge of the detection image is extracted as the fourth edge region.
Referring to fig. 6, the point in the upper left corner of the detected image in fig. 6, that is, the origin of the detected image is point0, and the coordinates of point0 are (0, 0). According to the embodiment of the application, a first line segment 610, a second line segment 620, a third line segment 630 and a fourth line segment 640 are selected from the detected image according to the screen contour, and four edge areas of the detected image, namely the upper edge area, the lower edge area, the left edge area and the right edge area, are extracted according to the four line segments so as to detect edge defects. The coordinates of the four edge regions are expressed as follows:
The first edge region is area_top(0, 0, width, height/6), where (0, 0) represents the pixel position of the top-left corner vertex of the first edge region, width represents the width of the first edge region (i.e., the width of the first edge region is equal to the width value of the detection image in the horizontal direction), and height/6 represents the height of the first edge region, i.e., the height of the first edge region is equal to the height value of the detection image shown in fig. 5 divided by 6. For example, if the height value of the detection image is 30 pixels, then the height of the first edge region is equal to: 30/6 = 5 pixels.
Referring to fig. 6, the second edge region is area_bottom(0, height*5/6, width, height/6), which is the region vertically opposite the first edge region, where (0, height*5/6) represents the pixel position of the top-left corner vertex of the second edge region, width represents the width of the second edge region (i.e., the width of the second edge region is equal to the width value of the detection image), and height/6 represents the height of the second edge region (i.e., the height of the second edge region is equal to the height value of the detection image shown in fig. 5 divided by 6).
Referring to fig. 6, the third edge region is area_left(0, 0, width/6, height), where (0, 0) represents the pixel position of the top-left corner vertex of the third edge region, width/6 represents the width of the third edge region (i.e., the width of the third edge region is equal to the width value of the detection image divided by 6), and height represents the height of the third edge region (i.e., the height of the third edge region is equal to the height value of the detection image).
The fourth edge region is area_right(width*5/6, 0, width/6, height). Referring to fig. 6, the fourth edge region is the region horizontally opposite the third edge region, where (width*5/6, 0) represents the pixel position of the top-left corner vertex of the fourth edge region, width/6 represents the width of the fourth edge region (i.e., the width of the fourth edge region is equal to the width value of the detection image divided by 6), and height represents the height of the fourth edge region (i.e., the height of the fourth edge region is equal to the height value of the detection image).
In the embodiment of the present application, the edge regions are segmented according to the (rectangular) detection image, that is, according to the circumscribed rectangle of the screen contour, so the width value of the first edge region is equal to the width value of the second edge region, and the height value of the third edge region is equal to the height value of the fourth edge region.
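A minimal sketch of these four edge ROIs, written as (x, y, w, h) tuples in Python and assuming the 1/6 ratio used in the example above:

    def edge_regions(width, height):
        """The four edge ROIs of the detection image as (x, y, w, h) tuples."""
        area_top    = (0, 0, width, height // 6)
        area_bottom = (0, height * 5 // 6, width, height // 6)
        area_left   = (0, 0, width // 6, height)
        area_right  = (width * 5 // 6, 0, width // 6, height)
        return area_top, area_bottom, area_left, area_right

    def crop(image, roi):
        x, y, w, h = roi
        return image[y:y + h, x:x + w]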
After the four edge regions are divided, edge defect detection is performed on the edge regions. Since the defect detection process of the four edge regions is the same, only the second edge region of the aforementioned four edge regions will be described here as an example for brevity.
Referring to fig. 7a, for the edge region shown in fig. 7a, the embodiment of the present application first uses a line extraction algorithm to detect and determine the lines in the current edge region and the positions of their center points. The line extraction algorithm is the Halcon algorithm; specifically, the lines_gauss function of the Halcon algorithm is used to detect the lines in the image and their widths. With suitable contrast parameters, lines_gauss can return fairly accurate line positions, so the lines in the current edge region can be detected by lines_gauss and the pixel positions of the line center points can be determined.
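The patent provides no code for this step; the following sketch assumes the MVTec HALCON/Python interface, in which HDevelop operators are exposed as snake_case functions and output parameters are returned as values. The parameter values passed to lines_gauss and the choice of the middle contour point as the "line center point" are illustrative assumptions, not values taken from the patent.

    import halcon as ha  # assumption: the MVTec HALCON/Python binding is available

    def detect_line_centers(edge_region_image):
        # lines_gauss(Image : Lines : Sigma, Low, High, LightDark,
        #             ExtractWidth, LineModel, CompleteJunctions)
        lines = ha.lines_gauss(edge_region_image, 1.5, 3, 8, 'dark',
                               'true', 'bar-shaped', 'true')
        centers = []
        for i in range(ha.count_obj(lines)):
            contour = ha.select_obj(lines, i + 1)      # HALCON object indices start at 1
            rows, cols = ha.get_contour_xld(contour)
            mid = len(rows) // 2                       # one interpretation of the line center point
            centers.append((rows[mid], cols[mid]))
        return centers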
As a result of extraction, referring to fig. 7b and 7c, two edge defect lines are illustrated in a circle surrounded by a white dotted line in fig. 7b, and fig. 7c is a schematic view of an enlarged image around the white circle in fig. 7b, and it is clear from fig. 7c that two intersecting lines having partial overlapping exist at the screen boundary of the edge region.
Because the distances between the edge defect lines and the image boundary are small, accurate detection is generally difficult; for example, key indexes such as the area of the defect region and the contrast of the defect points are hard to obtain, so the accuracy of judging the defects is low. In contrast, in the embodiment of the application, the screen contour is taken as the boundary; when the position of a line center point is within the screen contour, the area block containing a plurality of pixel points where the line center point is located is selected as the suspected defect area, and the key pixel points in the suspected defect area are further binarized, so that the detection value of the defect index is calculated.
In specific implementation, the distance between the line center point and the corresponding screen contour can be calculated according to the position of the line center point, and if the distance is greater than a preset distance threshold, the position of the line center point is considered to be within the screen contour. The calculation formula of the distance between the line center point and the corresponding screen contour is as follows:
Distance = pointPolygonTest(vec, point(x, y), true).
Considering that when a defective pixel point of the edge part is too close to the screen contour line, the defect is easily influenced by the screen boundary and cannot be distinguished, when the distance Distance between the line center point and the corresponding screen contour line is greater than the threshold value, the area block containing a plurality of pixel points where the line center point is located is selected as the suspected defect area and binarization processing is performed. For example, the 50×50 area where the line center point is located is selected as the suspected defect area, and an image containing the suspected defect area is shown in fig. 8a.
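A sketch of this check in Python with OpenCV, using the same pointPolygonTest call as above; the distance threshold and the 50×50 block size follow the example in the text, while clamping the block at the image border is an added assumption.

    import cv2

    def suspected_defect_roi(detection_image, screen_contour, center, dist_thresh, block=50):
        """Keep a line center point only if it lies far enough inside the screen
        contour, then cut a block x block area around it as the suspected defect area."""
        cy, cx = center                                  # (row, col) of the line center point
        # signed distance to the screen contour: positive values lie inside the contour
        distance = cv2.pointPolygonTest(screen_contour, (float(cx), float(cy)), True)
        if distance <= dist_thresh:
            return None                                  # too close to (or outside) the boundary
        half = block // 2
        y0 = max(0, int(cy) - half)
        x0 = max(0, int(cx) - half)
        return detection_image[y0:y0 + block, x0:x0 + block], (x0, y0)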
Next, the suspected defect area shown in fig. 8a is binarized.
The binarization process first selects the key pixel points: because pixel points lying within the screen contour in the suspected defect area have a high probability of belonging to a defect, the key pixel points are selected first to improve detection efficiency and accuracy.
The process of selecting the key pixel points is as follows: calculating the distance from each pixel point to the corresponding screen contour for each pixel point in the suspected defect area; and taking the pixel points with the distance larger than the preset distance threshold value as key pixel points.
For example, whether a pixel point in the suspected defect area is within the screen contour is calculated by the following formula:
area_in = pointPolygonTest(vec, point(x, y), false)
wherein area_in = 1 indicates that the pixel point is inside the contour line, and area_in = -1 indicates that it is outside the contour line.
For the pixel points within the screen contour, namely the key pixel points, the embodiment of the application obtains the gray values of the key pixel points and binarizes the key pixel points based on the binarization threshold.
Specifically, the gray value of the key pixel point is compared with the binarization threshold value. For different edge regions, the sizes of the suspected defect areas in the edge regions may be the same or different, and the binarization threshold values corresponding to the suspected defect areas may be the same or different. It should be noted that, considering the influence of the image brightness in each edge region, the size of the suspected defect area in each edge region may be the same (for example, 50×50 area blocks are taken) or may be different, and the corresponding binarization threshold values of the suspected defect areas may likewise be the same or different. That is, the binarization threshold value is not fixed and can be set according to the gray characteristics of the image of the suspected defect area, which improves the adaptive capacity and the detection accuracy of the screen detection algorithm.
According to the embodiment of the application, the gray value of the key pixel point is obtained, the gray value of the key pixel point with the gray value larger than or equal to the binarization threshold value is set as a first value, and the gray value of the key pixel point with the gray value smaller than the binarization threshold value is set as a second value, so that the binarization result of the key pixel point is obtained.
Illustratively, the binarization formula is as follows:
pixel_gray = 255, if pixel_gray >= binarization threshold
pixel_gray = 0, if pixel_gray < binarization threshold
In the binarization formula, pixel_gray represents the gray value of the key pixel point, 255 is the first value, and 0 is the second value. That is, if the gray value of the key pixel point is greater than or equal to the binarization threshold, the gray value is set to 255; if the gray value of the key pixel point is less than the binarization threshold, the gray value is set to the second value, 0.
The result of binarizing the suspected defective region shown in fig. 8a is shown in fig. 8b, and the white portion in fig. 8b is the result of binarizing the defective point (i.e., the key pixel point) in the suspected defective region.
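The key-pixel selection and binarization described above might be sketched as follows in Python; roi_offset is the top-left position (x0, y0) of the suspected defect area on the detection image (as returned by the earlier sketch), and dist_thresh and bin_thresh stand in for the region-specific thresholds, which the text leaves configurable.

    import cv2
    import numpy as np

    def binarize_key_pixels(roi, roi_offset, screen_contour, dist_thresh, bin_thresh):
        """Select key pixels (far enough inside the screen contour) and binarize them."""
        x0, y0 = roi_offset
        result = np.zeros_like(roi)
        for r in range(roi.shape[0]):
            for c in range(roi.shape[1]):
                # key pixel: its distance to the screen contour exceeds the preset threshold
                d = cv2.pointPolygonTest(screen_contour,
                                         (float(x0 + c), float(y0 + r)), True)
                if d <= dist_thresh:
                    continue
                # first value 255 if the gray value reaches the threshold, second value 0 otherwise
                result[r, c] = 255 if roi[r, c] >= bin_thresh else 0
        return result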
After obtaining the binarization result of the key pixel points, the embodiment of the application obtains the detection value of the defect index according to the binarization result of the key pixel points. Specifically, the number of key pixel points whose gray value is the first value is counted, and the counted number is taken as the detection value of the defect area index; or, the pixel points corresponding to the target pixel points are determined according to the positions of the key pixel points on the detection image and the binarization result of the key pixel points, a gray average value is calculated from the gray values of the corresponding pixel points, and the detection value of the contrast index is calculated according to the gray average value and the gray values of the corresponding pixel points, wherein a target pixel point is a key pixel point whose gray value is the first value after binarization processing.
It should be noted that a corresponding pixel point refers to the pixel point on the detection image that corresponds to a target pixel point. In the embodiment of the present application, the suspected defect area is extracted and the image containing the suspected defect area (as shown in fig. 8a) is binarized, so the binarization result image (as shown in fig. 8b) indicates the gray values of the pixel points in the suspected defect area. In order to find the edge defect points and further calculate the edge defect index detection values, the target pixel points shown in fig. 8b (the key pixel points whose gray value is the first value) need to be mapped back onto the detection image; that is, the pixel points corresponding to the target pixel points are located on the detection image.
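A sketch of both detection values in Python follows; the area index is simply the count of target pixels, while the exact contrast formula is not spelled out in the text, so the relative deviation from the gray mean used below is only a placeholder assumption.

    import numpy as np

    def defect_index_values(binary_roi, detection_image, roi_offset):
        """Area index: number of target pixels (binarized value 255).
        Contrast index: computed from the corresponding pixels on the detection image."""
        x0, y0 = roi_offset
        ys, xs = np.nonzero(binary_roi == 255)          # target pixel positions in the ROI
        area_value = len(xs)                            # detection value of the area index
        if area_value == 0:
            return 0, 0.0
        corresponding = detection_image[ys + y0, xs + x0].astype(np.float64)
        gray_mean = corresponding.mean()
        # placeholder contrast measure: mean relative deviation from the gray mean
        contrast_value = float(np.abs(corresponding - gray_mean).mean() / (gray_mean + 1e-6))
        return area_value, contrast_value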
In practical applications, the embodiment of the application can determine whether the suspected defect point meets the defect standard only according to the area index of the defect area, or determine whether the suspected defect point meets the defect standard only according to the contrast index.
Further, comparing the detected value of the defect indicator with the corresponding defect indicator to obtain a detection result, for example, in the application embodiment, comparing the detected value of the defect area indicator with the defect area indicator, and if the detected value of the defect area indicator meets the defect area indicator, determining that the suspected defect area is an edge defect area. Alternatively, in the embodiment of the application, the detected value of the contrast index is compared with the contrast index, and if the detected value of the contrast index satisfies the contrast index, it is determined that the suspected defective region is an edge defective region. Or in the embodiment of the application, the detection value of the area index of the defect area is compared with the area index of the defect area, the detection value of the contrast index is compared with the contrast index, and if the detection value of the area index of the defect area meets the area index of the defect area and the detection value of the contrast index meets the contrast index, the suspected defect area is determined to be the edge defect area.
For example, the number of key pixel points whose gray value is the first value is counted; suppose the counted number is 8, that is, the detection value of the defect area index is 8 pixels. The detection value (8 pixels) is compared with the defect area index; if the defect area index specifies that a detection value greater than 7 pixels is to be judged an edge defect, the comparison shows that the current detection value meets the defect area index, and the currently detected suspected defect area is determined to be an edge defect area.
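The final comparison could be sketched as below; the numeric thresholds are illustrative only (the 7-pixel area index follows the example above, and the contrast index value is an assumption), since the patent leaves the concrete indexes configurable.

    def is_edge_defect(area_value, contrast_value, area_index=7, contrast_index=0.1):
        """Judge the suspected defect area against both defect indexes."""
        return area_value > area_index and contrast_value > contrast_index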
By calculating the detection value of the defect index and comparing it with the corresponding defect index to obtain a detection result of whether a suspected edge defect point meets the defect standard, the technical problem that edge defects cannot be accurately detected because the edge defects adhere to the boundary background is solved; it also becomes convenient to accurately calculate defect index detection values of edge defects, such as contrast and area size, through the Halcon vision algorithm, so that rapid and accurate screen detection is realized, the accuracy of screen detection is improved, and practical application requirements are met.
Fig. 9 shows a block diagram of a screen detection apparatus according to an embodiment of the present application, referring to fig. 9, a screen detection apparatus 900 includes:
An image obtaining unit 910, configured to obtain an original image of a screen to be detected, remove interference information in the original image, and obtain a detection image including a screen contour;
a region determining unit 920 for extracting one or more edge regions of the detection image according to the screen contour; determining the positions of lines and line center points in an edge area by using a line extraction algorithm, taking a screen contour as a boundary, and selecting an area block containing a plurality of pixel points where the line center points are located as a suspected defect area when the position of the line center points is within the screen contour;
a binarization unit 930, configured to select a key pixel point from the suspected defect area, binarize the key pixel point based on a binarization threshold, and obtain a detection value of the defect index according to a binarization result of the key pixel point;
and the judging unit 940 is configured to compare the detected value of the defect indicator with the corresponding defect indicator to obtain a detection result.
In one embodiment of the present application, the area determining unit 920 is specifically configured to select a first line segment, a second line segment, a third line segment and a fourth line segment in the detection image according to the screen contour; the length of the first line segment and the length of the second line segment are equal to the width value of the detection image, the distance from the first line segment to the upper edge of the detection image and the distance from the second line segment to the upper edge of the detection image are determined according to the height value of the detection image, and the distance from the second line segment to the upper edge of the detection image is larger than the distance from the first line segment to the upper edge of the detection image; the length of the third line segment and the length of the fourth line segment are equal to the height value of the detection image, the distance from the third line segment to the left edge of the detection image and the distance from the fourth line segment to the left edge of the detection image are determined according to the width value of the detection image, and the distance from the fourth line segment to the left edge of the detection image is larger than the distance from the third line segment to the left edge of the detection image; specifically, the region above the first line segment and below the upper edge of the detection image is extracted as the first edge region, the region below the second line segment and above the lower edge of the detection image is extracted as the second edge region, the region to the left of the third line segment and to the right of the left edge of the detection image is extracted as the third edge region, and the region to the right of the fourth line segment and to the left of the right edge of the detection image is extracted as the fourth edge region.
In one embodiment of the present application, the binarization unit 930 is specifically configured to calculate, for each pixel point in the suspected defect area, a distance from each pixel point to a corresponding screen contour; and taking the pixel points with the distance larger than the preset distance threshold value as key pixel points.
In one embodiment of the present application, the binarization unit 930 is specifically configured to obtain a gray value of a key pixel, compare the gray value of the key pixel with a binarization threshold, and for different edge regions, the size of a suspected defect region in the edge region is the same or different, and the binarization threshold corresponding to the suspected defect region is the same or different; setting the gray value of the key pixel with the gray value larger than or equal to the binarization threshold value as a first value, and setting the gray value of the key pixel with the gray value smaller than the binarization threshold value as a second value, so as to obtain the binarization result of the key pixel.
In one embodiment of the present application, the binarization unit 930 is specifically configured to count the number of key pixel points with the gray value being the first value, and take the counted number as the detection value of the area index of the defect area; or determining the corresponding pixel point of the target pixel point according to the position of the key pixel point on the detection image and the binarization result of the key pixel point, calculating a gray average value by the gray value of each corresponding pixel point, and calculating the detection value of the contrast index according to the gray average value and the gray value of each corresponding pixel point, wherein the target pixel point is the key pixel point with the gray value of the first value after binarization processing.
In one embodiment of the present application, the determining unit 940 is configured to compare the detected value of the area index of the defect area with the area index of the defect area, and determine that the suspected defect area is an edge defect area if the detected value of the area index of the defect area meets the area index of the defect area; or comparing the detection value of the contrast index with the contrast index, and if the detection value of the contrast index meets the contrast index, determining that the suspected defect area is an edge defect area; or comparing the detection value of the area index of the defect area with the area index of the defect area respectively, comparing the detection value of the contrast index with the contrast index, and determining that the suspected defect area is an edge defect area if the detection value of the area index of the defect area meets the area index of the defect area and the detection value of the contrast index meets the contrast index.
In one embodiment of the present application, the image acquisition unit 910 is specifically configured to perform binarization processing on an original image, and convert the original image into a binary image; searching a screen contour on the binary image, correspondingly drawing the screen contour on the original image according to the searched screen contour, and determining the minimum circumscribed rectangle of the screen contour; and determining an effective area on the original image based on the minimum circumscribed rectangle, and extracting the effective area to obtain a detection image.
It should be noted that, the above-mentioned screen detection apparatus can implement the steps of the screen detection method performed by the head-mounted display device provided in the foregoing embodiment, and the relevant explanation about the screen detection method is applicable to the screen detection apparatus, which is not repeated herein.
It should be noted that fig. 10 illustrates a schematic structure of the head-mounted display device. Referring to fig. 10, at the hardware level, the head-mounted display device includes a processor, and optionally an internal bus, a network interface, and a memory. The memory may include an internal memory, such as a random-access memory (RAM), and may further include a non-volatile memory, such as at least one disk memory. Of course, the head-mounted display device may also include hardware required for other services.
The processor, network interface, and memory may be interconnected by an internal bus, which may be an ISA (Industry Standard Architecture) bus, a PCI (Peripheral Component Interconnect) bus, or an EISA (Extended Industry Standard Architecture) bus, among others. The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one bi-directional arrow is shown in FIG. 10, but this does not mean that there is only one bus or one type of bus.
And the memory is used for storing programs. In particular, the program may include program code including computer-operating instructions. The memory may include memory and non-volatile storage and provide instructions and data to the processor.
The processor reads the corresponding computer program from the non-volatile memory into the memory and then runs it, forming the screen detection apparatus at the logical level. The processor is used for executing the programs stored in the memory and is specifically used for executing the following operations:
acquiring an original image of a screen to be detected, and removing interference information in the original image to obtain a detection image containing the outline of the screen;
extracting one or more edge areas of the detection image according to the screen contour;
detecting and determining the positions of lines and line center points in the edge area by using a line extraction algorithm, taking a screen contour as a boundary, and selecting an area block containing a plurality of pixel points where the line center points are located as a suspected defect area when the positions of the line center points are within the screen contour;
selecting a key pixel point from the suspected defect area, binarizing the key pixel point based on a binarization threshold value, and obtaining a detection value of a defect index according to a binarization result of the key pixel point;
And comparing the detection value of the defect index with the corresponding defect index to obtain a detection result.
The functions performed by the screen detection apparatus disclosed in the embodiment of fig. 9 of the present application may be applied to a processor or implemented by a processor. The processor may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be performed by integrated logic circuits of hardware in a processor or by instructions in the form of software. The processor may be a general-purpose processor, including a central processing unit (Central Processing Unit, CPU), a network processor (Network Processor, NP), etc.; but also digital signal processors (Digital Signal Processor, DSP), application specific integrated circuits (Application Specific Integrated Circuit, ASIC), field programmable gate arrays (Field-Programmable Gate Array, FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components. The disclosed methods, steps, and logic blocks in the embodiments of the present application may be implemented or performed. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like. The steps of the method disclosed in connection with the embodiments of the present application may be embodied directly in the execution of a hardware decoding processor, or in the execution of a combination of hardware and software modules in a decoding processor. The software modules may be located in a random access memory, flash memory, read only memory, programmable read only memory, or electrically erasable programmable memory, registers, etc. as well known in the art. The storage medium is located in a memory, and the processor reads the information in the memory and, in combination with its hardware, performs the steps of the above method.
The head-mounted display device may further execute the steps executed by the screen detection method in fig. 1, and implement the functions of the screen detection method in the embodiment shown in fig. 1, which is not described herein.
The embodiment of the present application also proposes a computer-readable storage medium storing one or more programs, the one or more programs including instructions, which when executed by a head-mounted display device including a plurality of application programs, enable the head-mounted display device to perform the screen detection method in the embodiment shown in fig. 1, and specifically for performing:
acquiring an original image of a screen to be detected, and removing interference information in the original image to obtain a detection image containing the outline of the screen; extracting one or more edge areas of the detection image according to the screen contour; detecting and determining the positions of lines and line center points in the edge area by using a line extraction algorithm, taking a screen contour as a boundary, and selecting an area block containing a plurality of pixel points where the line center points are located as a suspected defect area when the positions of the line center points are within the screen contour; selecting a key pixel point from the suspected defect area, binarizing the key pixel point based on a binarization threshold value, and obtaining a detection value of a defect index according to a binarization result of the key pixel point; and comparing the detection value of the defect index with the corresponding defect index to obtain a detection result.
It will be appreciated by those skilled in the art that embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In one typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory. The memory may include volatile memory in a computer-readable medium, random Access Memory (RAM) and/or nonvolatile memory, such as Read Only Memory (ROM) or flash memory (flash RAM). Memory is an example of computer-readable media.
Computer readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of storage media for a computer include, but are not limited to, phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read only memory (ROM), electrically erasable programmable read only memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium, which can be used to store information that can be accessed by a computing device. Computer-readable media, as defined herein, do not include transitory computer-readable media (transmission media), such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other like elements in a process, method, article or apparatus that comprises the element.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
The foregoing is merely exemplary of the present application and is not intended to limit the present application. Various modifications and variations of the present application will be apparent to those skilled in the art. Any modification, equivalent replacement, improvement, etc. which come within the spirit and principles of the application are to be included in the scope of the claims of the present application.

Claims (9)

1. A screen detection method, comprising:
acquiring an original image of a screen to be detected, and removing interference information in the original image to obtain a detection image containing the outline of the screen;
extracting one or more edge areas of the detection image according to the screen contour;
detecting and determining the positions of lines and line center points in the edge area by using a line extraction algorithm, taking a screen contour as a boundary, and selecting an area block containing a plurality of pixel points where the line center points are located as a suspected defect area when the positions of the line center points are within the screen contour;
Selecting a key pixel point from the suspected defect area, binarizing the key pixel point based on a binarization threshold value, and obtaining a detection value of a defect index according to a binarization result of the key pixel point;
comparing the detection value of the defect index with the corresponding defect index to obtain a detection result;
wherein selecting the key pixel points from the suspected defect area comprises:
for each pixel point in the suspected defect area, calculating the distance from the pixel point to the corresponding screen contour; and taking the pixel points whose distance is larger than a preset distance threshold value as the key pixel points.
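For illustration only, the following is a minimal sketch of the key-pixel selection step of claim 1, assuming OpenCV and NumPy are available; the function name and the default distance threshold are hypothetical and not part of the claimed method.

```python
import cv2

def select_key_pixels(region_pixels, screen_contour, dist_thresh=3.0):
    """Keep only pixels of a suspected defect area that lie clearly inside
    the screen contour (signed distance to the contour > dist_thresh).

    region_pixels:  iterable of (x, y) coordinates in the suspected defect area
    screen_contour: contour as returned by cv2.findContours, shape (N, 1, 2)
    """
    key_pixels = []
    for (x, y) in region_pixels:
        # pointPolygonTest with measureDist=True returns the signed distance
        # to the contour: positive inside, negative outside, zero on the edge.
        d = cv2.pointPolygonTest(screen_contour, (float(x), float(y)), True)
        if d > dist_thresh:
            key_pixels.append((x, y))
    return key_pixels
```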
2. The method of claim 1, wherein extracting one or more edge regions of the detection image according to a screen contour comprises:
selecting a first line segment, a second line segment, a third line segment and a fourth line segment in the detection image according to the screen contour; the length of the first line segment and the length of the second line segment are equal to the width value of the detection image, the distance from the first line segment to the upper edge of the detection image and the distance from the second line segment to the upper edge of the detection image are determined according to the height value of the detection image, and the distance from the second line segment to the upper edge of the detection image is larger than the distance from the first line segment to the upper edge of the detection image; the length of the third line segment and the length of the fourth line segment are equal to the height value of the detection image, the distance from the third line segment to the left edge of the detection image and the distance from the fourth line segment to the left edge of the detection image are determined according to the width value of the detection image, and the distance from the fourth line segment to the left edge of the detection image is larger than the distance from the third line segment to the left edge of the detection image;
extracting the four edge regions of the detection image according to the four line segments; specifically, the region between the first line segment and the upper edge of the detection image is extracted as the first edge region, the region between the second line segment and the lower edge of the detection image is extracted as the second edge region, the region between the left edge of the detection image and the third line segment is extracted as the third edge region, and the region between the fourth line segment and the right edge of the detection image is extracted as the fourth edge region.
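As a reading aid, a sketch of how the four edge regions of claim 2 could be cut out of a grayscale detection image; the fractions used to place the four line segments are assumptions, the claim only requires them to be derived from the image height and width.

```python
def extract_edge_regions(detect_img, top_frac=0.1, left_frac=0.1):
    """Split a detection image (H x W array) into four edge strips.

    The first/second line segments are horizontal lines placed as a fraction
    of the image height; the third/fourth are vertical lines placed as a
    fraction of the image width.
    """
    h, w = detect_img.shape[:2]
    y1 = int(h * top_frac)          # first line segment (near the upper edge)
    y2 = int(h * (1 - top_frac))    # second line segment (near the lower edge)
    x1 = int(w * left_frac)         # third line segment (near the left edge)
    x2 = int(w * (1 - left_frac))   # fourth line segment (near the right edge)

    first_region  = detect_img[0:y1, :]   # between upper edge and first segment
    second_region = detect_img[y2:h, :]   # between second segment and lower edge
    third_region  = detect_img[:, 0:x1]   # between left edge and third segment
    fourth_region = detect_img[:, x2:w]   # between fourth segment and right edge
    return first_region, second_region, third_region, fourth_region
```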
3. The method of claim 1, wherein binarizing key pixel points within the suspected defect region based on a binarization threshold comprises:
acquiring gray values of the key pixel points; comparing the gray value of each key pixel point with the binarization threshold value, wherein, for different edge areas, the sizes of the suspected defect areas in the edge areas may be the same or different, and the binarization threshold values corresponding to the suspected defect areas may be the same or different;
setting the gray value of the key pixel point with the gray value larger than or equal to the binarization threshold value as a first value, and setting the gray value of the key pixel point with the gray value smaller than the binarization threshold value as a second value to obtain a binarization result of the key pixel point.
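A hedged sketch of the binarization of claim 3, assuming 8-bit gray values; the concrete first/second values (255 and 0) are illustrative assumptions, and the per-region threshold is simply supplied by the caller.

```python
FIRST_VALUE, SECOND_VALUE = 255, 0   # assumed binary values

def binarize_key_pixels(gray_img, key_pixels, bin_thresh):
    """Return {(x, y): FIRST_VALUE or SECOND_VALUE} for the key pixel points.

    bin_thresh may differ per edge region or per suspected defect area,
    as allowed by the claim.
    """
    result = {}
    for (x, y) in key_pixels:
        gray = int(gray_img[y, x])
        result[(x, y)] = FIRST_VALUE if gray >= bin_thresh else SECOND_VALUE
    return result
```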
4. The method of claim 3, wherein obtaining the detection value of the defect indicator according to the binarization result of the key pixel point comprises:
counting the number of key pixel points whose gray value is the first value, and taking the counted number as the detection value of the area index of the defect area;
or, alternatively,
determining, according to the positions of the key pixel points on the detection image and the binarization results of the key pixel points, the pixel points corresponding to the target pixel points; calculating a gray average value from the gray values of the corresponding pixel points, and calculating the detection value of the contrast index according to the gray average value and the gray value of each corresponding pixel point, wherein a target pixel point is a key pixel point whose gray value is the first value after the binarization processing.
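For orientation only, a sketch of the two detection values of claim 4. The contrast formula used here (largest relative deviation from the gray mean) is one plausible reading and is an assumption; the claim does not fix the exact expression.

```python
import numpy as np

def defect_area_value(bin_result, first_value=255):
    """Detection value of the area index: count of key pixels whose
    binarized gray value equals the first value."""
    return sum(1 for v in bin_result.values() if v == first_value)

def contrast_value(gray_img, bin_result, first_value=255):
    """Detection value of the contrast index, computed from the detection-image
    pixels corresponding to the target pixel points."""
    targets = [(x, y) for (x, y), v in bin_result.items() if v == first_value]
    if not targets:
        return 0.0
    grays = np.array([int(gray_img[y, x]) for (x, y) in targets], dtype=np.float64)
    mean = grays.mean()
    # assumed contrast measure: largest relative deviation from the mean
    return float(np.max(np.abs(grays - mean)) / (mean + 1e-6))
```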
5. The method of claim 4, wherein comparing the detected value of the defect indicator with the corresponding defect indicator to obtain a detection result comprises:
comparing the detection value of the area index of the defect area with the area index of the defect area, and determining that the suspected defect area is an edge defect area if the detection value of the area index of the defect area meets the area index of the defect area;
or, alternatively,
comparing the detection value of the contrast index with the contrast index, and if the detection value of the contrast index meets the contrast index, determining that the suspected defect area is an edge defect area;
or, alternatively,
comparing the detection value of the area index of the defect area with the area index of the defect area and comparing the detection value of the contrast index with the contrast index, and if the detection value of the area index of the defect area satisfies the area index of the defect area and the detection value of the contrast index satisfies the contrast index, determining that the suspected defect area is an edge defect area.
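A small sketch of the comparison step of claim 5; treating "satisfies" as a simple greater-than-or-equal comparison against the configured index is an assumption made only for illustration.

```python
def is_edge_defect(area_value=None, contrast_value=None,
                   area_index=None, contrast_index=None):
    """Decide whether a suspected defect area is an edge defect area.

    Any of the three claim variants can be expressed by passing only the
    corresponding pair(s) of detection value and defect index.
    """
    checks = []
    if area_value is not None and area_index is not None:
        checks.append(area_value >= area_index)
    if contrast_value is not None and contrast_index is not None:
        checks.append(contrast_value >= contrast_index)
    return bool(checks) and all(checks)
```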
6. The method of any one of claims 1-5, wherein removing the interference information from the original image to obtain a detected image including a screen contour comprises:
performing binarization processing on the original image and converting the original image into a binary image;
searching a screen contour on the binary image, correspondingly drawing the screen contour on the original image according to the searched screen contour, and determining the minimum circumscribed rectangle of the screen contour;
and determining an effective area on the original image based on the minimum circumscribed rectangle, and extracting the effective area to obtain the detection image.
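An illustrative sketch of the preprocessing of claim 6 with OpenCV. The fixed threshold, the choice of the largest contour as the screen contour, and the use of the upright bounding rectangle (rather than a rotated minimum-area rectangle) are assumptions.

```python
import cv2

def extract_detection_image(original_bgr, thresh=30):
    """Binarize the original image, find the screen contour, and crop the
    effective area bounded by its minimum circumscribed rectangle."""
    gray = cv2.cvtColor(original_bgr, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY)

    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    screen_contour = max(contours, key=cv2.contourArea)  # assume the screen is the largest blob

    # draw the found contour back onto a copy of the original image
    marked = original_bgr.copy()
    cv2.drawContours(marked, [screen_contour], -1, (0, 255, 0), 2)

    # the bounding rectangle of the contour defines the effective area
    x, y, w, h = cv2.boundingRect(screen_contour)
    detection_image = original_bgr[y:y + h, x:x + w]
    return detection_image, screen_contour
```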
7. A screen inspection device, comprising:
the image acquisition unit is used for acquiring an original image of a screen to be detected, removing interference information in the original image and obtaining a detection image containing the outline of the screen;
the area determining unit is used for extracting one or more edge regions of the detection image according to the screen contour; detecting lines in an edge area by using a line extraction algorithm and determining the positions of the lines and of the line center points; taking the screen contour as a boundary and, when the position of a line center point is within the screen contour, selecting an area block containing a plurality of pixel points where that line center point is located as a suspected defect area;
the binarization unit is used for selecting key pixel points from the suspected defect area, binarizing the key pixel points based on a binarization threshold value, and obtaining a detection value of the defect index according to the binarization result of the key pixel points; the binarization unit is specifically used for calculating, for each pixel point in the suspected defect area, the distance from the pixel point to the corresponding screen contour, and for taking the pixel points whose distance is larger than a preset distance threshold value as the key pixel points;
and the judging unit is used for comparing the detection value of the defect index with the corresponding defect index to obtain a detection result.
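As a structural illustration of the apparatus of claim 7, a sketch of how the four units could be composed into one pipeline. The class name is hypothetical, the helper functions are those from the earlier sketches, and find_suspected_defect_areas stands in for the line-extraction step and is likewise hypothetical.

```python
import cv2

class ScreenInspectionDevice:
    """Composes the image acquisition, area determining, binarization and
    judging units of claim 7 (illustrative composition only)."""

    def __init__(self, dist_thresh, bin_thresh, area_index, contrast_index):
        self.dist_thresh = dist_thresh
        self.bin_thresh = bin_thresh
        self.area_index = area_index
        self.contrast_index = contrast_index

    def detect(self, original_bgr):
        # image acquisition unit: original image -> detection image + screen contour
        detect_img, contour = extract_detection_image(original_bgr)
        gray = cv2.cvtColor(detect_img, cv2.COLOR_BGR2GRAY)

        results = []
        # area determining unit: edge regions -> suspected defect areas (hypothetical helper)
        for region_pixels in find_suspected_defect_areas(gray, contour):
            # binarization unit: key pixels -> detection values of the defect indices
            key = select_key_pixels(region_pixels, contour, self.dist_thresh)
            bin_result = binarize_key_pixels(gray, key, self.bin_thresh)
            area_val = defect_area_value(bin_result)
            contr_val = contrast_value(gray, bin_result)
            # judging unit: compare detection values with the defect indices
            results.append(is_edge_defect(area_val, contr_val,
                                          self.area_index, self.contrast_index))
        return results
```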
8. The apparatus of claim 7, wherein,
the area determining unit is specifically configured to select a first line segment, a second line segment, a third line segment and a fourth line segment in the detection image according to the screen contour; the length of the first line segment and the length of the second line segment are equal to the width value of the detection image, the distance from the first line segment to the upper edge of the detection image and the distance from the second line segment to the upper edge of the detection image are determined according to the height value of the detection image, and the distance from the second line segment to the upper edge of the detection image is larger than the distance from the first line segment to the upper edge of the detection image; the length of the third line segment and the length of the fourth line segment are equal to the height value of the detection image, the distance from the third line segment to the left edge of the detection image and the distance from the fourth line segment to the left edge of the detection image are determined according to the width value of the detection image, and the distance from the fourth line segment to the left edge of the detection image is larger than the distance from the third line segment to the left edge of the detection image;
and to extract the four edge regions of the detection image according to the four line segments; specifically, the region between the first line segment and the upper edge of the detection image is extracted as the first edge region, the region between the second line segment and the lower edge of the detection image is extracted as the second edge region, the region between the left edge of the detection image and the third line segment is extracted as the third edge region, and the region between the fourth line segment and the right edge of the detection image is extracted as the fourth edge region.
9. A head-mounted display device, characterized in that it comprises the screen detection apparatus according to claim 7 or 8.
CN202010463900.5A 2020-05-27 2020-05-27 Screen detection method and device and head-mounted display equipment Active CN111627009B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010463900.5A CN111627009B (en) 2020-05-27 2020-05-27 Screen detection method and device and head-mounted display equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010463900.5A CN111627009B (en) 2020-05-27 2020-05-27 Screen detection method and device and head-mounted display equipment

Publications (2)

Publication Number Publication Date
CN111627009A CN111627009A (en) 2020-09-04
CN111627009B true CN111627009B (en) 2023-10-20

Family

ID=72271313

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010463900.5A Active CN111627009B (en) 2020-05-27 2020-05-27 Screen detection method and device and head-mounted display equipment

Country Status (1)

Country Link
CN (1) CN111627009B (en)

Families Citing this family (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111932571B (en) * 2020-09-25 2021-01-22 歌尔股份有限公司 Image boundary identification method and device and computer readable storage medium
CN112348773B (en) * 2020-09-28 2022-09-23 歌尔股份有限公司 Screen defect detection method and device and electronic equipment
CN113485658A (en) * 2021-06-11 2021-10-08 合肥联宝信息技术有限公司 Screen boundary switching method and device, storage medium and electronic equipment
CN113643233A (en) * 2021-07-01 2021-11-12 深圳市格灵精睿视觉有限公司 Oily coating detection method, system and equipment and computer readable storage medium
CN113533375A (en) * 2021-08-26 2021-10-22 惠州市特创电子科技股份有限公司 Forward and reverse scanning modeling detection method for printed circuit board
CN113870256B (en) * 2021-12-01 2022-02-11 成都数之联科技有限公司 PCB defect evaluation method, device, equipment and medium
CN113870754B (en) * 2021-12-03 2022-03-11 中导光电设备股份有限公司 Method and system for judging defects of panel detection electronic signals
CN113870266B (en) * 2021-12-03 2022-03-11 中导光电设备股份有限公司 Method and system for judging authenticity of line defect based on TFT-LCD
CN115359039B (en) * 2022-08-08 2023-04-14 海的电子科技(苏州)有限公司 Target picture generation method and device and display screen maintenance pixel point identification method
CN115187593B (en) * 2022-09-08 2022-12-16 苏州华兴源创科技股份有限公司 Screen defect detection method and device
CN115937109A (en) * 2022-11-17 2023-04-07 创新奇智(上海)科技有限公司 Silicon wafer defect detection method and device, electronic equipment and storage medium
CN116071347B (en) * 2023-03-01 2023-07-28 山西戴德测控技术股份有限公司 Wear degree determination method, device, system and storage medium
CN116049863B (en) * 2023-03-28 2023-06-13 东莞锐视光电科技有限公司 System, method and application for generating stripe light by LCD display screen
CN116051564B (en) * 2023-04-02 2023-06-16 广东仁懋电子有限公司 Chip packaging defect detection method and system
CN117314826A (en) * 2023-08-28 2023-12-29 广州千筱母婴用品有限公司 Performance detection method of display screen
CN116894939B (en) * 2023-09-11 2024-01-09 深圳精智达技术股份有限公司 Regional positioning method and device for special-shaped screen, electronic equipment and storage medium
CN117036350B (en) * 2023-10-08 2023-12-15 保定来福汽车照明集团沧州有限公司 Defect detection method, device, terminal and storage medium for metal lamp holder welding mud
CN117351011B (en) * 2023-12-04 2024-03-12 歌尔股份有限公司 Screen defect detection method, apparatus, and readable storage medium

Citations (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002365718A (en) * 2001-06-04 2002-12-18 Hitachi Ltd Multidisplay device and regulating method for the same
CN1928535A (en) * 2006-09-07 2007-03-14 哈尔滨工业大学 Machine vision based LCD spot flaw detection method and system
JP2007172397A (en) * 2005-12-22 2007-07-05 Seiko Epson Corp Edge gradient detection method, stain defect detection method, edge gradient detection device and stain defect detection device
CN104502359A (en) * 2014-12-10 2015-04-08 天津普达软件技术有限公司 Method for accurately detecting bottle blank and bottle opening defects
CN106408583A (en) * 2016-08-25 2017-02-15 凌云光技术集团有限责任公司 Multi-edge defect detecting method and device
CN107255641A (en) * 2017-06-06 2017-10-17 西安理工大学 A kind of method that Machine Vision Detection is carried out for GRIN Lens surface defect
CN107330433A (en) * 2017-05-17 2017-11-07 北京捷通华声科技股份有限公司 Image processing method and device
CN108230321A (en) * 2018-01-19 2018-06-29 深圳市亿图视觉自动化技术有限公司 Defect inspection method and device
CN108460799A (en) * 2018-01-26 2018-08-28 中国地质大学(武汉) A kind of Step wise approximation sub-pix image position method and system
CN108680343A (en) * 2018-05-22 2018-10-19 歌尔股份有限公司 A kind of flexible screen detection method and detection device
CN109360203A (en) * 2018-10-30 2019-02-19 京东方科技集团股份有限公司 Method for registering images, image registration device and storage medium
CN110018169A (en) * 2019-04-10 2019-07-16 珠海格力智能装备有限公司 A kind of detection method and device, storage medium and the processor of silk-screen lines
KR102031102B1 (en) * 2019-05-28 2019-10-11 (주)케이테크놀로지 Inspection method for corner seal line of flat-panel display cell
CN110445921A (en) * 2019-09-06 2019-11-12 东北大学 A kind of Mobile phone screen backlight foreign matter defect diagnostic method and device based on machine vision
CN110533657A (en) * 2019-09-04 2019-12-03 苏州精速智能科技有限公司 A kind of liquid crystal display appearance detecting method
CN110766684A (en) * 2019-10-30 2020-02-07 江南大学 Stator surface defect detection system and detection method based on machine vision
CN110907457A (en) * 2019-12-19 2020-03-24 长安大学 Aggregate morphological feature detection system and method based on 3D point cloud data
CN110986865A (en) * 2019-12-25 2020-04-10 苏州伟信奥图智能科技有限公司 Ring defect detection method

Patent Citations (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002365718A (en) * 2001-06-04 2002-12-18 Hitachi Ltd Multidisplay device and regulating method for the same
JP2007172397A (en) * 2005-12-22 2007-07-05 Seiko Epson Corp Edge gradient detection method, stain defect detection method, edge gradient detection device and stain defect detection device
CN1928535A (en) * 2006-09-07 2007-03-14 哈尔滨工业大学 Machine vision based LCD spot flaw detection method and system
CN104502359A (en) * 2014-12-10 2015-04-08 天津普达软件技术有限公司 Method for accurately detecting bottle blank and bottle opening defects
CN106408583A (en) * 2016-08-25 2017-02-15 凌云光技术集团有限责任公司 Multi-edge defect detecting method and device
CN107330433A (en) * 2017-05-17 2017-11-07 北京捷通华声科技股份有限公司 Image processing method and device
CN107255641A (en) * 2017-06-06 2017-10-17 西安理工大学 A kind of method that Machine Vision Detection is carried out for GRIN Lens surface defect
CN108230321A (en) * 2018-01-19 2018-06-29 深圳市亿图视觉自动化技术有限公司 Defect inspection method and device
CN108460799A (en) * 2018-01-26 2018-08-28 中国地质大学(武汉) A kind of Step wise approximation sub-pix image position method and system
CN108680343A (en) * 2018-05-22 2018-10-19 歌尔股份有限公司 A kind of flexible screen detection method and detection device
CN109360203A (en) * 2018-10-30 2019-02-19 京东方科技集团股份有限公司 Method for registering images, image registration device and storage medium
CN110018169A (en) * 2019-04-10 2019-07-16 珠海格力智能装备有限公司 A kind of detection method and device, storage medium and the processor of silk-screen lines
KR102031102B1 (en) * 2019-05-28 2019-10-11 (주)케이테크놀로지 Inspection method for corner seal line of flat-panel display cell
CN110533657A (en) * 2019-09-04 2019-12-03 苏州精速智能科技有限公司 A kind of liquid crystal display appearance detecting method
CN110445921A (en) * 2019-09-06 2019-11-12 东北大学 A kind of Mobile phone screen backlight foreign matter defect diagnostic method and device based on machine vision
CN110766684A (en) * 2019-10-30 2020-02-07 江南大学 Stator surface defect detection system and detection method based on machine vision
CN110907457A (en) * 2019-12-19 2020-03-24 长安大学 Aggregate morphological feature detection system and method based on 3D point cloud data
CN110986865A (en) * 2019-12-25 2020-04-10 苏州伟信奥图智能科技有限公司 Ring defect detection method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Scale Insensitive and Focus Driven Mobile Screen Defect Detection in Industry; Jie Lei et al; Neurocomputing; 20180717; pp. 72-81 *
Real-time detection of mobile phone screen image defects; Liu Bo; Computer Engineering and Applications; 20110127; Vol. 46, No. 28; pp. 77-79 *

Also Published As

Publication number Publication date
CN111627009A (en) 2020-09-04

Similar Documents

Publication Publication Date Title
CN111627009B (en) Screen detection method and device and head-mounted display equipment
CN111612781B (en) Screen defect detection method and device and head-mounted display equipment
WO2022062812A1 (en) Screen defect detection method, apparatus, and electronic device
CN111862195B (en) Light spot detection method and device, terminal and storage medium
US9311533B2 (en) Device and method for detecting the presence of a logo in a picture
CN111523438B (en) Living body identification method, terminal equipment and electronic equipment
CN111882520B (en) Screen defect detection method and device and head-mounted display equipment
CN113160161B (en) Method and device for detecting defects at edge of target
CN110197180B (en) Character defect detection method, device and equipment
CN111046862B (en) Character segmentation method, device and computer readable storage medium
CN111325717B (en) Mobile phone defect position identification method and equipment
US7333656B2 (en) Image processing method and image processing apparatus
CN109191516B (en) Rotation correction method and device of structured light module and readable storage medium
TWI514327B (en) Method and system for object detection and tracking
CN111079793A (en) Icon similarity determining method and electronic equipment
CN108734161B (en) Method, device and equipment for identifying prefix number area and storage medium
CN112634201A (en) Target detection method and device and electronic equipment
CN113034447A (en) Edge defect detection method and device
CN106204616B (en) Method and device for identifying currency value of Iran paper money
WO2024016686A1 (en) Corner detection method and apparatus
CN115222653B (en) Test method and device
CN112511725B (en) Automatic identification method and device for endoscope ring, storage medium and terminal
CN113378847A (en) Character segmentation method, system, computer device and storage medium
CN117351011B (en) Screen defect detection method, apparatus, and readable storage medium
WO2015102945A1 (en) Image processing-based method and apparatus

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20221123

Address after: 261061 workshop 1, phase III, Geer Photoelectric Industrial Park, 3999 Huixian Road, Yongchun community, Qingchi street, high tech Zone, Weifang City, Shandong Province

Applicant after: GoerTek Optical Technology Co.,Ltd.

Address before: 261031 No. 268 Dongfang Road, hi tech Industrial Development Zone, Shandong, Weifang

Applicant before: GOERTEK Inc.

GR01 Patent grant
GR01 Patent grant