CN117710346A - Method, device and system for extracting area for detecting display module


Info

Publication number: CN117710346A
Application number: CN202311771008.3A
Authority: CN (China)
Legal status: Pending
Original language: Chinese (zh)
Inventor: 熊朝威
Applicant and current assignee: Shenzhen Shenshunxin Technology Co., Ltd.
Prior art keywords: pixels, area, pixel, color value, interval
Classification: Image Analysis

Abstract

The invention relates to the field of computers, and in particular to a method, a device and a system for extracting a region for detecting a display module. The method first roughly delimits the boundary line of the display area and the boundary line of the region outside it by pixel comparison; it then analyzes the color value differences across each type of section of the two boundary lines, namely the overlapping sections, crossing sections and spacing sections, one by one, with the analysis going down to the level of individual pixels; finally, the line segments determined by the analysis are connected to obtain the required boundary line of the display area. This makes the finally determined boundary line highly accurate, and when products of different specifications are swapped in, no segmentation area needs to be redefined within the camera's field of view, which improves the efficiency of display area extraction and hence of display module detection.

Description

Method, device and system for extracting area for detecting display module
Technical Field
The present invention relates to the field of computers, and in particular, to a method, an apparatus, and a system for extracting a region for detecting a display module.
Background
The display module is formed by stacking multiple layers of structures, and the display effect of a finished module must be inspected after production so that display defects can be found in advance.
The existing detection approach mainly photographs the display module while it is displaying and performs image analysis on the display area within the image. Before analysis, the local image containing the display area must be segmented from the full image to ensure the analysis is effective. The current segmentation approach assumes the relative positions of the camera and the display module are fixed, and directly cuts out a preset area within the camera's field of view as the display area. However, when a different product is loaded, the size of the display module changes, so the segmentation area must be redefined, which is inconvenient for sampling inspection across multiple products. Moreover, this approach depends heavily on the movement accuracy of the camera and the positioning accuracy of the jig; with so many variables, it is difficult to guarantee segmentation accuracy when a once-defined region is reused repeatedly.
Disclosure of Invention
Accordingly, it is desirable to provide a method, apparatus and system for extracting a region for detecting a display module.
The embodiment of the invention is realized in such a way that the method for extracting the area for detecting the display module comprises the following steps:
s1: the camera is moved to the position above the center of the display module, the display module is controlled to display set color light, and a first image is acquired;
s2: determining each first pixel on the first image according to the color value of the pixel in the first setting area in the first image, and further determining a first boundary line surrounding all the first pixels according to the position of each first pixel;
s3: determining each second pixel on the first image according to the color value of the pixel in the second setting area in the first image, and further determining a second boundary line according to the position of each second pixel, so that each second pixel is positioned outside the second boundary line;
s4: identifying a superposition section, a crossing section and a spacing section of the first boundary line and the second boundary line;
s5: determining a first line segment at the intersection region based on the color value differences of pixels in the intersection region formed by the intersection segments;
s6: determining a second line segment in the interval region according to the color value difference of the pixels in the interval region formed by the interval segments;
s7: taking a boundary line formed by connecting the overlapped section, the first line section and the second line section as a third boundary line;
s8: and dividing the first image along the third boundary line, wherein the obtained area within the third boundary line is the display area of the display module.
In one embodiment, the present invention provides a display module detection area extracting device, including:
the first processing module is used for moving the camera to the position above the center of the display module, controlling the display module to display set color light and collecting a first image;
the second processing module is used for determining each first pixel on the first image according to the color value of the pixel in the first setting area in the first image, and further determining a first boundary line surrounding all the first pixels according to the position of each first pixel;
the third processing module is used for determining each second pixel on the first image according to the color value of the pixel in the second setting area in the first image, and further determining a second boundary line according to the position of each second pixel, so that each second pixel is located outside the second boundary line;
the fourth processing module is used for identifying a superposition section, a crossing section and a spacing section of the first boundary line and the second boundary line;
a fifth processing module for determining a first line segment at the intersection region according to the color value differences of pixels in the intersection region formed by the intersection segments;
a sixth processing module for determining a second line segment in the interval region according to the color value difference of the pixels in the interval region formed by the interval segments;
a seventh processing module, configured to use a boundary line formed by connecting the overlapping section, the first line segment, and the second line segment as a third boundary line;
and the eighth processing module is used for dividing the first image along the third boundary line, and the obtained area within the third boundary line is the display area of the display module.
In one embodiment, the present invention provides a display module inspection area extraction system for extracting an inspection area of a display module, the display module inspection area extraction system including:
the movable camera is used for collecting images of the display module;
and the computer equipment is connected with the movable camera and the display module and is used for executing the region extraction method for detecting the display module.
The invention provides a method, a device and a system for extracting a region for detecting a display module, wherein the method comprises the steps of moving a camera to the position above the center of the display module, controlling the display module to display set color light, and collecting a first image; determining each first pixel on the first image according to the color value of the pixel in the first setting area in the first image, and further determining a first boundary line surrounding all the first pixels according to the position of each first pixel; determining each second pixel on the first image according to the color value of the pixel in the second setting area in the first image, and further determining a second boundary line according to the position of each second pixel, so that each second pixel is positioned outside the second boundary line; identifying a superposition section, a crossing section and a spacing section of the first boundary line and the second boundary line; determining a first line segment at the intersection region based on the color value differences of pixels in the intersection region formed by the intersection segments; determining a second line segment in the interval region according to the color value difference of the pixels in the interval region formed by the interval segments; taking a boundary line formed by connecting the overlapped section, the first line section and the second line section as a third boundary line; dividing the first image along a third boundary line, wherein the obtained area within the third boundary line is the display area of the display module; according to the method, the boundary line of the display area and the boundary line of the area outside the display area can be roughly divided according to the pixel comparison mode, then all types of sections such as the overlapping section, the crossing section and the interval section of the two boundary lines are analyzed one by one, 
and the analysis goes down to the level of individual pixels, so that the finally determined boundary line is highly accurate; in addition, when products of different specifications are swapped in, no reconfiguration is needed and no segmentation area needs to be redefined within the camera's field of view, which improves the efficiency of display area extraction and hence of display module detection.
Drawings
FIG. 1 is a first flowchart of a display module detecting area extracting method according to an embodiment;
FIG. 2 is a second flowchart of a display module detecting area extracting method according to an embodiment;
FIG. 3 is a schematic diagram of image acquisition of a region extraction method for detecting a display module according to an embodiment;
FIG. 4 is a schematic diagram of the arrangement of the second setting regions in the display module detection region extraction method according to one embodiment;
FIG. 5 is a block flow diagram of a display module detecting region extracting device according to an embodiment;
FIG. 6 is a schematic diagram showing the components of a system for extracting a region for detecting a display module according to an embodiment;
FIG. 7 is a block diagram of the internal architecture of a computer device in one embodiment.
Detailed Description
The present invention will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present invention more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
It will be understood that the terms "first," "second," and the like, as used herein, may be used to describe various elements, but these elements are not limited by these terms unless otherwise specified; the terms are only used to distinguish one element from another. For example, a first boundary line could be termed a second boundary line, and similarly a second boundary line could be termed a first boundary line, without departing from the scope of this disclosure.
As shown in fig. 1-2, in one embodiment, a method for extracting a region for detecting a display module is provided, where the method includes:
s1: the camera is moved to the position above the center of the display module, the display module is controlled to display set color light, and a first image is acquired;
s2: determining each first pixel on the first image according to the color value of the pixel in the first setting area in the first image, and further determining a first boundary line surrounding all the first pixels according to the position of each first pixel;
s3: determining each second pixel on the first image according to the color value of the pixel in the second setting area in the first image, and further determining a second boundary line according to the position of each second pixel, so that each second pixel is positioned outside the second boundary line;
s4: identifying a superposition section, a crossing section and a spacing section of the first boundary line and the second boundary line;
s5: determining a first line segment at the intersection region based on the color value differences of pixels in the intersection region formed by the intersection segments;
s6: determining a second line segment in the interval region according to the color value difference of the pixels in the interval region formed by the interval segments;
s7: taking a boundary line formed by connecting the overlapped section, the first line section and the second line section as a third boundary line;
s8: and dividing the first image along the third boundary line, wherein the obtained area within the third boundary line is the display area of the display module.
In this embodiment, the method is executed by a computer device, which may be an independent physical server or terminal, a server cluster formed by multiple physical servers, or a cloud server providing basic cloud computing services such as cloud hosting, cloud databases, cloud storage and CDN. The image of the display module is collected by a movable camera, which can be moved under the control of the computer device to a position above the display area of the display module to collect the first image.
In this embodiment, the center position of the display module need not be the exact center of the display area; it is sufficient that a region of a set size, for example 100×100 pixels, lies in the middle of the display area. The set color light is any one of red light, green light and blue light, all pure color lights; ideally, when a set color light is displayed, a pixel has a color value only on the color channel corresponding to that light. The first image covers the display module and its surroundings, such as the table top on which the module is placed, which may be black or another color that differs strongly from the set color. The first setting area is the area from which the source pixel color value for the first pixels is taken; this source color value serves as a reference, and whether a pixel is a first pixel is judged by comparing its color value against the reference. The first setting area is set within the color light range of the first image: when the camera is above the center of the display module, the module sits in the middle of the first image, so the first pixels determined from the color values in the first setting area are pixels of the display area. The second setting areas are set between the display area and the edges of the first image, so the second pixels determined from them are pixels of the surrounding area, i.e. the background. The first boundary line can therefore be regarded as the approximate outer boundary line of the display area, and the second boundary line as the inner boundary line of the background surrounding it, and the two boundary lines share overlapping line segments, namely the overlapping sections. However, as shown in fig. 3, because the displayed color light scatters, pixels of the first image outside the lit range may also take on the color value of the set color light, and reflections within the camera itself add further disturbance, so the region composed of first pixels and the region composed of second pixels may overlap; a crossing section is the line segment enclosing such an intersection region, and comprises a first crossing line segment belonging to the first boundary line and a second crossing line segment belonging to the second boundary line. Similarly, the first image may contain interval regions whose pixels belong to neither the first pixels nor the second pixels; a spacing section is the line segment enclosing such an interval region, and comprises a first interval line segment belonging to the first boundary line and a second interval line segment belonging to the second boundary line. For each crossing section, the color value differences of the pixels inside the intersection region are compared: since the set color light differs strongly in color value from the surrounding area, the separation line between the pixels with the largest color value differences (the first line segment) can be regarded as the boundary line finally determined for that intersection region. A second line segment is determined in each interval region in the same way; connecting the first line segments, the second line segments and the overlapping sections then yields a highly accurate third boundary line of the display area, and segmenting the first image along it gives an accurate display area image.
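The patent does not specify how the first boundary line is traced around the first pixels. As a minimal sketch, assuming the first pixels are stored as a binary mask, the boundary can be taken as the mask pixels that touch the outside under 4-connectivity (the tracing algorithm here is an assumption, not the patent's procedure):

```python
import numpy as np

def mask_boundary(mask):
    """Pixels of a binary mask that touch the outside of the mask under
    4-connectivity: a crude stand-in for the boundary line that surrounds
    all first pixels. A pixel is interior when all four of its neighbors
    are inside the mask; boundary = mask minus interior."""
    padded = np.pad(mask, 1, constant_values=False)
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                padded[1:-1, :-2] & padded[1:-1, 2:])
    return mask & ~interior
```

For a solid 3×3 block of first pixels, this keeps the 8 ring pixels and drops the single interior one.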
In this application, the boundary line of the display area and the boundary line of the region outside it can first be roughly delimited by pixel comparison; the overlapping sections, crossing sections and spacing sections of the two boundary lines are then analyzed one by one, with the analysis going down to the level of individual pixels, so the finally determined boundary line is highly accurate. In addition, when products of different specifications are swapped in, no reconfiguration is needed and no segmentation area needs to be redefined within the camera's field of view, which improves the efficiency of display area extraction and hence of display module detection.
As a preferred embodiment, the first setting area is located at a set position in the middle of the first image, and the area of the first setting area is a set area;
the determining each first pixel on the first image from the color value of the pixel in the first set area in the first image comprises:
s21: identifying a mode of color values of each pixel in the first setting area to obtain a first mode color value;
s21: determining a first tolerance from the first mode color value;
s22: pixels in the first image having color values within a first tolerance are taken as first pixels.
A plurality of second setting areas are arranged at set positions along the edges of the first image, and the area of each second setting area is the set area;
the determining each second pixel on the first image from the color value of the pixel in the second set area in the first image comprises:
s31: identifying a mode of color values of each pixel in each second setting area to obtain a second mode color value;
s32: determining a second tolerance according to the second color value, wherein the second tolerance is not overlapped with the color value range of the color channel corresponding to the set color light, and the set color light is any one color light of red light, green light and blue light;
s33: and taking pixels with color values within the range of the second tolerance in the first image as second pixels.
In this embodiment, the set area may be the area occupied by a 3×3 block of pixels, while the first setting area in the middle of the image is larger than the set area, for example the area occupied by 100×100 pixels. If the first mode color value is (R(250), G(5), B(5)), a first tolerance of (R(245, 255), G(0, 10), B(0, 10)) may be set, taking the value of each channel of the first mode color value as the median of the corresponding range; the second tolerance is set in the same way as the first tolerance. As shown in fig. 4, the second setting regions may be uniformly arranged in the middle of the peripheral region around the display region, and the second mode color values are taken from the respective second setting regions.
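Steps S21 to S23 (and their S31 to S33 counterparts) amount to taking the mode color of a small reference region and thresholding the whole image around it. A minimal sketch, assuming RGB images as NumPy arrays and a symmetric per-channel tolerance; `half_width` is a hypothetical parameter, not named in the patent:

```python
import numpy as np
from collections import Counter

def mode_color(region):
    """Most frequent (R, G, B) tuple in an H x W x 3 region (step S21/S31)."""
    flat = [tuple(int(c) for c in px) for row in region for px in row]
    return Counter(flat).most_common(1)[0][0]

def tolerance_mask(image, mode_rgb, half_width=5):
    """Mark pixels whose every channel lies within +/- half_width of the
    mode color, i.e. within the tolerance of S22/S23 (half_width assumed)."""
    lo = np.asarray(mode_rgb) - half_width
    hi = np.asarray(mode_rgb) + half_width
    return np.all((image >= lo) & (image <= hi), axis=-1)
```

With the patent's example mode color (250, 5, 5), `half_width=5` reproduces the tolerance (R(245, 255), G(0, 10), B(0, 10)).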
As a preferred embodiment, the intersecting segment comprises a first intersecting line segment belonging to a first boundary line and a second intersecting line segment belonging to a second boundary line; the first intersecting line segment and the second intersecting line segment intersect at two intersecting points; the crossing area is an area surrounded by the first crossing line segment and the second crossing line segment, and is positioned in the first boundary line; the determining the first line segment in the intersection area according to the color value difference of the pixels in the intersection area where the intersection line segment is located comprises:
s51: taking one intersection as a starting intersection, and taking the other intersection as a terminating intersection;
s52: taking two pixels adjacent to the initial intersection point and within the intersection region as basic pixels, wherein a straight line where a dividing line segment between the two basic pixels is located passes through the initial intersection point, and the two basic pixels are located at the first row of pixels of the intersection region;
s53: taking pixels adjacent to the basic pixels in the next row of pixels in the direction of the termination intersection as comparison pixels, wherein the comparison pixels are positioned in the intersection area;
s54: calculating a color value difference between every two adjacent comparison pixels;
s55: taking a separation line segment between two adjacent comparison pixels with the largest color value difference as a first sub-line segment;
s56: and taking two adjacent comparison pixels with the largest color value difference as basic pixels, and executing the steps S53 to S56 until the color value difference comparison of each row of pixels in the intersection area is completed, so as to obtain a plurality of connected first sub-line segments, wherein each first sub-line segment forms a first line segment for connecting the initial intersection point and the termination intersection point.
Calculating the color value difference between any two adjacent comparison pixels includes:
s541: taking a straight line where a separation line segment between two comparison pixels is located as a first straight line, and calculating the average value of color values of all pixels which are adjacent to the first straight line and are in the intersection area;
s542: the mean color difference of the two comparison pixels is calculated according to the following formula:
wherein,for mean color difference, R is the color value of the red channel of the comparison pixel, < >>For the color value corresponding to the red channel in the average value of the color values,/->For comparing the color values of the green channel of the pixel +.>For the color value corresponding to the green channel in the average value of the color values,/->For comparing the color values of the blue channel of the pixel, < >>The color value corresponding to the blue channel in the average value of the color values;
s543: calculating the color value difference between two compared pixels according to the following formula:
-/>
wherein,for colour difference>Color difference for the mean value of the first comparison pixel, < >>The second comparison pixel is the mean color difference.
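The two formulas of S542 and S543 can be sketched directly. The Euclidean form of the mean color difference is a reconstruction, since the formulas in the source are garbled, so treat the exact metric as an assumption:

```python
import math

def mean_color_difference(pixel, mean_rgb):
    """S542: distance between a comparison pixel's color and the mean color
    of the pixels adjacent to the separation line (Euclidean form assumed)."""
    return math.sqrt(sum((c - m) ** 2 for c, m in zip(pixel, mean_rgb)))

def color_value_difference(p1, p2, mean_rgb):
    """S543: color value difference between two comparison pixels, taken as
    the absolute difference of their mean color differences."""
    d1 = mean_color_difference(p1, mean_rgb)
    d2 = mean_color_difference(p2, mean_rgb)
    return abs(d1 - d2)
```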
The interval section comprises a first interval line section belonging to a first boundary line and a second interval line section belonging to a second boundary line; the first interval line segment and the second interval line segment are spaced at two interval points, the interval area is an area surrounded by the first interval line segment and the second interval line segment, and the interval area is outside the first boundary line; the determining the second line segment in the interval region according to the color value difference of the pixels in the interval region where the interval line segment is located comprises:
s61: taking one interval point as a starting interval point, and taking the other interval point as a stopping interval point;
s62: taking two pixels adjacent to the initial interval point and within the interval area as basic pixels, wherein a straight line where a separation line segment between the two basic pixels is located passes through the initial interval point, and the two basic pixels are located at the first row of pixels of the interval area;
s63: taking pixels adjacent to the basic pixels in the next row of pixels in the direction of the termination interval point as comparison pixels, wherein the comparison pixels are positioned in the interval region;
s64: calculating a color value difference between every two adjacent comparison pixels;
s65: taking a separation line segment between two adjacent comparison pixels with the largest color value difference as a second sub-line segment;
s66: taking two adjacent comparison pixels with the largest color value difference as basic pixels, and executing the steps S63 to S66 until the color value difference comparison of each row of pixels in the interval region is completed, so as to obtain a plurality of connected second sub-line segments, wherein each second sub-line segment forms a second line segment for connecting the initial cross point and the final cross point;
wherein, for step S64, calculating the color value difference between any two adjacent comparison pixels includes:
taking the straight line where the separation line segment between the two comparison pixels is located as a second straight line, calculating the average value of the color values of all pixels adjacent to the second straight line and in the present interval region, and executing steps S542 to S543.
In this embodiment, the boundary lines in this application essentially run along separation lines between pixels, so for any crossing point or interval point there are four pixels adjacent to that point, with the point at their shared corner; because the boundary lines diverge or separate at that point, only two of those four pixels lie inside the intersection region or interval region, and those two are taken as the basic pixels for the color value comparison. When the color value difference is calculated row by row, the comparison pixels selected in each row are adjacent to the basic pixels of the previous row, so each row's sub-line segment necessarily connects to the previous row's, which guarantees the continuity of the determined first and second line segments. Furthermore, since the color value difference between the display area and the surrounding area is large, the sub-line segment between the two comparison pixels with the largest color value difference matches the actual boundary, giving higher accuracy.
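The row-by-row tracing of S61 to S66 (and S51 to S56) can be illustrated with a deliberately simplified 1-D stand-in that uses scalar intensities instead of the RGB mean color difference; `trace_separation` and its connectivity rule are illustrative assumptions, not the patent's exact procedure:

```python
def trace_separation(rows, start_col):
    """Trace a separation path row by row. rows is a list of lists of
    scalar intensities; start_col is the index of the gap between the two
    basic pixels in the first row (gap c sits between columns c and c+1).
    In each following row only the gaps adjacent to the previous one are
    considered, which keeps the sub-line segments connected, and the gap
    with the largest intensity jump is chosen."""
    path = [start_col]
    for row in rows[1:]:
        prev = path[-1]
        candidates = [c for c in (prev - 1, prev, prev + 1)
                      if 0 <= c < len(row) - 1]
        best = max(candidates, key=lambda c: abs(row[c + 1] - row[c]))
        path.append(best)
    return path
```

For rows `[[10, 10, 200], [10, 10, 200], [10, 200, 200]]` starting at gap 1, the path follows the bright/dark edge as it shifts left, giving `[1, 1, 0]`.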
As a preferred embodiment, the color light is set to any one of red light, green light, and blue light; the step S7 further comprises:
controlling the display module to display another set color light, collecting a first image, and executing steps S2 to S7; repeating this until third boundary lines corresponding to all three set color lights are obtained;
an intersection area of the three third boundary lines is determined, and a boundary line of the intersection area is taken as a target boundary line.
In this embodiment, after the target boundary line is determined, step S8 segments the first image along the target boundary line; the boundary line determining method of the embodiment can further reduce errors in determining the boundary line, and the obtained target boundary line has higher accuracy.
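Intersecting the three per-color display areas can be sketched as a logical AND of binary masks; returning the bounding box is a crude stand-in for "the boundary line of the intersection area," since the patent does not specify how that boundary is represented:

```python
import numpy as np

def target_region(masks):
    """Intersect the per-color display-area masks (one per set color light)
    and return the intersection together with its bounding box as
    (min_row, max_row, min_col, max_col)."""
    inter = np.logical_and.reduce(masks)
    rows, cols = np.nonzero(inter)
    bbox = (int(rows.min()), int(rows.max()),
            int(cols.min()), int(cols.max()))
    return inter, bbox
```

Pixels that fall outside any one of the three third boundary lines are excluded, which is how the intersection reduces the error of any single color's boundary estimate.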
As shown in fig. 5, in one embodiment, there is provided a region extraction device for detecting a display module, the device including:
the first processing module is used for moving the camera to the position above the center of the display module, controlling the display module to display set color light and collecting a first image;
the second processing module is used for determining each first pixel on the first image according to the color value of the pixel in the first setting area in the first image, and further determining a first boundary line surrounding all the first pixels according to the position of each first pixel;
the third processing module is used for determining each second pixel on the first image according to the color value of the pixel in the second setting area in the first image, and further determining a second boundary line according to the position of each second pixel, so that each second pixel is located outside the second boundary line;
the fourth processing module is used for identifying a superposition section, a crossing section and a spacing section of the first boundary line and the second boundary line;
a fifth processing module for determining a first line segment at the intersection region according to the color value differences of pixels in the intersection region formed by the intersection segments;
a sixth processing module for determining a second line segment in the interval region according to the color value difference of the pixels in the interval region formed by the interval segments;
a seventh processing module, configured to use a boundary line formed by connecting the overlapping section, the first line segment, and the second line segment as a third boundary line;
and the eighth processing module is used for dividing the first image along the third boundary line, and the obtained area within the third boundary line is the display area of the display module.
The process of implementing respective functions by each module in the area extracting device for detecting a display module provided in this embodiment of the present application may refer to the description of the embodiment shown in fig. 1, and will not be repeated here.
As shown in fig. 6, in one embodiment, there is provided a display module inspection area extraction system for extracting an inspection area of a display module, the display module inspection area extraction system including:
the movable camera is used for collecting images of the display module;
and the computer equipment is connected with the movable camera and the display module and is used for executing the region extraction method for detecting the display module.
In this embodiment, under the control of the computer device, the movable camera can be moved to a position above the display area of the display module to collect the first image. Working with the movable camera, the computer device can roughly divide the boundary line of the display area and the boundary line of the area outside the display area by pixel comparison, and then analyze the overlapping sections, crossing sections, spacing sections and other sections of the two boundary lines one by one, down to the level of individual pixel points, so that the finally determined boundary line has high accuracy. In addition, when products of different specifications are swapped in, no resetting is required and the dividing areas do not need to be redefined within the camera's field of view, which improves the extraction efficiency of the display area and thus the detection efficiency of the display module.
FIG. 7 illustrates an internal block diagram of a computer device in one embodiment. As shown in fig. 7, the computer device includes a processor, a memory, a network interface, an input device, and a display screen connected by a system bus. The memory includes a non-volatile storage medium and an internal memory. The non-volatile storage medium of the computer device stores an operating system and may also store a computer program that, when executed by the processor, causes the processor to implement the method for extracting a region for detecting a display module provided by the embodiments of the present invention. The internal memory may also store a computer program that, when executed by the processor, causes the processor to execute the same method. The display screen of the computer device may be a liquid crystal display or an electronic ink display. The input device may be a touch layer covering the display screen, a key, trackball or touch pad arranged on the housing of the computer device, or an external keyboard, touch pad, mouse or the like.
It will be appreciated by those skilled in the art that the structure shown in FIG. 7 is merely a block diagram of some of the structures associated with the present inventive arrangements and is not limiting of the computer device to which the present inventive arrangements may be applied, and that a particular computer device may include more or fewer components than shown, or may combine some of the components, or have a different arrangement of components.
In one embodiment, the area extracting apparatus for detecting a display module provided in the embodiment of the present invention may be implemented as a computer program, and the computer program may be executed on a computer device as shown in fig. 7. The memory of the computer device may store the program modules constituting the display module detection area extracting device, for example, the first processing module, the second processing module, the third processing module, the fourth processing module, the fifth processing module, the sixth processing module, the seventh processing module, and the eighth processing module shown in fig. 5. The computer program constituted by the respective program modules causes the processor to execute the steps in the display module detection area extraction method of the respective embodiments of the present invention described in the present specification.
For example, the computer apparatus shown in fig. 7 may execute step S1 by the first processing module in the display module detection area extracting device shown in fig. 5; the computer equipment can execute the step S2 through the second processing module; the computer equipment can execute the step S3 through a third processing module; the computer equipment can execute the step S4 through a fourth processing module; the computer equipment can execute the step S5 through a fifth processing module; the computer equipment can execute the step S6 through a sixth processing module; the computer equipment can execute the step S7 through a seventh processing module; the computer device may perform step S8 through an eighth processing module.
In one embodiment, a computer device is presented, the computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the following steps when executing the computer program:
s1: moving the camera to a position above the center of the display module, controlling the display module to display the set color light, and collecting a first image;
s2: determining each first pixel on the first image according to the color value of the pixel in the first setting area in the first image, and further determining a first boundary line surrounding all the first pixels according to the position of each first pixel;
s3: determining each second pixel on the first image according to the color value of the pixel in the second setting area in the first image, and further determining a second boundary line according to the position of each second pixel, so that each second pixel is positioned outside the second boundary line;
s4: identifying an overlapping section, a crossing section and a spacing section of the first boundary line and the second boundary line;
s5: determining a first line segment in the intersection region according to the color value differences of the pixels in the intersection region formed by the crossing sections;
s6: determining a second line segment in the interval region according to the color value differences of the pixels in the interval region formed by the spacing sections;
s7: taking the boundary line formed by connecting the overlapping section, the first line segment and the second line segment as a third boundary line;
s8: and dividing the first image along the third boundary line, wherein the obtained area within the third boundary line is the display area of the display module.
In one embodiment, a computer readable storage medium is provided, having a computer program stored thereon, which when executed by a processor causes the processor to perform the steps of:
s1: moving the camera to a position above the center of the display module, controlling the display module to display the set color light, and collecting a first image;
s2: determining each first pixel on the first image according to the color value of the pixel in the first setting area in the first image, and further determining a first boundary line surrounding all the first pixels according to the position of each first pixel;
s3: determining each second pixel on the first image according to the color value of the pixel in the second setting area in the first image, and further determining a second boundary line according to the position of each second pixel, so that each second pixel is positioned outside the second boundary line;
s4: identifying an overlapping section, a crossing section and a spacing section of the first boundary line and the second boundary line;
s5: determining a first line segment in the intersection region according to the color value differences of the pixels in the intersection region formed by the crossing sections;
s6: determining a second line segment in the interval region according to the color value differences of the pixels in the interval region formed by the spacing sections;
s7: taking the boundary line formed by connecting the overlapping section, the first line segment and the second line segment as a third boundary line;
s8: and dividing the first image along the third boundary line, wherein the obtained area within the third boundary line is the display area of the display module.
It should be understood that, although the steps in the flowcharts of the embodiments of the present invention are shown in the order indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated herein, the order of execution of the steps is not strictly limited, and the steps may be performed in other orders. Moreover, at least some of the steps in the various embodiments may include multiple sub-steps or stages, which are not necessarily performed at the same moment but may be performed at different times; these sub-steps or stages need not be performed sequentially, and may be performed in turn or alternately with other steps or with at least a portion of the sub-steps or stages of other steps.
Those skilled in the art will appreciate that all or part of the processes in the methods of the above embodiments may be implemented by a computer program instructing relevant hardware; the program may be stored in a non-volatile computer-readable storage medium and, when executed, may include the processes of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory. The non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM), among others.
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of these technical features are described; however, as long as a combination of technical features involves no contradiction, it should be considered as within the scope of this description.
The foregoing examples illustrate only a few embodiments of the invention and are described in specific detail, but they are not therefore to be construed as limiting the scope of the invention. It should be noted that several variations and modifications can be made by those skilled in the art without departing from the spirit of the invention, and all of these fall within the scope of the invention. Accordingly, the scope of protection of the present invention is to be determined by the appended claims.

Claims (9)

1. The method for extracting the area for detecting the display module is characterized by comprising the following steps:
s1: moving the camera to a position above the center of the display module, controlling the display module to display the set color light, and collecting a first image;
s2: determining each first pixel on the first image according to the color value of the pixel in the first setting area in the first image, and further determining a first boundary line surrounding all the first pixels according to the position of each first pixel;
s3: determining each second pixel on the first image according to the color value of the pixel in the second setting area in the first image, and further determining a second boundary line according to the position of each second pixel, so that each second pixel is positioned outside the second boundary line;
s4: identifying an overlapping section, a crossing section and a spacing section of the first boundary line and the second boundary line;
s5: determining a first line segment in the intersection region according to the color value differences of the pixels in the intersection region formed by the crossing sections;
s6: determining a second line segment in the interval region according to the color value differences of the pixels in the interval region formed by the spacing sections;
s7: taking the boundary line formed by connecting the overlapping section, the first line segment and the second line segment as a third boundary line;
s8: and dividing the first image along the third boundary line, wherein the obtained area within the third boundary line is the display area of the display module.
2. The method of claim 1, wherein the first setting area is located in a set middle region of the first image, and the area of the first setting area is a set size;
the determining each first pixel on the first image from the color value of the pixel in the first set area in the first image comprises:
s21: identifying the mode of the color values of the pixels in the first setting area to obtain a first mode color value;
s22: determining a first tolerance according to the first mode color value;
s23: taking pixels in the first image whose color values fall within the first tolerance as first pixels.
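As an illustrative sketch only — not the patented implementation — steps S21 to S23 can be mimicked in Python with NumPy; the synthetic image, the first-setting-area coordinates, and the tolerance value below are hypothetical:

```python
import numpy as np

def mode_color(region):
    # S21: the mode of the color values — the most frequent RGB triple in the region.
    colors, counts = np.unique(region.reshape(-1, 3), axis=0, return_counts=True)
    return colors[np.argmax(counts)]

def pixels_within_tolerance(image, mode, tol):
    # S22/S23: keep pixels whose color lies within +/- tol of the mode on every channel.
    diff = np.abs(image.astype(int) - mode.astype(int))
    return np.all(diff <= tol, axis=-1)

# Synthetic 8x8 "first image": a green display area lit with the set color light,
# surrounded by a dark non-display border.
img = np.zeros((8, 8, 3), dtype=np.uint8)
img[2:6, 2:6] = (0, 200, 0)
first_setting_area = img[3:5, 3:5]            # middle of the image, set size 2x2
first_pixels = pixels_within_tolerance(img, mode_color(first_setting_area), tol=30)
print(int(first_pixels.sum()))                # 16 first pixels in the lit 4x4 block
```

A first boundary line surrounding all first pixels could then be taken, for example, as the bounding contour of this boolean mask.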
3. The method of claim 2, wherein there are a plurality of second setting areas, each second setting area is located in a set edge region of the first image, and the area of each second setting area is a set size;
the determining each second pixel on the first image from the color value of the pixel in the second set area in the first image comprises:
s31: identifying a mode of color values of each pixel in each second setting area to obtain a second mode color value;
s32: determining a second tolerance according to the second mode color value, wherein the second tolerance does not overlap the color value range of the color channel corresponding to the set color light, and the set color light is any one of red light, green light and blue light;
s33: and taking pixels with color values within the range of the second tolerance in the first image as second pixels.
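One plausible reading of the non-overlap condition in S32, sketched in Python: the second tolerance (a per-channel value range around the second mode color value) is clipped so that, on the channel of the set color light, it cannot cover values the lit display would produce. The concrete ranges and the clipping strategy are assumptions, not taken from the patent:

```python
def tolerance_range(mode_value, tol):
    # Per-channel value range of a tolerance centered on the mode color value.
    return (max(mode_value - tol, 0), min(mode_value + tol, 255))

def avoid_lit_range(rng, lit_rng):
    # S32 (assumed): shrink the second tolerance so it does not overlap the
    # color value range of the channel corresponding to the set color light.
    lo, hi = rng
    lit_lo, lit_hi = lit_rng
    if hi >= lit_lo and lo <= lit_hi:   # the two ranges overlap
        hi = min(hi, lit_lo - 1)        # keep only the part below the lit range
    return (lo, hi)

lit = (150, 255)                                         # e.g. green channel when green light is set
print(avoid_lit_range(tolerance_range(60, 40), lit))     # (20, 100): already disjoint
print(avoid_lit_range(tolerance_range(140, 40), lit))    # (100, 149): clipped to stay below 150
```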
4. The method according to claim 3, characterized in that the crossing section comprises a first crossing line segment belonging to the first boundary line and a second crossing line segment belonging to the second boundary line; the first crossing line segment and the second crossing line segment intersect at two intersection points; the intersection region is the area surrounded by the two crossing line segments and is located within the first boundary line; the determining a first line segment in the intersection region according to the color value differences of the pixels in the intersection region comprises:
s51: taking one intersection point as a starting intersection point, and taking the other as a terminating intersection point;
s52: taking the two pixels that are adjacent to the starting intersection point and within the intersection region as basic pixels, wherein the straight line on which the separation line segment between the two basic pixels lies passes through the starting intersection point, and the two basic pixels are in the first row of pixels of the intersection region;
s53: taking the pixels in the next row of pixels, in the direction of the terminating intersection point, that are adjacent to the basic pixels as comparison pixels, the comparison pixels being located within the intersection region;
s54: calculating the color value difference between every two adjacent comparison pixels;
s55: taking the separation line segment between the two adjacent comparison pixels with the largest color value difference as a first sub-line segment;
s56: taking the two adjacent comparison pixels with the largest color value difference as the new basic pixels, and repeating steps S53 to S55 until the color value difference comparison of every row of pixels in the intersection region is completed, so as to obtain a plurality of connected first sub-line segments, the first sub-line segments together forming a first line segment connecting the starting intersection point and the terminating intersection point.
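The greedy per-row search of S52 to S56 can be illustrated on a toy grayscale grid. The 1-D simplification (one intensity per pixel instead of RGB) and the fixed search window around the previous pick are assumptions made for brevity:

```python
import numpy as np

def trace_first_line(region):
    # Per row, keep the column whose adjacent pixel pair has the largest
    # color value difference; the separation segments between those pairs
    # form the first line segment (sketch of S52-S56).
    cols, col = [], None
    for row in region.astype(int):
        n = len(row)
        # Search window: all pairs in the first row, then pairs adjacent to the last pick.
        lo, hi = (0, n - 2) if col is None else (max(col - 1, 0), min(col + 1, n - 2))
        diffs = [abs(row[c + 1] - row[c]) for c in range(lo, hi + 1)]
        col = lo + int(np.argmax(diffs))
        cols.append(col)   # the boundary passes between columns col and col + 1
    return cols

# Bright display pixels (200) on the left, dark outside pixels (40) on the right.
region = np.array([[200, 200, 40, 40],
                   [200, 200, 200, 40],
                   [200, 200, 40, 40]])
print(trace_first_line(region))   # [1, 2, 1]: the largest jump tracked row by row
```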
5. The method of claim 4, wherein calculating a color value difference between any two adjacent comparison pixels comprises:
s541: taking a straight line where a separation line segment between two comparison pixels is located as a first straight line, and calculating the average value of color values of all pixels which are adjacent to the first straight line and are in the intersection area;
s542: calculating the mean color difference of each of the two comparison pixels according to the following formula:

ΔE = √((R − R̄)² + (G − Ḡ)² + (B − B̄)²)

wherein ΔE is the mean color difference, R is the color value of the red channel of the comparison pixel, R̄ is the red-channel component of the average color value, G is the color value of the green channel of the comparison pixel, Ḡ is the green-channel component of the average color value, B is the color value of the blue channel of the comparison pixel, and B̄ is the blue-channel component of the average color value;
s543: calculating the color value difference between the two comparison pixels according to the following formula:

ΔC = |ΔE₁ − ΔE₂|

wherein ΔC is the color value difference, ΔE₁ is the mean color difference of the first comparison pixel, and ΔE₂ is the mean color difference of the second comparison pixel.
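Assuming the mean color difference of S542 is the Euclidean RGB distance to the average color value, and the color value difference of S543 is the magnitude of the difference between the two comparison pixels' mean color differences, the computation can be checked numerically (the sample pixel values are arbitrary):

```python
import math

def mean_color_difference(pixel, avg):
    # S542: Euclidean distance between a comparison pixel's (R, G, B) and the
    # average color value computed along the first straight line.
    return math.sqrt(sum((p - a) ** 2 for p, a in zip(pixel, avg)))

def color_value_difference(p1, p2, avg):
    # S543: magnitude of the difference between the two mean color differences.
    return abs(mean_color_difference(p1, avg) - mean_color_difference(p2, avg))

avg = (100.0, 100.0, 100.0)                      # hypothetical line-average color
print(mean_color_difference((103, 104, 100), avg))                     # 5.0
print(color_value_difference((103, 104, 100), (100, 100, 100), avg))   # 5.0
```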
6. The method of claim 5, wherein the spacing section comprises a first interval line segment belonging to the first boundary line and a second interval line segment belonging to the second boundary line; the first interval line segment and the second interval line segment are separated at two interval points; the interval region is the area surrounded by the two interval line segments and lies outside the first boundary line; the determining a second line segment in the interval region according to the color value differences of the pixels in the interval region comprises:
s61: taking one interval point as a starting interval point, and taking the other interval point as a stopping interval point;
s62: taking two pixels adjacent to the initial interval point and within the interval area as basic pixels, wherein a straight line where a separation line segment between the two basic pixels is located passes through the initial interval point, and the two basic pixels are located at the first row of pixels of the interval area;
s63: taking pixels adjacent to the basic pixels in the next row of pixels in the direction of the termination interval point as comparison pixels, wherein the comparison pixels are positioned in the interval region;
s64: calculating a color value difference between every two adjacent comparison pixels;
s65: taking a separation line segment between two adjacent comparison pixels with the largest color value difference as a second sub-line segment;
s66: taking the two adjacent comparison pixels with the largest color value difference as the new basic pixels, and repeating steps S63 to S65 until the color value difference comparison of every row of pixels in the interval region is completed, so as to obtain a plurality of connected second sub-line segments, the second sub-line segments together forming a second line segment connecting the starting interval point and the terminating interval point;
wherein, for step S64, calculating the color value difference between any two adjacent comparison pixels includes:
taking the straight line on which the separation line segment between the two comparison pixels lies as a second straight line, calculating the average value of the color values of all pixels adjacent to the second straight line and within the interval region, and then executing steps S542 to S543.
7. The method according to claim 1, wherein the set color light is any one of red light, green light and blue light; after step S7, the method further comprises:
controlling the display module to display another set color light, collecting a first image, and executing steps S2 to S7; repeating this until a third boundary line corresponding to each of the three set color lights is obtained;
determining the intersection area of the three third boundary lines, and taking the boundary line of the intersection area as the target boundary line.
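A sketch of the final step of claim 7, assuming each third boundary line is represented by the boolean mask of the region it encloses (the three masks below are synthetic stand-ins for the red-, green- and blue-light results):

```python
import numpy as np

# Regions enclosed by the three third boundary lines (one per set color light).
red   = np.zeros((6, 6), dtype=bool); red[1:5, 1:5]   = True
green = np.zeros((6, 6), dtype=bool); green[1:5, 2:6] = True
blue  = np.zeros((6, 6), dtype=bool); blue[2:6, 1:5]  = True

# The target boundary line encloses the intersection area of the three regions.
target = red & green & blue
print(int(target.sum()))   # 9: a 3x3 intersection area (rows 2-4, columns 2-4)
```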
8. An area extraction device for detecting a display module, the device comprising:
the first processing module is used for moving the camera to the position above the center of the display module, controlling the display module to display set color light and collecting a first image;
the second processing module is used for determining each first pixel on the first image according to the color value of the pixel in the first setting area in the first image, and further determining a first boundary line surrounding all the first pixels according to the position of each first pixel;
the third processing module is used for determining each second pixel on the first image according to the color value of the pixel in the second setting area in the first image, and further determining a second boundary line according to the position of each second pixel, so that each second pixel is located outside the second boundary line;
the fourth processing module is used for identifying an overlapping section, a crossing section and a spacing section of the first boundary line and the second boundary line;
a fifth processing module for determining a first line segment in the intersection region according to the color value differences of pixels in the intersection region formed by the crossing sections;
a sixth processing module for determining a second line segment in the interval region according to the color value difference of the pixels in the interval region formed by the interval segments;
a seventh processing module, configured to use a boundary line formed by connecting the overlapping section, the first line segment, and the second line segment as a third boundary line;
and the eighth processing module is used for dividing the first image along the third boundary line, and the obtained area within the third boundary line is the display area of the display module.
9. An area extraction system for detecting a display module, characterized in that the system is used for extracting the detection area of a display module and comprises:
the movable camera is used for collecting images of the display module;
computer device, connected to the movable camera and the display module, for executing the display module detection area extraction method according to any one of claims 1 to 7.
CN202311771008.3A 2023-12-21 2023-12-21 Method, device and system for extracting area for detecting display module Pending CN117710346A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311771008.3A CN117710346A (en) 2023-12-21 2023-12-21 Method, device and system for extracting area for detecting display module

Publications (1)

Publication Number Publication Date
CN117710346A true CN117710346A (en) 2024-03-15

Family

ID=90156753




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination