CN116797915A - Detection method, detection device and storage medium - Google Patents

Detection method, detection device and storage medium

Info

Publication number
CN116797915A
CN116797915A
Authority
CN
China
Prior art keywords
image
determining
detected
sub
gray
Prior art date
Legal status
Pending
Application number
CN202310118601.1A
Other languages
Chinese (zh)
Inventor
王晓玮 (Wang Xiaowei)
Current Assignee
Beijing Xiaomi Mobile Software Co Ltd
Original Assignee
Beijing Xiaomi Mobile Software Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Xiaomi Mobile Software Co Ltd
Priority to CN202310118601.1A
Publication of CN116797915A

Landscapes

  • Image Analysis (AREA)

Abstract

The present disclosure relates to a detection method, a detection apparatus, and a storage medium, the method including: acquiring a second image of the object to be detected based on the acquired first image; the second image is an image area which at least contains the object to be detected in the first image; determining a target application scene where the object to be detected is currently located according to the second image; determining target detection parameters of the object to be detected based on the second image and the target application scene; wherein, the target detection parameters of the object to be detected under different target application scenes are different; and if the target detection parameters meet preset detection conditions, determining that the object to be detected in the target application scene is qualified in detection.

Description

Detection method, detection device and storage medium
Technical Field
The disclosure relates to the technical field of color detection, and in particular relates to a detection method, a detection device and a storage medium.
Background
Machine vision technology uses a machine instead of human eyes and brains to perform various measurements and judgments. Color detection and recognition are important applications of machine vision technology: different colors are recognized through a certain algorithm by means of the color characteristic differences of the surfaces of the objects to be detected.
At present, the application of color detection and identification methods based on machine vision technology on production lines is still at an early stage. Since multiple production links in a production line may involve color detection, the actual detection environment is complex, so the detection complexity is high, the detection error is large, and it is difficult to meet the detection requirements of multiple different production links in the production line.
Disclosure of Invention
To overcome the problems in the related art, the present disclosure provides a detection method, a detection apparatus, and a storage medium.
According to a first aspect of embodiments of the present disclosure, there is provided a detection method, including:
acquiring a second image of the object to be detected based on the acquired first image; the second image is an image area which at least contains the object to be detected in the first image;
determining a target application scene where the object to be detected is currently located according to the second image;
determining target detection parameters of the object to be detected based on the second image and the target application scene; wherein, the target detection parameters of the object to be detected under different target application scenes are different;
and if the target detection parameters meet preset detection conditions, determining that the object to be detected in the target application scene is qualified in detection.
Optionally, the determining, according to the second image, the current target application scene of the object to be detected includes:
dividing the second image into a plurality of image areas, and determining a gray average value of each image area;
determining a maximum gray average value and a minimum gray average value in the plurality of image areas based on the gray average values of the plurality of image areas;
determining the gray level range of the second image according to the maximum gray level average value and the minimum gray level average value;
determining a current target application scene of the object to be detected based on the gray level range of the second image; wherein, the application scenes corresponding to different gray level ranges are different.
Optionally, the determining, based on the second image and the target application scene, a target detection parameter of the object to be detected includes:
if the target application scene is a first scene, determining color similarity between the second image and the plurality of color template images based on the second image and the plurality of color template images;
determining color parameters of the object to be detected based on the color similarity between the second image and the plurality of color template images; the color parameters are used for determining target color configuration information of the object to be detected;
and determining color detection parameters of at least one material to be detected of the object to be detected based on the gray value of the at least one material to be detected of the object to be detected in the second image and the target color configuration information.
Optionally, the color detection parameters include at least: gray average, gray range and color similarity;
the determining the color detection parameter of the at least one material to be detected of the object to be detected based on the gray value of the at least one material to be detected of the object to be detected in the second image and the target color configuration information includes:
acquiring at least one third image from the second image; the third image is an image area where the material to be detected is located in the second image;
acquiring a target color image of at least one material to be detected in the object to be detected based on the target color configuration information of the object to be detected;
determining the gray average value and the gray range of the material to be detected in the at least one third image based on the at least one third image;
and determining the color similarity between the material to be measured in the at least one third image and the target color image based on the at least one third image and the target color image of the material to be measured in the third image.
Optionally, the determining, based on the second image and the plurality of color template images, a color similarity between the second image and the plurality of color template images includes:
acquiring channel gray scale average values of the second image and the plurality of color template images on three channels of a color model;
determining a mean square error value between the second image and each of the plurality of color template images based on the three channel gray scale averages of the second image and the three channel gray scale averages of each of the plurality of color template images;
normalizing the mean square error value between the second image and each of the plurality of color template images;
and respectively determining the color similarity between the second image and each color template image in the plurality of color template images based on the plurality of mean square error values after normalization processing.
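As a concrete illustration of the mean-square-error comparison above, here is a minimal NumPy sketch (not part of the disclosure); the function names are illustrative, and normalizing by the largest computed MSE is an assumption, since the claim only requires that the values be normalized.

```python
import numpy as np

def channel_means(img):
    # Average gray value on each of the three channels of the color model (H, W, 3).
    return img.reshape(-1, 3).mean(axis=0)

def color_similarities(second_img, template_imgs):
    """Inverted, normalized MSE between the channel means of the second image
    and each color template image; larger values mean more similar colors."""
    ref = channel_means(second_img)
    mses = np.array([((ref - channel_means(t)) ** 2).mean() for t in template_imgs])
    peak = mses.max()
    return 1.0 - (mses / peak if peak > 0 else mses)
```

The color parameter of the object to be detected would then be taken from the template image at the index returned by np.argmax of these similarities.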
Optionally, the determining, based on the second image and the target application scene, a target detection parameter of the object to be detected includes:
if the target application scene is a second scene, determining a target threshold of the second image;
based on the target threshold, performing binarization processing on the second image to obtain a binarized image of the second image;
performing contour detection on the binarized image to obtain a target image contour of the object to be detected in the binarized image and a coverage area of the target image contour in the binarized image;
acquiring a fourth image from the second image based on the coverage area; the fourth image is an image area which is overlapped with the coverage area in the second image;
and determining the illumination brightness parameter of the object to be detected based on the fourth image.
Optionally, the determining the target threshold of the second image includes:
determining a gray histogram corresponding to the second image based on the gray value of each pixel point in the second image;
determining a first class of gray levels comprising at least two gray levels based on the gray histogram of the second image; wherein the gray histogram includes: a first class of gray levels and a second class of gray levels; the frequency corresponding to any gray level in the first class of gray levels is greater than the frequency corresponding to any gray level in the second class of gray levels;
determining a target gray level with the maximum gray value based on the gray values corresponding to the at least two gray levels in the first class of gray levels;
and determining the target threshold value of the second image according to the difference value between the gray value corresponding to the target gray level and a preset threshold value.
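A plausible reading of this thresholding step, sketched in Python for 8-bit gray images; the size of the first class and the preset offset are illustrative defaults, since the claim fixes neither.

```python
import numpy as np

def target_threshold(gray_img, first_class_size=10, preset_offset=30):
    """Back off from the brightest of the most frequent gray levels by a
    preset value to obtain the binarization threshold."""
    hist = np.bincount(gray_img.ravel(), minlength=256)   # gray histogram
    first_class = np.argsort(hist)[-first_class_size:]    # highest-frequency levels
    target_level = int(first_class.max())                 # max gray value among them
    return max(target_level - preset_offset, 0)
```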
Optionally, the performing contour detection on the binarized image to obtain a target image contour of the object to be measured in the binarized image and a coverage area of the target image contour in the binarized image, including:
performing morphological operation processing on the binarized image to obtain a processed image;
performing contour detection on the processed image to determine a plurality of contour images in the processed image;
determining the outline of the target image with the largest outline area based on the outline areas of the plurality of outline images;
determining the coverage area of the target image contour within the processed image.
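An OpenCV sketch of the binarization, morphology and largest-contour steps described above; the opening kernel and the bounding-rectangle reading of "coverage area" are assumptions.

```python
import cv2

def target_contour_region(gray_img, thresh):
    # Binarize the second image with the scene-specific target threshold.
    _, binary = cv2.threshold(gray_img, thresh, 255, cv2.THRESH_BINARY)
    # Morphological opening suppresses small bright noise before the contour search.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    cleaned = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)
    # The contour with the largest area is taken as the target image contour.
    contours, _ = cv2.findContours(cleaned, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    target = max(contours, key=cv2.contourArea)
    return target, cv2.boundingRect(target)  # contour and coverage area (x, y, w, h)
```

The fourth image would then be the crop of the second image at the returned rectangle, and the illumination brightness parameter could be, for example, its mean gray value.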
Optionally, the determining, based on the second image and the target application scene, a target detection parameter of the object to be detected includes:
if the target application scene is a third scene, determining a spot center point of a spot image formed by the object to be detected in the second image based on brightness components of pixel points in the second image;
acquiring at least two fifth images from within the second image based on the spot center point of the spot image; wherein, the angles of view corresponding to any two fifth images in the at least two fifth images are different;
and determining defect detection parameters of the object to be detected in any one of the at least two fifth images based on brightness components and/or chromaticity components of a plurality of image areas of any one of the at least two fifth images.
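The claim does not fix how the spot center point is obtained from the brightness components; a luminance-weighted centroid is one common choice, sketched below with hypothetical crop sizes standing in for the differently sized fifth images.

```python
import cv2
import numpy as np

def spot_center(bgr_img):
    # Centroid of the luminance plane (Y of YCrCb) as the spot center point.
    y = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2YCrCb)[:, :, 0]
    m = cv2.moments(y.astype(np.float64))
    return int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])

def fifth_images(second_img, half_widths=(100, 200)):
    # Crops of different sizes (different angles of view) around the spot center.
    cx, cy = spot_center(second_img)
    return [second_img[max(cy - r, 0):cy + r, max(cx - r, 0):cx + r]
            for r in half_widths]
```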
Optionally, the defect detection parameter includes: uniformity defect parameters;
the determining, based on the brightness components and/or the chromaticity components of the plurality of image areas of any one of the at least two fifth images, defect detection parameters of the object to be detected in any one of the at least two fifth images includes:
dividing any one of the at least two fifth images into a plurality of image areas; wherein the plurality of image areas includes at least: a center image area and a corner image area;
respectively determining average brightness and average chromaticity corresponding to the center image area and the corner image area in any fifth image based on brightness components and chromaticity components of pixel points in the center image area and the corner image area of any fifth image;
determining the brightness uniformity and the chromaticity uniformity of the corner image area based on the ratio of the average brightness and the average chromaticity of the corner image area to the average brightness and the average chromaticity of the center image area;
determining the brightness uniformity and the chromaticity uniformity of the object to be detected in any fifth image according to the brightness uniformity and the chromaticity uniformity of the corner image area;
and determining the uniformity defect parameter of the object to be detected based on the average brightness of the central image area of any one of the at least two fifth images, the brightness uniformity range and the chromaticity uniformity range of the object to be detected.
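A sketch of the uniformity computation; the region layout (central third versus quarter-sized corner blocks) and the use of YCrCb as the luminance/chromaticity space are assumptions not fixed by the claim.

```python
import cv2
import numpy as np

def uniformity_params(fifth_img):
    """Average luminance of the center area, plus the spread of the
    corner-to-center luminance and chromaticity ratios."""
    img = cv2.cvtColor(fifth_img, cv2.COLOR_BGR2YCrCb).astype(np.float64)
    h, w = img.shape[:2]

    def lum_chroma(region):
        return region[..., 0].mean(), region[..., 1:].mean()

    c_lum, c_chr = lum_chroma(img[h // 3:2 * h // 3, w // 3:2 * w // 3])
    corners = [img[:h // 4, :w // 4], img[:h // 4, -(w // 4):],
               img[-(h // 4):, :w // 4], img[-(h // 4):, -(w // 4):]]
    lum_ratios = [lum_chroma(c)[0] / c_lum for c in corners]
    chr_ratios = [lum_chroma(c)[1] / c_chr for c in corners]
    # Uniformity "ranges" summarized as the spread across the four corners.
    return c_lum, max(lum_ratios) - min(lum_ratios), max(chr_ratios) - min(chr_ratios)
```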
Optionally, the defect detection parameter includes: black spot defect parameters;
the determining, based on the brightness components and/or the chromaticity components of the plurality of image areas of any one of the at least two fifth images, defect detection parameters of the object to be detected in any one of the at least two fifth images includes:
dividing any one of the at least two fifth images into four first image areas with the same area based on the light spot center point;
dividing any one of the four first image areas into a plurality of sub-image areas which are not overlapped with each other;
determining a first luminance difference between a first sub-image region and an adjacent second sub-image region within the plurality of sub-image regions based on the luminance components of the plurality of sub-image regions; the first sub-image area is any sub-image area of the plurality of sub-image areas, and the distance between the second sub-image area and the light spot center point is larger than the distance between the adjacent first sub-image area and the light spot center point;
and determining the black spot defect parameter of the object to be detected based on a plurality of first brightness differences of a first image area in any one of the at least two fifth images.
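One way to realize the black-spot screening described above (the quadrant grid size and the minimum-difference summary are assumptions); the key idea is that a sub-region nearer the spot center should not be darker than its outward neighbor.

```python
import numpy as np

def black_spot_parameter(y_plane, center, grid=4):
    """Most negative inner-minus-outer brightness difference across the four
    quadrants; assumes the spot center lies well inside the image."""
    cx, cy = center
    # Orient each quadrant so that cell (0, 0) is the one nearest the center.
    quadrants = [y_plane[:cy, :cx][::-1, ::-1], y_plane[:cy, cx:][::-1, :],
                 y_plane[cy:, :cx][:, ::-1], y_plane[cy:, cx:]]
    worst = 0.0
    for q in quadrants:
        ch, cw = q.shape[0] // grid, q.shape[1] // grid
        cells = np.array([[q[r * ch:(r + 1) * ch, c * cw:(c + 1) * cw].mean()
                           for c in range(grid)] for r in range(grid)])
        # First brightness differences: inner cell minus its outward neighbor
        # (normally >= 0 for a spot that fades away from the center).
        diffs = np.concatenate([(cells[:, :-1] - cells[:, 1:]).ravel(),
                                (cells[:-1, :] - cells[1:, :]).ravel()])
        worst = min(worst, diffs.min())
    return worst  # strongly negative => an inner cell darker than an outer one
```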
Optionally, the defect detection parameter includes: black edge defect parameters;
the determining, based on the brightness components and/or the chromaticity components of the plurality of image areas of any one of the at least two fifth images, defect detection parameters of the object to be detected in any one of the at least two fifth images includes:
dividing any one of the at least two fifth images into two second image areas with the same area based on the light spot center point;
uniformly dividing the two second image areas into a plurality of sub-image areas along the transverse direction or the longitudinal direction;
determining a second luminance difference between a third sub-image region and a fourth sub-image region of the plurality of sub-image regions based on the luminance components of the plurality of sub-image regions; the third sub-image area and the fourth sub-image area are two adjacent sub-image areas in the same second image area, and the distance between the fourth sub-image area and the light spot center point is larger than the distance between the third sub-image area and the light spot center point;
determining a third luminance difference between the third sub-image region and a fifth sub-image region based on the luminance components of the plurality of sub-image regions; the third sub-image area and the fifth sub-image area are respectively two sub-image areas in different second image areas, and the distance between the third sub-image area and the light spot center point is the same as the distance between the fifth sub-image area and the light spot center point;
and determining the black edge defect parameter of the object to be detected based on a plurality of second brightness differences and a plurality of third brightness differences of a second image area in any one of the at least two fifth images.
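A matching sketch for the black-edge parameters, splitting the image into left and right halves at the spot center and comparing brightness strips; the strip count and the max-based summaries are assumptions.

```python
import numpy as np

def black_edge_parameters(y_plane, center, strips=8):
    cx = center[0]
    left = y_plane[:, :cx][:, ::-1]  # strip 0 is the one nearest the spot center
    right = y_plane[:, cx:]

    def strip_means(half):
        w = half.shape[1] // strips
        return np.array([half[:, i * w:(i + 1) * w].mean() for i in range(strips)])

    lm, rm = strip_means(left), strip_means(right)
    # Second brightness differences: adjacent strips within one half (inner minus outer).
    second = np.concatenate([lm[:-1] - lm[1:], rm[:-1] - rm[1:]])
    # Third brightness differences: strips at the same distance in the two halves.
    third = np.abs(lm - rm)
    return second.max(), third.max()
```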
Optionally, the acquiring, based on the acquired first image, a second image of the object to be detected includes:
acquiring a template image of the object to be detected, and determining the position information of the object to be detected in the first image based on the template image;
acquiring interception position information of an image interception window in the first image, and determining position offset information according to the interception position information of the image interception window and the position information of the object to be detected;
based on the position offset information, adjusting the position of the image capturing window;
and acquiring a second image of the object to be detected from the first image by utilizing the adjusted image intercepting window.
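An OpenCV sketch of this window adjustment; normalized cross-correlation template matching is one possible way to locate the object from its template image, not something the claim prescribes.

```python
import cv2

def capture_second_image(first_img, template, window_xy):
    # Locate the object to be detected in the first image via template matching.
    scores = cv2.matchTemplate(first_img, template, cv2.TM_CCOEFF_NORMED)
    _, _, _, found_xy = cv2.minMaxLoc(scores)        # best-match top-left corner
    # Position offset between the capture window and the detected object.
    dx, dy = found_xy[0] - window_xy[0], found_xy[1] - window_xy[1]
    x, y = window_xy[0] + dx, window_xy[1] + dy      # adjusted window position
    th, tw = template.shape[:2]
    return first_img[y:y + th, x:x + tw]             # the second image
```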
According to a second aspect of embodiments of the present disclosure, there is provided a detection apparatus comprising:
the acquisition module is used for acquiring a second image of the object to be detected based on the acquired first image; the second image is an image area which at least contains the object to be detected in the first image;
the first determining module is used for determining a current target application scene of the object to be detected according to the second image;
the second determining module is used for determining target detection parameters of the object to be detected based on the second image and the target application scene; wherein, the target detection parameters of the object to be detected under different target application scenes are different;
and the detection module is used for determining that the object to be detected in the target application scene is qualified in detection if the target detection parameter meets a preset detection condition.
Optionally, the first determining module is configured to:
dividing the second image into a plurality of image areas, and determining a gray average value of each image area;
determining a maximum gray average value and a minimum gray average value in the plurality of image areas based on the gray average values of the plurality of image areas;
determining the gray level range of the second image according to the maximum gray level average value and the minimum gray level average value;
determining a current target application scene of the object to be detected based on the gray level range of the second image; wherein, the application scenes corresponding to different gray level ranges are different.
Optionally, the second determining module is configured to:
if the target application scene is a first scene, determining color similarity between the second image and the plurality of color template images based on the second image and the plurality of color template images;
determining color parameters of the object to be detected based on the color similarity between the second image and the plurality of color template images; the color parameters are used for determining target color configuration information of the object to be detected;
and determining color detection parameters of at least one material to be detected of the object to be detected based on the gray value of the at least one material to be detected of the object to be detected in the second image and the target color configuration information.
Optionally, the color detection parameters include at least: gray average, gray range and color similarity;
the second determining module is configured to:
acquiring at least one third image from the second image; the third image is an image area where the material to be detected is located in the second image;
acquiring a target color image of at least one material to be detected in the object to be detected based on the target color configuration information of the object to be detected;
determining the gray average value and the gray range of the material to be detected in the at least one third image based on the at least one third image;
and determining the color similarity between the material to be measured in the at least one third image and the target color image based on the at least one third image and the target color image of the material to be measured in the third image.
Optionally, the second determining module is configured to:
acquiring channel gray scale average values of the second image and the plurality of color template images on three channels of a color model;
determining a mean square error value between the second image and each of the plurality of color template images based on the three channel gray scale averages of the second image and the three channel gray scale averages of each of the plurality of color template images;
normalizing the mean square error value between the second image and each of the plurality of color template images;
and respectively determining the color similarity between the second image and each color template image in the plurality of color template images based on the plurality of mean square error values after normalization processing.
Optionally, the second determining module is configured to:
if the target application scene is a second scene, determining a target threshold of the second image;
based on the target threshold, performing binarization processing on the second image to obtain a binarized image of the second image;
performing contour detection on the binarized image to obtain a target image contour of the object to be detected in the binarized image and a coverage area of the target image contour in the binarized image;
acquiring a fourth image from the second image based on the coverage area; the fourth image is an image area which is overlapped with the coverage area in the second image;
and determining the illumination brightness parameter of the object to be detected based on the fourth image.
Optionally, the second determining module is configured to:
determining a gray histogram corresponding to the second image based on the gray value of each pixel point in the second image;
determining a first class of gray levels comprising at least two gray levels based on the gray histogram of the second image; wherein the gray histogram includes: a first class of gray levels and a second class of gray levels; the frequency corresponding to any gray level in the first class of gray levels is greater than the frequency corresponding to any gray level in the second class of gray levels;
determining a target gray level with the maximum gray value based on the gray values corresponding to the at least two gray levels in the first class of gray levels;
and determining the target threshold value of the second image according to the difference value between the gray value corresponding to the target gray level and a preset threshold value.
Optionally, the second determining module is configured to:
performing morphological operation processing on the binarized image to obtain a processed image;
performing contour detection on the processed image to determine a plurality of contour images in the processed image;
determining the outline of the target image with the largest outline area based on the outline areas of the plurality of outline images;
determining the coverage area of the target image contour within the processed image.
Optionally, the second determining module is configured to:
if the target application scene is a third scene, determining a spot center point of a spot image formed by the object to be detected in the second image based on brightness components of pixel points in the second image;
acquiring at least two fifth images from within the second image based on the spot center point of the spot image; wherein, the angles of view corresponding to any two fifth images in the at least two fifth images are different;
and determining defect detection parameters of the object to be detected in any one of the at least two fifth images based on brightness components and/or chromaticity components of a plurality of image areas of any one of the at least two fifth images.
Optionally, the defect detection parameter includes: uniformity defect parameters;
the second determining module is configured to:
dividing any one of the at least two fifth images into a plurality of image areas; wherein the plurality of image areas includes at least: a center image area and a corner image area;
respectively determining average brightness and average chromaticity corresponding to the center image area and the corner image area in any fifth image based on brightness components and chromaticity components of pixel points in the center image area and the corner image area of any fifth image;
determining the brightness uniformity and the chromaticity uniformity of the corner image area based on the ratio of the average brightness and the average chromaticity of the corner image area to the average brightness and the average chromaticity of the center image area;
determining the brightness uniformity and the chromaticity uniformity of the object to be detected in any fifth image according to the brightness uniformity and the chromaticity uniformity of the corner image area;
and determining the uniformity defect parameter of the object to be detected based on the average brightness of the central image area of any one of the at least two fifth images, the brightness uniformity range and the chromaticity uniformity range of the object to be detected.
Optionally, the defect detection parameter includes: black spot defect parameters;
the second determining module is configured to:
dividing any one of the at least two fifth images into four first image areas with the same area based on the light spot center point;
dividing any one of the four first image areas into a plurality of sub-image areas which are not overlapped with each other;
determining a first luminance difference between a first sub-image region and an adjacent second sub-image region within the plurality of sub-image regions based on the luminance components of the plurality of sub-image regions; the first sub-image area is any sub-image area of the plurality of sub-image areas, and the distance between the second sub-image area and the light spot center point is larger than the distance between the adjacent first sub-image area and the light spot center point;
and determining the black spot defect parameter of the object to be detected based on a plurality of first brightness differences of a first image area in any one of the at least two fifth images.
Optionally, the defect detection parameter includes: black edge defect parameters;
the second determining module is configured to:
dividing any one of the at least two fifth images into two second image areas with the same area based on the light spot center point;
uniformly dividing the two second image areas into a plurality of sub-image areas along the transverse direction or the longitudinal direction;
determining a second luminance difference between a third sub-image region and a fourth sub-image region of the plurality of sub-image regions based on the luminance components of the plurality of sub-image regions; the third sub-image area and the fourth sub-image area are two adjacent sub-image areas in the same second image area, and the distance between the fourth sub-image area and the light spot center point is larger than the distance between the third sub-image area and the light spot center point;
determining a third luminance difference between the third sub-image region and a fifth sub-image region based on the luminance components of the plurality of sub-image regions; the third sub-image area and the fifth sub-image area are respectively two sub-image areas in different second image areas, and the distance between the third sub-image area and the light spot center point is the same as the distance between the fifth sub-image area and the light spot center point;
and determining the black edge defect parameter of the object to be detected based on a plurality of second brightness differences and a plurality of third brightness differences of a second image area in any one of the at least two fifth images.
Optionally, the acquiring module is configured to:
acquiring a template image of the object to be detected, and determining the position information of the object to be detected in the first image based on the template image;
acquiring interception position information of an image interception window in the first image, and determining position offset information according to the interception position information of the image interception window and the position information of the object to be detected;
based on the position offset information, adjusting the position of the image capturing window;
and acquiring a second image of the object to be detected from the first image by utilizing the adjusted image intercepting window.
According to a third aspect of embodiments of the present disclosure, there is provided a detection apparatus comprising:
a processor;
a memory for storing executable instructions;
wherein the processor is configured to: when executing the executable instructions stored in the memory, implement the steps in the detection method according to the first aspect of the embodiments of the present disclosure.
According to a fourth aspect of embodiments of the present disclosure, there is provided a non-transitory computer-readable storage medium having stored thereon instructions which, when executed by a processor of a detection apparatus, cause the detection apparatus to perform the steps in the detection method according to the first aspect of embodiments of the present disclosure.
The technical scheme provided by the embodiment of the disclosure can comprise the following beneficial effects:
according to the embodiment of the disclosure, a second image in which an object to be detected is located is obtained in a first image, and a target application scene in which the object to be detected is currently located is determined based on the second image, so that image processing operation corresponding to the target application scene is conveniently executed on the second image, target detection parameters required by the target application scene are obtained from the second image, and whether the object to be detected in the target application scene is detected to be qualified or not is determined according to the target detection parameters and detection thresholds corresponding to the target application scene; therefore, when the object to be detected is in a plurality of different application scenes, different image processing operations can be pertinently performed on the second image containing the object to be detected, so that the detection of the object to be detected can be completed in different application scenes.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
Fig. 1 is a flow chart of a color visual detection method shown in the related art.
Fig. 2 is a first flowchart of a detection method according to an exemplary embodiment.
Fig. 3 is a schematic diagram of a plurality of sub-image areas of a first image area, according to an exemplary embodiment.
Fig. 4 is a second flowchart of a detection method according to an exemplary embodiment.
Fig. 5 is a flowchart illustrating discrimination of a target application scene according to an exemplary embodiment.
Fig. 6 is a flowchart of a method for detecting an object under test in a first scenario according to an exemplary embodiment.
Fig. 7 is a schematic flow chart of color discrimination and material assembly detection of a terminal device in a first scenario according to an exemplary embodiment.
Fig. 8 is a flowchart of a method for detecting an object under test in a second scenario according to an exemplary embodiment.
Fig. 9 is a schematic flow chart of detecting brightness of a light emitting element of a terminal device in a second scenario according to an exemplary embodiment.
Fig. 10 is a flowchart of a method for detecting an object under test in a third scenario according to an exemplary embodiment.
Fig. 11 is a schematic flow chart illustrating defect detection of a light emitting element of a terminal device in a third scenario according to an exemplary embodiment.
Fig. 12 is a schematic structural view of a detection device according to an exemplary embodiment.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary examples do not represent all implementations consistent with the invention. Rather, they are merely examples of apparatus and methods consistent with aspects of the invention as detailed in the accompanying claims.
In the related art, as shown in fig. 1, which is a flow chart of a color visual detection method in the related art, the method is as follows: collecting a test image of an object to be tested, and intercepting the test image area where the object to be tested is located from the test image; respectively determining a color histogram of the test image area and color histograms of a plurality of color template images; and determining the color similarity between the test image area and the plurality of color template images according to the color histogram of the test image area and the color histograms of the plurality of color template images, so as to determine the color information of the test image area based on whether the color similarity meets the threshold requirement.
It should be noted that, since the color histogram only counts the colors in the image and ignores the position information corresponding to different colors in the image, this color visual detection method is only applicable to detection scenes with large color differences and has low detection accuracy.
Moreover, the application of the color visual detection method on the production line is limited to color identification; since a production line contains many production links involving color detection, the method is difficult to adapt to the detection requirements of multiple different production links.
An embodiment of the present disclosure provides a detection method, as shown in fig. 2, which is a schematic flow diagram of a detection method according to an exemplary embodiment. The method comprises the following steps:
step S101, acquiring a second image of an object to be detected based on the acquired first image; the second image is an image area which at least contains the object to be detected in the first image;
step S102, determining a current target application scene of the object to be detected according to the second image;
step S103, determining target detection parameters of the object to be detected based on the second image and the target application scene; wherein, the target detection parameters of the object to be detected under different target application scenes are different;
step S104, if the target detection parameter meets a preset detection condition, determining that the object to be detected in the target application scene is qualified in detection.
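Taken together, steps S101 to S104 amount to the following Python skeleton; all identifiers are illustrative placeholders for the scene-specific processing elaborated below, not names from the disclosure.

```python
def detect(first_image, capture, classify, detectors, thresholds):
    """Skeleton of steps S101-S104 with the scene-specific logic injected."""
    second_image = capture(first_image)          # step S101: crop the object region
    scene = classify(second_image)               # step S102: scene from gray range
    params = detectors[scene](second_image)      # step S103: scene-specific parameters
    return all(value >= thresholds[scene][name]  # step S104: threshold comparison
               for name, value in params.items())
```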
In step S101, a first image may be acquired by using an image acquisition component, an image area where the object to be detected is located is determined from the first image, and the image area is cut from the first image, so as to obtain a second image.
It should be noted that, on the production line, a first image of the current production link may be acquired by using an industrial camera, and a second image of the object to be detected is captured from the first image, so as to detect the object to be detected by using the second image.
It will be appreciated that, on a production line, since the position of the processing apparatus is generally fixed, the object to be detected is moved to the processing position corresponding to the processing apparatus for processing; therefore, an industrial camera can be arranged at the processing position to acquire the first images. Since the position of the object to be detected in each first image is then relatively fixed, the position information of the object to be detected in the first image can be acquired, an image intercepting window can be set based on the position information, and each acquired first image can be intercepted by using the image intercepting window to obtain the second image where the object to be detected is located.
In step S102, image analysis is performed on the second image, and based on the image analysis result, the target application scene where the object to be detected is currently located is determined.
It should be noted that, since the production line may include a plurality of production links related to color detection, the specific application scenarios of the color detection technology may differ between production links; for example, the production line of a terminal device at least includes a packaging segment, an assembly segment and a whole-unit test segment:
in the packaging segment, the color of the terminal device needs to be judged so as to automatically call the corresponding color configuration file; in the assembly segment, it is necessary to detect whether the color of a part (such as a key or a card holder) on the terminal device is the same as the color of that part in the color configuration file of the terminal device, so as to determine whether parts have been mixed; and in the whole-unit test segment, it is necessary to detect whether the brightness of a light emitting component (such as a flash, a power indicator lamp or a front fill light) in the terminal device reaches the standard.
It can be understood that, because the production operations performed on the object to be detected are different in different production links, the placement position and the position to be processed of the object to be detected may differ between production links; therefore, the current target application scene of the object to be detected can be determined based on the pose of the object to be detected in the second image and the background environment information in the second image.
Because the items to be detected for the object to be detected may be different in different target application scenes, the detection parameters required to be acquired from the second image are also different, and the image processing mode executed on the second image is also different. Therefore, after the second image of the object to be detected is obtained, the current target application scene of the object to be detected can be determined based on the second image, so that corresponding image processing can be performed on the second image in the subsequent processing process, and the target detection parameters required by the target application scene can be obtained.
In step S103, after the current target application scene of the object to be detected is determined, corresponding image processing may be performed on the second image based on the detection item corresponding to the target application scene, so as to obtain the target detection parameters of the object to be detected in the target application scene.
It should be noted that, detection items corresponding to different application scenes are different, and image processing operations corresponding to different application scenes are also different, so that, based on the image processing operations corresponding to different application scenes, the obtained target detection parameters are also different after the image processing is performed on the object to be detected.
Because the production line includes a plurality of production links related to color detection, the specific application scenarios of the color detection technology may differ between links, and the target detection parameters obtained by detecting the second image may also differ. In order to make the detection method shown in the embodiment of the present disclosure suitable for multiple different production links related to color detection on the production line, after the target application scene where the object to be detected is currently located is determined, the embodiment of the present disclosure may perform an image processing operation corresponding to the target application scene on the second image, so that the target detection parameters required by the target application scene can be obtained from the second image, and the detection item corresponding to the target application scene can be completed for the object to be detected in the target application scene.
In step S104, after obtaining the target detection parameter of the object to be detected in the target application scene, a detection threshold corresponding to the target application scene may be obtained, the target detection parameter is compared with the detection threshold, and according to the comparison result, it is determined whether the object to be detected meets the preset detection condition, thereby determining whether the object to be detected is qualified.
Here, the detection threshold may be a preset default value; it can be appreciated that, since the target detection parameters of the object to be detected under different application scenarios may be different, the detection threshold values corresponding to the different application scenarios may also be different.
If the target detection parameter of the object to be detected is greater than or equal to the detection threshold corresponding to the target application scene, determining that the object to be detected meets a preset detection condition, wherein the object to be detected in the target application scene is detected to be qualified; if the target detection parameter of the object to be detected is smaller than the detection threshold corresponding to the target application scene, determining that the object to be detected does not meet the preset detection condition, and detecting the object to be detected in the target application scene to be unqualified.
According to the embodiment of the disclosure, a second image in which an object to be detected is located is obtained in a first image, and a target application scene in which the object to be detected is currently located is determined based on the second image, so that image processing operation corresponding to the target application scene is conveniently executed on the second image, target detection parameters required by the target application scene are obtained from the second image, and whether the object to be detected in the target application scene is detected to be qualified or not is determined according to the target detection parameters and detection thresholds corresponding to the target application scene; therefore, when the object to be detected is in a plurality of different application scenes, different image processing operations can be pertinently performed on the second image containing the object to be detected, so that the detection of the object to be detected can be completed in different application scenes.
Optionally, the determining, according to the second image, the current target application scene of the object to be detected includes:
dividing the second image into a plurality of image areas, and determining a gray average value of each image area;
determining a maximum gray average value and a minimum gray average value in the plurality of image areas based on the gray average values of the plurality of image areas;
determining the gray level range of the second image according to the maximum gray level average value and the minimum gray level average value;
determining a current target application scene of the object to be detected based on the gray level range of the second image; wherein, the application scenes corresponding to different gray level ranges are different.
In an embodiment of the disclosure, after the second image is acquired, the second image may be divided into a plurality of image areas; and determining a gray average value of each image region based on gray values of pixel points in each of the plurality of image regions.
Here, the plurality of image areas do not overlap with each other. The image dividing method for dividing the second image into the plurality of image areas may be selected according to actual situations, which is not limited in the embodiments of the present disclosure.
For example, the second image may be traversed using a sliding window of a preset size to obtain a plurality of image areas of the second image; here, the size of the sliding window may be set by the user as required, and may be, for example, a 3×3, 5×5 or other size window.
After the gray average values of the plurality of image areas of the second image are obtained, the maximum gray average value and the minimum gray average value among the plurality of image areas can be determined, and the gray level range of the second image is determined based on the difference between the maximum gray average value and the minimum gray average value.
It will be appreciated that the gray level range may be used to describe the variation range of image brightness, reflecting how sharply the brightness of the image changes.
The current target application scene of the object to be detected can then be determined based on the gray level range of the second image.
It should be noted that each application scene corresponds to a range of gray level range values, and the ranges corresponding to different application scenes do not overlap with each other; the range within which the gray level range of the second image falls can therefore be determined, and the application scene corresponding to that range is determined as the target application scene of the object to be detected.
In some embodiments, the plurality of ranges may include a first range, a second range and a third range, where any gray level range in the first range is smaller than any gray level range in the second range, and any gray level range in the second range is smaller than any gray level range in the third range.
The determining, based on the gray level range of the second image, the current target application scene of the object to be detected includes:
if the gray level range of the second image is within the first range, determining a current target application scene of the object to be detected as a first scene;
if the gray level range of the second image is within the second range, determining that the current target application scene of the object to be detected is a third scene;
and if the gray level range of the second image is within the third range, determining the current target application scene of the object to be detected as a second scene.
It should be noted that, since any gray level range in the first range is smaller than any gray level range in the second range, and any gray level range in the second range is smaller than any gray level range in the third range, the three ranges correspond to increasingly sharp brightness variation of the second image.
If the gray level range of the second image is within the first range, the brightness change of the second image is weak, that is, the brightness distribution of the second image is uniform; therefore, the current target application scene of the object to be detected can be determined to be the first scene. It should be noted that the first scene is a scene of color identification of the object to be detected and detection of the material to be detected in the object to be detected.
If the gray level range of the second image is within the second range, the brightness change of the second image is obvious, that is, the brightness distribution of the second image is uneven; therefore, the current target application scene of the object to be detected can be determined to be the third scene; the third scene is a scene of defect detection of the light emitting element in the object to be detected.
If the gray level range of the second image is within the third range, the brightness change of the second image is severe, so the current target application scene of the object to be detected can be determined to be the second scene; the second scene is a scene of detecting the luminance of the light emitting element in the object to be detected.
According to the embodiment of the disclosure, the second image is divided into a plurality of image areas, the gray level range of the second image is determined from the gray average values of the plurality of image areas, and the target application scene where the object to be detected is located is determined according to the gray level range of the second image. Since the gray level range of the second image reflects the brightness variation of the second image, a target application scene matched with that brightness variation can be determined by comparing it with the brightness variation characteristic of the different application scenes, so that in the subsequent processing process, the image processing operation corresponding to the target application scene can be performed on the second image to acquire the target detection parameters required by the target application scene.
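A compact sketch of this scene discrimination; the block size and the two range boundaries are illustrative, since the disclosure only requires the three ranges to be ordered and non-overlapping.

```python
import numpy as np

def classify_scene(gray_img, block=16, bounds=(40, 120)):
    """Map the gray level range of the second image to a target application scene."""
    h, w = gray_img.shape
    means = np.array([gray_img[r:r + block, c:c + block].mean()
                      for r in range(0, h - block + 1, block)
                      for c in range(0, w - block + 1, block)])
    gray_range = means.max() - means.min()  # max block mean minus min block mean
    if gray_range < bounds[0]:
        return "first"   # uniform brightness: color / material inspection
    if gray_range < bounds[1]:
        return "third"   # uneven brightness: light-spot defect inspection
    return "second"      # drastic brightness change: luminance measurement
```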
Optionally, the determining, based on the second image and the target application scene, a target detection parameter of the object to be detected includes:
if the target application scene is a first scene, determining color similarity between the second image and the plurality of color template images based on the second image and the plurality of color template images;
determining color parameters of the object to be detected based on the color similarity between the second image and the plurality of color template images; the color parameters are used for determining target color configuration information of the object to be detected;
and determining color detection parameters of at least one material to be detected of the object to be detected based on the gray value of the at least one material to be detected of the object to be detected in the second image and the target color configuration information.
In the embodiment of the disclosure, when the target application scene where the object to be detected is located is determined to be a first scene, a plurality of color template images can be obtained;
here, the plurality of color template images respectively indicate different color information of a plurality of terminal devices processed in the production line;
it can be understood that, since the same production line can process a plurality of objects to be measured with different colors at the same time, but the colors of the materials to be mounted of the objects to be measured with different colors may also be different, the colors of the objects to be measured need to be determined before the colors of the materials to be measured of the objects to be measured are detected.
Determining a color similarity between the second image and each of the plurality of color template images based on the second image and the plurality of color template images;
here, the color features of the second image and each of the color template images may be extracted respectively using an image feature extraction model, and a vector distance between the color features of the second image and the color features of each of the color template images may be determined, thereby determining a color similarity between the second image and each of the color template images based on the vector distance between the color features of the second image and the color features of each of the color template images; and determining the color parameter of the color template image with the maximum color similarity as the color parameter of the object to be detected.
It can be understood that, if the vector distance between the color feature of the second image and the color feature of the color template image is larger, the color similarity between the second image and the color template image is smaller, so that the difference between the color parameter of the object to be detected in the second image and the color parameter indicated by the color template image is larger; and if the vector distance between the color features of the second image and the color features of the color template image is smaller, the color similarity between the second image and the color template image is larger, and the color parameters of the object to be detected in the second image are more similar to the color parameters indicated by the color template image.
It should be noted that, the color similarity between the second image and the color template image may also be determined by other manners, which is not limited by the embodiments of the present disclosure.
After the color parameter of the object to be detected is determined, target color configuration information matched with the color parameter is determined based on the color parameter;
it is understood that a plurality of pieces of color configuration information may be stored in advance, and the color configuration information may include: a tone range of a material to be installed in the terminal device, a template image of the material to be installed, and/or an installation position of the material to be installed on the terminal device. Moreover, the color configuration information of terminal devices of different colors may be different.
After the target color configuration information of the object to be detected is obtained, the material to be detected of the object to be detected is positioned in the second image based on the template image of the material to be installed in the target color configuration information and/or the installation position of the material to be installed on the terminal device; the gray value of the material to be detected is determined, and the color detection parameter of the material to be detected is determined based on the gray value of the material to be detected and the tone range of the material to be installed in the target color configuration information.
It should be noted that the object to be detected may include a plurality of materials to be detected; the gray value of each material to be detected may be determined respectively, and the color detection parameter of each material to be detected may be determined based on the gray value of that material and the tone range of the material to be installed that matches it in the target color configuration information. The color detection parameter of each material to be detected of the object to be detected is then compared with a preset detection threshold; if the comparison result indicates that the color detection parameters of all the materials to be detected are greater than or equal to the preset detection threshold, it is determined that the object to be detected is qualified in detection;
if the comparison result indicates that the color detection parameter of at least one of the materials to be detected is smaller than the preset detection threshold, it can be determined that the object to be detected is unqualified in detection.
After determining that a target application scene where the object to be detected is located is a first scene, determining color similarity between a second image where the object to be detected is located and a plurality of color template images, and accordingly determining color parameters of the object to be detected in the second image according to the color similarity between the second image and each color template image, and obtaining target color configuration information matched with the color parameters; and respectively determining color detection parameters of a plurality of materials to be detected based on the target color configuration information and gray values of the plurality of materials to be detected of the object to be detected in the second image, so as to detect whether the assembly of the plurality of materials to be detected in the object to be detected is qualified or not based on the color detection parameters of the plurality of materials to be detected.
Optionally, the color detection parameters include at least: gray average, gray range and color similarity;
the determining the color detection parameter of the at least one material to be detected of the object to be detected based on the gray value of the at least one material to be detected of the object to be detected in the second image and the target color configuration information includes:
acquiring at least one third image from the second image; the third image is an image area where the material to be detected is located in the second image;
acquiring a target color image of at least one material to be detected in the object to be detected based on the target color configuration information of the object to be detected;
determining the gray average value and the gray range of the material to be detected in the at least one third image based on the at least one third image;
and determining the color similarity between the material to be measured in the at least one third image and the target color image based on the at least one third image and the target color image of the material to be measured in the third image.
In an embodiment of the disclosure, the color detection parameters include at least: gray average, gray range and color similarity;
It should be noted that the gray average value may be used to reflect the brightness of the whole image; the larger the gray average value is, the brighter the image is, and the smaller the gray average value is, the darker the image is.
The color similarity can be the color similarity between the material to be detected and the tone range corresponding to the material to be detected in the target color configuration information.
Here, contour detection may be performed on the second image based on contour information of the plurality of materials to be measured, so as to determine the image area where each material to be measured is located in the second image; the third image corresponding to each material to be measured is obtained by cropping, from the second image, the image area where that material is located.
The gray average value of the material to be measured in the at least one third image is determined according to the gray value of each pixel point in the third image; the third image is further divided into a plurality of non-overlapping image areas, the gray average value of each of these image areas is determined, and the gray range of the third image is determined based on the maximum gray average and the minimum gray average among the plurality of image areas.
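A minimal sketch of these two statistics, assuming the third image is already a gray image and using an arbitrary 4×4 grid of non-overlapping areas (the disclosure does not fix the grid size):

```python
import numpy as np

def gray_mean_and_range(third_gray: np.ndarray, grid: int = 4):
    """Gray average of the whole third image, plus the gray range taken as
    the difference between the largest and smallest block-wise gray averages
    over a non-overlapping grid."""
    mean = float(third_gray.mean())
    h, w = third_gray.shape[:2]
    block_means = [
        third_gray[r * h // grid:(r + 1) * h // grid,
                   c * w // grid:(c + 1) * w // grid].mean()
        for r in range(grid) for c in range(grid)
    ]
    return mean, float(max(block_means) - min(block_means))
```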
The target color image of each material to be detected in the object to be detected can be obtained based on the target color configuration information of the object to be detected; and determining the color similarity between the at least one third image and the target color image based on the at least one third image and the target color image corresponding to the material to be detected in the third image.
It is understood that the target color image of the material to be measured may be used to describe at least a range of hues of the material to be measured.
After the gray average value, the gray range and the color similarity of the material to be detected are obtained, the gray average value can be compared with a preset gray threshold value, and a first comparison result is obtained; comparing the gray level range with a preset range threshold to obtain a second comparison result; and comparing the color similarity with a color similarity threshold value to obtain a third comparison result.
If at least two of the first comparison result, the second comparison result and the third comparison result indicate that the corresponding detection parameter is greater than or equal to its detection threshold, it is determined that the plurality of materials to be measured of the object to be detected are assembled correctly.
If at least two of the first comparison result, the second comparison result and the third comparison result indicate that the corresponding detection parameter is smaller than its detection threshold, it is determined that the assembly of the plurality of materials to be measured of the object to be detected is unqualified, for example a material to be measured is missing or a material of the wrong color is assembled.
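The two-out-of-three decision can be sketched as a simple majority vote; the uniform "greater than or equal to threshold" pass convention follows the text, and all thresholds are application-specific:

```python
def assembly_ok(gray_mean: float, gray_range: float, color_sim: float,
                gray_thr: float, range_thr: float, sim_thr: float) -> bool:
    """Qualified when at least two of the three detection parameters reach
    their thresholds (the >= convention is taken from the text)."""
    votes = [gray_mean >= gray_thr, gray_range >= range_thr, color_sim >= sim_thr]
    return sum(votes) >= 2
```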
According to the embodiment of the disclosure, after the target color configuration information of the object to be tested is obtained, a third image in which at least one material to be tested of the object to be tested is located is obtained from the second image, the gray average value, the gray range and the color similarity of the material to be tested in the third image are determined based on the third image and the target color image corresponding to the material to be tested, and whether the assembly of a plurality of materials to be tested of the object to be tested is qualified or not is determined jointly according to the gray average value, the gray range and the color similarity of the material to be tested in the third image, so that automatic detection of problems such as missed mounting of the material to be tested or color assembly errors of the material to be tested in the object to be tested is facilitated.
Optionally, the determining, based on the second image and the plurality of color template images, a color similarity between the second image and the plurality of color template images includes:
acquiring channel gray scale average values of the second image and the plurality of color template images on three channels of a color model;
determining a mean square error value between the second image and each of the plurality of color template images based on the three channel gray scale averages of the second image and the three channel gray scale averages of each of the plurality of color template images;
Normalizing the mean square error value between the second image and each of the plurality of color template images;
and respectively determining the color similarity between the second image and each color template image in the plurality of color template images based on the plurality of mean square error values after normalization processing.
In the embodiment of the disclosure, the pixel value of each pixel in the second image and the plurality of color template images may be obtained and expressed as a channel gray component of the pixel on three channels of the color model;
the image information of the second image may be acquired, and the channel gray component of each pixel point in the second image on each of the three channels of the color model may be obtained based on that image information.
And determining the channel gray scale average value of the second image on any channel of the three channels according to the channel gray scale component of each pixel point in the second image.
The image information of any one of the plurality of color template images may be acquired respectively, and the channel gray components of each pixel point in that color template image on the three channels of the color model may be obtained based on its image information;
the channel gray average value of that color template image on any one of the three channels is then determined according to the channel gray components of its pixel points on that channel.
Here, the color model includes a red-green-blue (Red, green, blue, RGB) color model, and channel gray components on three channels of the RGB color model are an R-channel gray component, a G-channel gray component, and a B-channel gray component, respectively.
That is, the image information of the second image (or any color template image) may be acquired, and the channel gray components of each of its pixel points on the three channels of the color model, namely the R-channel gray component, the G-channel gray component and the B-channel gray component, may be obtained based on that image information.
Determining an R-channel gray scale mean value of the second image (or any color template image) based on the R-channel gray scale component of each pixel point in the second image (or any color template image);
determining a G-channel gray scale mean value of the second image (or any color template image) based on the G-channel gray scale component of each pixel point in the second image (or any color template image);
A B-channel gray scale mean value of the second image (or any color template image) is determined based on the B-channel gray scale component of each pixel in the second image (or any color template image).
Determining a mean square error value between the second image and each of the color template images based on the three channel gray scale averages of the second image and the three channel gray scale averages of each of the plurality of color template images;
it should be noted that, the mean square error value between the second image and the color template image is an expected value for reflecting the degree of difference between the gray-scale average value of the three channels of the second image and the gray-scale average value of the three channels of the color template image; and if the mean square error value between the second image and the color template image is smaller, the difference between the gray average value of the three channels of the second image and the gray average value of the three channels of the color template image is smaller.
And respectively determining the color similarity between the second image and each color template image in the plurality of color template images by carrying out normalization processing on the mean square error value between the second image and each color template image and based on the normalized plurality of mean square error values.
Here, the color similarity between the second image and each of the plurality of color template images may be determined according to:
S = 1 - MSE / k;
wherein S is the color similarity, and MSE is the mean square error value between the second image and the color template image; k is a normalization coefficient, and k = 3 × 255.
It may be appreciated that, in the embodiment of the present disclosure, the mean square error value of the second image and each color template image is determined directly by using the channel gray average value of the second image and each color template image on three channels of the color model, so that the color similarity between the second image and each color template image in the plurality of color template images is determined according to the mean square error value capable of reflecting the degree of difference between the three channel gray average values of the second image and the three channel gray average values of the color template images, thereby reducing the computational complexity and improving the computational efficiency of the color similarity.
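A minimal sketch of this channel-mean comparison, computing MSE as the mean of the squared differences of the three channel averages and using the normalization coefficient k = 3 × 255 given above (note that S stays within [0, 1] only while MSE ≤ k, so the exact definition of MSE is taken on the text's terms):

```python
import numpy as np

def channel_means(img: np.ndarray) -> np.ndarray:
    """Per-channel gray averages of a 3-channel image, shape (3,)."""
    return img.reshape(-1, 3).mean(axis=0)

def color_similarity(img: np.ndarray, template: np.ndarray,
                     k: float = 3 * 255) -> float:
    """S = 1 - MSE / k, where MSE is taken as the mean of the squared
    differences between the two images' three channel gray averages."""
    diff = channel_means(img) - channel_means(template)
    mse = float(np.mean(diff ** 2))
    return 1.0 - mse / k

def best_template(img: np.ndarray, templates: list) -> int:
    """Index of the color template image with the largest similarity."""
    return int(np.argmax([color_similarity(img, t) for t in templates]))
```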
Optionally, the determining, based on the second image and the target application scene, a target detection parameter of the object to be detected includes:
if the target application scene is a second scene, determining a target threshold of the second image;
Based on the target threshold, performing binarization processing on the second image to obtain a binarized image of the second image;
performing contour detection on the binarized image to obtain a target image contour of the object to be detected in the binarized image and a coverage area of the target image contour in the binarized image;
acquiring a fourth image from the second image based on the coverage area; the fourth image is an image area which is overlapped with the coverage area in the second image;
and determining the illumination brightness parameter of the object to be detected based on the fourth image.
In the embodiment of the disclosure, when it is determined that the target application scene where the object to be measured is located is the second scene, a target threshold of the second image may be obtained.
Here, the target threshold may be a default threshold set in advance, or the target threshold may be determined according to the illumination brightness when the light emitting element in the object to be measured emits light normally.
It can be understood that the current target application scene of the object to be measured is the second scene, i.e. the scene in which the brightness of the light emitting element in the object to be measured is detected. The second image is an image acquired while the light emitting element in the object to be measured is in a light emitting state; the image brightness of the image area of the second image corresponding to the irradiation area of the light emitting element is therefore larger, and the image brightness of the other image areas in the second image is smaller, that is, the brightness variation within the second image is sharp.
The second image may be binarized based on the target threshold to obtain a binarized image.
Here, the gray value of each pixel of the second image may be obtained, the gray value of each pixel is compared with the target threshold, and if the comparison result indicates that the gray value of the pixel is greater than or equal to the target threshold, the gray value of the pixel is updated to be the first gray value; if the comparison result indicates that the gray value of the pixel point is smaller than the target threshold value, updating the gray value of the pixel point to the second gray value; thereby constructing the binarized image based on the updated gray value of each pixel.
Wherein the first gray value is greater than the second gray value; here, specific values of the first gray level value and the second gray level value may be set according to actual requirements, which is not limited in the embodiments of the present disclosure; for example, the first gray value may be 255 and the second gray value may be 0.
It can be understood that the image area formed by surrounding the plurality of pixel points of the first gray value in the binarized image is the irradiation area of the light emitting element.
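A minimal sketch of this thresholding rule, using the example first and second gray values of 255 and 0:

```python
import numpy as np

def binarize(gray_img: np.ndarray, target_threshold: int,
             first_gray: int = 255, second_gray: int = 0) -> np.ndarray:
    """Pixels whose gray value is >= the target threshold are set to the
    first gray value, the rest to the second gray value."""
    return np.where(gray_img >= target_threshold,
                    first_gray, second_gray).astype(np.uint8)
```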
The outline detection can be carried out on the binarized image, and the outline of the target image corresponding to the object to be detected is determined based on the outline detection result; and determining the coverage area of the target image contour in the binarization graph based on the target image contour of the object to be detected.
Here, contour detection may be performed on the binarized image based on a contour detection algorithm; the specific contour detection algorithm can be selected according to actual requirements, for example, the contour detection algorithm can be a Sobel algorithm, a Canny algorithm and the like.
The target image contour of the object to be measured at least comprises pixel positions of all pixel points at the contour line in the binarized image.
The pixel positions of the pixel points on the contour line are acquired according to the target image contour, and the coverage area of the target image contour in the binarized image is determined based on the image area enclosed by the pixel positions of the pixel points on the contour line.
Based on the coverage area, an image area which coincides with the coverage area is intercepted from the second image, the intercepted image area is determined to be a fourth image, and the illumination brightness parameter of the object to be detected is determined according to each pixel point in the fourth image.
The average brightness of the fourth image can be determined based on the brightness components of each pixel point in the fourth image, and the average brightness of the fourth image is determined as the illumination brightness parameter of the object to be measured.
After the illumination brightness parameter (namely the average brightness) of the object to be detected is determined, comparing the average brightness with a preset detection threshold, and if the average brightness is greater than or equal to the detection threshold, determining that the illumination brightness of the light-emitting element in the object to be detected is qualified; and if the average brightness is smaller than the detection threshold, determining that the illumination brightness of the light-emitting element in the object to be detected is unqualified.
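A minimal sketch of the brightness check, assuming the fourth image has already been cropped from the second image and using the V channel of HSV as the brightness component (one common choice; the disclosure does not fix the brightness representation):

```python
import cv2
import numpy as np

def illumination_ok(fourth_bgr: np.ndarray, detect_threshold: float) -> bool:
    """Average brightness of the fourth image (HSV V channel) compared
    against the preset detection threshold."""
    v = cv2.cvtColor(fourth_bgr, cv2.COLOR_BGR2HSV)[:, :, 2]
    return float(v.mean()) >= detect_threshold
```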
In some embodiments, a channel gray component of each pixel in the fourth image on three channels of the color model may be obtained; determining a channel gray average value of a fourth image on any channel according to the channel gray component of each pixel point in the fourth image in any channel in the three channels; and determining the illumination brightness parameter of the object to be detected based on the gray average value of the three channels of the fourth image.
It can be understood that the gray average value of the three channels of the fourth image can be determined as the illumination brightness parameter of the object to be detected; comparing the gray average value of the three channels of the fourth image with preset three detection thresholds respectively, and if the comparison result indicates that the gray average value of the three channels is larger than or equal to the detection thresholds, determining that the illumination brightness of the light-emitting element in the object to be detected is qualified; if the comparison result indicates that at least one channel gray average value in the three channel gray average values is smaller than the detection threshold value, determining that the illumination brightness of the light-emitting element in the object to be detected is unqualified.
After determining that the target application scene where the object to be detected is located is the second scene, the target threshold of the second image is determined, and the second image is processed into a binarized image based on the target threshold; contour detection is performed on the binarized image to obtain the target image contour of the object to be detected in the binarized image and the coverage area of the target image contour in the binarized image, so that a fourth image (namely, a spot image) coinciding with the coverage area is obtained from the second image according to the coverage area, and the illumination brightness parameter of the light emitting element in the object to be detected is determined according to the pixel values of the pixel points in the fourth image, thereby realizing automatic detection of the illumination brightness of the light emitting element in the object to be detected.
Optionally, the determining the target threshold of the second image includes:
determining a gray histogram corresponding to the second image based on the gray value of each pixel point in the second image;
determining a first class of gray levels comprising at least two gray levels based on the gray histogram of the second image; wherein the gray histogram includes: a first class of gray levels and a second class of gray levels; the frequency corresponding to any gray level in the first class of gray levels is greater than the frequency corresponding to any gray level in the second class of gray levels;
Determining a target gray level with the maximum gray level based on gray values corresponding to at least two gray levels in the first class of gray levels;
and determining the target threshold value of the second image according to the difference value between the gray value corresponding to the target gray level and a preset threshold value.
In the embodiment of the disclosure, the second image may first be converted into a gray image, and the gray value of each pixel point in the gray image may be obtained; the gray histogram corresponding to the second image is then determined based on the gray values of the pixel points in the gray image of the second image.
it can be understood that, in the second scenario, the light emitting element in the object to be measured is in a light emitting state, and the second image is an image acquired when the light emitting element in the object to be measured is in the light emitting state; the gray scale of the pixel point in the second image can reflect the illumination brightness of the light emitting element to some extent.
In some embodiments, the gray level image of the second image may be subjected to a mean value filtering process, and a gray level histogram corresponding to the second image may be determined based on the gray level image after the mean value filtering process.
The mean value filtering process takes the gray average of the local neighborhood of the pixel to be processed in the gray image as the processed gray value of that pixel; it is also called neighborhood average filtering, and each element of its filter convolution kernel is the same. It can be understood that performing mean value filtering on the gray image of the second image reduces noise, and the filtering process is simple and fast; however, this noise reduction also blurs the gray image.
The gray level histogram of the pixel points in the second image at least indicates the ratio of the number of the pixel points with different gray levels in the second image to the total number of the pixel points in the second image; it will be appreciated that the gray level histogram can reflect the frequency of occurrence of a plurality of different gray levels within the second image;
determining frequencies corresponding to a plurality of different gray levels in the second image based on the gray level histogram of the pixel points in the second image; dividing a plurality of gray levels in the gray histogram into a first class of gray levels and a second class of gray levels based on frequencies corresponding to the plurality of different gray levels;
Wherein, the frequency corresponding to any gray level in the first class of gray level is larger than the frequency corresponding to any gray level in the second class of gray level;
here, the first class of gray levels includes at least two different gray levels therein;
and determining a gray level with the maximum gray level in the first class gray level as a target gray level according to the gray values corresponding to at least two gray levels in the first class gray level, and determining the target threshold of the second image based on the difference value between the gray value corresponding to the target gray level and a preset threshold.
Here, the preset threshold may be set according to an actual image, for example, the preset threshold may be 20.
It may be appreciated that an object to be processed on the production line may include a plurality of light emitting elements whose brightness may differ. The embodiment of the disclosure therefore determines, based on the gray histogram of the second image, the target gray level with the largest gray value among the higher-frequency first class of gray levels, and determines the target threshold of the second image from the difference between the gray value corresponding to the target gray level and the preset threshold. The target threshold is thus determined from the gray distribution that reflects the actual light emission of the light emitting element of the object to be detected in the second image, so that the target thresholds corresponding to different light emitting elements may differ, and in subsequent processing the second images containing different light emitting elements can be binarized in a targeted manner, yielding more accurate binarized images.
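A minimal sketch of this threshold selection; since the disclosure only requires the first class of gray levels to have higher frequencies than the second class, the split is approximated here by taking the top_k most frequent levels, with top_k and the preset value of 20 as assumptions:

```python
import numpy as np

def target_threshold(gray_img: np.ndarray, preset: int = 20, top_k: int = 16) -> int:
    """First class = the top_k gray levels with the highest frequency;
    target threshold = largest gray value in that class minus the preset
    value (20 is the example given in the text)."""
    hist = np.bincount(gray_img.ravel(), minlength=256)
    first_class = np.argsort(hist)[-top_k:]   # gray levels with highest frequency
    target_level = int(first_class.max())     # brightest high-frequency level
    return max(target_level - preset, 0)
```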
Optionally, the performing contour detection on the binarized image to obtain a target image contour of the object to be measured in the binarized image and a coverage area of the target image contour in the binarized image, including:
morphological operation processing is carried out on the binarized image to obtain a processed image;
performing contour detection on the processed image to determine a plurality of contour images in the processed image;
determining the outline of the target image with the largest outline area based on the outline areas of the plurality of outline images;
the coverage area of the target image contour within the processed image is determined.
In the embodiment of the disclosure, after obtaining the binarized image, in consideration of that some noise points may exist in the binarized image, in order to further remove isolated noise points in the binarized image, morphological processing may be performed on the binarized image to obtain a processed image.
It should be noted that the basic theory of morphological processing is mathematical morphology, whose mathematical foundation and language is set theory: sets are used to describe the image target, the relations among the parts of the image, and the structural characteristics of the target. In morphology, reflection and translation of a set are widely used to express operations based on a structuring element (SE), a small set or sub-image used to probe features of interest in an image. The basic operations include erosion, dilation, opening and closing operations, skeleton extraction, ultimate erosion, hit-or-miss transformation, morphological gradient, and so on; advanced morphological operations are usually built from the two basic operations of erosion and dilation.
Here, the morphological processing is an opening operation, i.e. erosion followed by dilation. The opening operation generally smooths the contour of an object, breaks narrow necks and eliminates thin protrusions; erosion shrinks or thins objects in a binary image and can be regarded as a morphological filtering operation.
After the processed image is obtained, a plurality of contour images in the processed image are obtained by carrying out contour detection on the processed image; acquiring the contour area of each contour image in the plurality of contour images, and determining the contour image with the largest contour area as a target contour image based on the contour areas of the plurality of contour images;
here, the specific contour detection algorithm may be selected according to actual requirements, and for example, the contour detection algorithm may be a Sobel algorithm, a Canny algorithm, or the like.
The target image contour of the object to be measured at least comprises the pixel positions, in the processed image, of the pixel points on the contour line; the pixel positions of the pixel points on the contour line are acquired according to the target image contour, and the coverage area of the target image contour in the processed image is determined based on the image area enclosed by these pixel positions.
According to the embodiment of the disclosure, through morphological open operation processing is performed on the binarized image, isolated noise points in the binarized image are eliminated, a denoised processing image is obtained, contour detection is performed on the denoised processing image, a target contour image with the largest contour area in the processing image is determined, and based on the target contour image, the coverage area of the target contour image in the processing image is determined, so that the luminous area of the luminous element of the object to be detected in the second image is determined according to the coverage area of the target contour image, and therefore the illumination brightness parameter of the luminous element is determined based on the pixel value of each pixel point in the luminous area.
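A minimal sketch of the opening-plus-contour step with OpenCV; the kernel size is an assumption, and the coverage area is returned as a filled mask:

```python
import cv2
import numpy as np

def largest_contour_mask(binary: np.ndarray, kernel_size: int = 5) -> np.ndarray:
    """Morphological opening to remove isolated noise points, then contour
    detection; the contour with the largest area is kept and its coverage
    area is returned as a filled mask."""
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (kernel_size, kernel_size))
    opened = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)
    contours, _ = cv2.findContours(opened, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    mask = np.zeros_like(opened)
    if contours:
        target = max(contours, key=cv2.contourArea)  # target image contour
        cv2.drawContours(mask, [target], -1, 255, thickness=cv2.FILLED)
    return mask
```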
Optionally, the determining, based on the second image and the target application scene, a target detection parameter of the object to be detected includes:
if the target application scene is a third scene, determining a spot center point of a spot image formed by the object to be detected in the second image based on brightness components of pixel points in the second image;
acquiring at least two fifth images from within the second image based on the spot center point of the spot image; wherein, the angles of view corresponding to any two fifth images in the at least two fifth images are different;
And determining defect detection parameters of the object to be detected in any one of the at least two fifth images based on brightness components and/or chromaticity components of a plurality of image areas of any one of the at least two fifth images.
In the embodiment of the present disclosure, when it is determined that the target application scene is a third scene, a brightness component of each pixel point in the second image may be obtained;
here, the second image may be converted into a second image of the hue, saturation, value (Hue Saturation Value, HSV) type, and the brightness component of each pixel point within the second image may be acquired based on the HSV-type second image.
Here, HSV is an intuitive color model, a color space created according to the intuitive properties of color, also called the hexagonal pyramid model; its color parameters are hue (H), saturation (S) and value (V, i.e. brightness), which may also be regarded as its three channels.
The brightness component of each pixel point within the V-channel image may be obtained by extracting the V-channel image of the HSV-type second image.
And determining the central area of a facula image formed by the light emitting element of the object to be detected in the second image based on the brightness component of each pixel point in the second image, and determining the facula central point of the facula image based on the central area of the facula image.
Here, the second image may be divided into a plurality of image areas, and the average brightness of each of the plurality of image areas may be determined based on the brightness component of the pixel point of each of the plurality of image areas; and determining an image area with the maximum average brightness as a central area of the facula image according to the average brightness of the image areas, and determining a central point of the central area as a facula central point of the facula image.
It should be noted that, the image size of the second image may be acquired, and the size of the image area is set based on the image size, so that the second image is subjected to the equal-ratio image division.
It can be appreciated that, in the related art, the image center point of the second image is directly determined as the spot center point of the spot image. However, since the position of the object to be measured on the production line may not be completely fixed when the second image is acquired, the spot image formed by the light emitting element of the object to be measured may not be centered in the second image, in which case the image center point of the second image is not the spot center point of the spot image.
Therefore, when determining the spot center point of the spot image, the embodiments of the present disclosure may determine the center area of the spot image in the second image based on the brightness component of each pixel point in the second image, so as to determine the spot center point based on the center area of the spot image.
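A minimal sketch of the spot-center search, dividing the second image into an equal-ratio grid (the grid size is an assumption) and returning the center point of the brightest area, with the HSV V channel as the brightness component:

```python
import cv2
import numpy as np

def spot_center(second_bgr: np.ndarray, grid: int = 8):
    """Center point (x, y) of the image area with the largest average
    brightness over a grid x grid division of the second image."""
    v = cv2.cvtColor(second_bgr, cv2.COLOR_BGR2HSV)[:, :, 2].astype(np.float32)
    h, w = v.shape
    best, center = -1.0, (w // 2, h // 2)
    for r in range(grid):
        for c in range(grid):
            y0, y1 = r * h // grid, (r + 1) * h // grid
            x0, x1 = c * w // grid, (c + 1) * w // grid
            m = float(v[y0:y1, x0:x1].mean())
            if m > best:
                best, center = m, ((x0 + x1) // 2, (y0 + y1) // 2)
    return center
```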
After the spot center point is determined, a fifth image of at least two different Field of View (FOV) angles may be acquired from the second image based on the spot center point as a center.
Here, the angles of view corresponding to the at least two fifth images may be set according to actual needs, which is not limited in the embodiments of the present disclosure. For example, two fifth images may be acquired from the second image, wherein the field angles of the two fifth images are 0.7 FOV and 1.0 FOV, respectively.
Dividing at least two fifth images into a plurality of image areas, respectively acquiring brightness components and/or chromaticity components of the plurality of image areas in each fifth image, and respectively determining defect detection parameters of the object to be detected in each fifth image based on the brightness components and/or chromaticity components of the plurality of image areas in the fifth image.
Here, an image size of each fifth image may be acquired, and a size of an image area is set based on the image size, thereby performing an equal-ratio image division on the fifth image; the size of the image area in the fifth image is smaller than the size of the image area in the second image.
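A minimal sketch of cropping fifth images of different relative fields of view around the spot center; clamping the crop to the image borders is an implementation choice:

```python
import numpy as np

def crop_fov(second_img: np.ndarray, center, fov: float) -> np.ndarray:
    """Crop a fifth image of the given relative field of view (e.g. 0.7 or
    1.0, as in the example) centered on the spot center point."""
    h, w = second_img.shape[:2]
    cw, ch = int(w * fov / 2), int(h * fov / 2)
    x, y = center
    x0, x1 = max(x - cw, 0), min(x + cw, w)
    y0, y1 = max(y - ch, 0), min(y + ch, h)
    return second_img[y0:y1, x0:x1]

# Example: the two fifth images from the text
# fifth_images = [crop_fov(img, spot_center(img), f) for f in (0.7, 1.0)]
```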
After determining that a target application scene where the object to be detected is located is a third scene, determining a spot center point of a spot image formed by the object to be detected in the second image based on the second image; taking the center point of the light spot as the center, and intercepting a fifth image with a plurality of different angles of view from the second image; because the image center of the fifth image is the light spot center point, and the brightness distribution in the light spot image shows a trend of uniformly attenuating from the light spot center point to the periphery, the variation trend of the brightness component and/or the chromaticity component among the plurality of image areas is determined according to the brightness component and/or the chromaticity component of the plurality of image areas in each fifth image, so that the defect detection parameters of the light emitting element of the object to be detected in any fifth image are determined.
Optionally, the defect detection parameter includes: uniformity defect parameters;
the determining, based on the brightness components and/or the chromaticity components of the plurality of image areas of any one of the at least two fifth images, defect detection parameters of the object to be detected in any one of the at least two fifth images includes:
Dividing any one of the at least two fifth images into a plurality of image areas; wherein the plurality of image areas includes at least: a center image area and a corner image area;
respectively determining average brightness and average chromaticity corresponding to the center image area and the corner image area in any fifth image based on brightness components and chromaticity components of pixel points in the center image area and the corner image area of any fifth image;
determining the brightness uniformity and the chromaticity uniformity of the corner image area based on the ratio of the average brightness and the average chromaticity of the corner image area to the average brightness and the average chromaticity of the center image area;
determining the brightness uniformity and the chromaticity uniformity of the object to be detected in any fifth image according to the brightness uniformity and the chromaticity uniformity of the corner image area;
and determining the uniformity defect parameter of the object to be detected based on the average brightness of the central image area of any one of the at least two fifth images, the brightness uniformity range and the chromaticity uniformity range of the object to be detected.
In an embodiment of the disclosure, the defect detection parameter may include: uniformity defect parameters;
it should be noted that, since the light emitting area of the light emitting element generally diverges uniformly from the central light emitting point toward the periphery, the brightness component and/or chromaticity component of a corner area of the spot image is smaller than that of the central area of the spot image, and the brightness components and/or chromaticity components of the different corner areas of the spot image should be the same or similar.
The uniformity defect parameter of the fifth image is at least used for describing brightness component differences and/or chromaticity component differences between a plurality of corner areas of the facula image and a central area of the facula image in the fifth image.
After the at least two fifth images are obtained, the at least two fifth images may be respectively divided into a plurality of image areas, and a center image area and a plurality of corner image areas of each fifth image may be determined from the plurality of image areas.
Here, the distances between the center points of any two corner image areas among the plurality of corner image areas and the center point of the center image area are the same.
Here, the image area dividing method may be set according to actual requirements, which is not limited in the embodiments of the present disclosure. For example, each fifth image may be divided into 3×3 image areas of the same area; the image areas of the fifth image then include 1 center image area and 4 corner image areas.
It should be noted that, since at least two fifth images are images with different view angles cut from the second image with the spot center point as the center, the center image area of the fifth image is the center area of the spot image;
based on the above, the embodiment of the disclosure determines average luminance and average chromaticity of the center image area based on the luminance component and the chromaticity component of the pixel point in the center image area by acquiring the luminance component and the chromaticity component of the pixel point in the center image area and the corner image area of any fifth image; and respectively determining the average brightness and average chromaticity of each corner image area based on the brightness component and the chromaticity component of the pixel point in each corner image area.
Respectively determining the ratio of the average brightness and the average chromaticity of each corner image area in the plurality of corner image areas to the average brightness and the average chromaticity of the central image area; determining the brightness uniformity of each corner image area based on the ratio between the average brightness; and determining the chromaticity uniformity of each corner image area based on the ratio between the average chromaticity.
The maximum brightness uniformity and the minimum brightness uniformity are determined from the brightness uniformities of the plurality of corner image areas, and the brightness uniformity range of the object to be detected in the fifth image is determined based on the difference between the maximum brightness uniformity and the minimum brightness uniformity;
it can be understood that the brightness uniformity range of the object to be detected describes the difference in brightness uniformity among the plurality of corner image areas in the fifth image. If the brightness uniformity range is large, at least two corner image areas with a large brightness difference exist among the corner image areas of the fifth image, i.e. the light emitting element may have a brightness uniformity defect; if the brightness uniformity range is small, the brightness of the corner image areas of the fifth image is the same or similar.
Similarly, the maximum chromaticity uniformity and the minimum chromaticity uniformity are determined from the chromaticity uniformities of the corner image areas, and the chromaticity uniformity range of the object to be detected in the fifth image is determined based on the difference between the maximum chromaticity uniformity and the minimum chromaticity uniformity.
It can be understood that the chromaticity uniformity range of the object to be detected describes the difference in chromaticity uniformity among the plurality of corner image areas in the fifth image. If the chromaticity uniformity range is large, at least two corner image areas with a large chromaticity difference exist, and the light emitting element may have a chromaticity uniformity defect; if the chromaticity uniformity range is small, the chromaticity of the corner image areas of the fifth image is the same or similar.
And determining uniformity defect parameters of the object to be detected according to the average brightness of the central image area of any one of the at least two fifth images, the brightness uniformity range and the chromaticity uniformity range of the object to be detected.
The average brightness of the central image area of any fifth image can be respectively compared with a preset average brightness threshold value to obtain a first comparison result;
comparing the brightness uniformity range of the object to be detected in any fifth image with a preset first range threshold value to obtain a second comparison result;
comparing the chromaticity uniformity range of the object to be detected in any fifth image with a preset second range threshold to obtain a third comparison result;
if the first comparison result, the second comparison result and the third comparison result of any fifth image indicate that the detection parameter is greater than or equal to the detection threshold, determining that the uniformity of the fifth image is qualified;
if at least one of the first comparison result, the second comparison result and the third comparison result of any fifth image indicates that the detection parameter is smaller than the detection threshold value, determining that uniformity detection of the fifth image is unqualified;
And after determining the uniformity detection result of each of the at least two fifth images, determining the uniformity detection result of the object to be detected.
Here, when the uniformity detection of at least two fifth images is qualified, the uniformity detection of the object to be detected is qualified, i.e. the object to be detected has no uniformity defect; and if at least one fifth image in the at least two fifth images is unqualified in uniformity detection, determining that the object to be detected is unqualified in uniformity detection.
The embodiment of the disclosure obtains a center image area and a corner image area of each of a plurality of fifth images, and determines average brightness and average chromaticity of the center image area and the corner image area of the fifth images; determining the brightness uniformity and the chromaticity uniformity of the corner image areas according to the average brightness ratio and the average chromaticity ratio between the corner image areas and the center image area, so as to determine the brightness uniformity range capable of reflecting the brightness difference among a plurality of corner image areas in the fifth image and the chromaticity uniformity range capable of reflecting the chromaticity difference among a plurality of corner image areas in the fifth image; the uniformity defect parameters of the object to be detected are determined according to the average brightness, the brightness uniformity range and the chromaticity uniformity range of the central image areas of the at least two fifth images, so that the detection of the light emitting uniformity of the light emitting element of the object to be detected in the second image is realized.
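A minimal sketch of the uniformity computation on one fifth image, using the 3×3 division from the example above, the HSV V channel as the brightness component and the H channel as the chromaticity component (the chromaticity representation is an assumption):

```python
import cv2
import numpy as np

def uniformity_params(fifth_bgr: np.ndarray):
    """3x3 division of a fifth image: area (1, 1) is the center image area,
    the four corners are the corner image areas. Returns the center average
    brightness, the brightness uniformity range and the chromaticity
    uniformity range (corner/center ratios, maximum minus minimum)."""
    hsv = cv2.cvtColor(fifth_bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
    h, w = hsv.shape[:2]

    def area(r, c):
        return hsv[r * h // 3:(r + 1) * h // 3, c * w // 3:(c + 1) * w // 3]

    center = area(1, 1)
    corners = [area(0, 0), area(0, 2), area(2, 0), area(2, 2)]
    center_v = float(center[..., 2].mean())          # average brightness (V)
    center_h = float(center[..., 0].mean()) + 1e-6   # average chromaticity (H)
    lum_u = [float(a[..., 2].mean()) / center_v for a in corners]
    chr_u = [float(a[..., 0].mean()) / center_h for a in corners]
    return center_v, max(lum_u) - min(lum_u), max(chr_u) - min(chr_u)
```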
Optionally, the defect detection parameter includes: black spot defect parameters;
the determining, based on the brightness component and/or the chromaticity component of the plurality of image areas of any one of the at least two fifth images, the defect detection parameter of the object to be detected in any one of the fifth images includes:
dividing any one of the at least two fifth images into four first image areas with the same area based on the light spot center point;
dividing any one of the four first image areas into a plurality of sub-image areas which are not overlapped with each other;
determining a first luminance difference between a first sub-image region and an adjacent second sub-image region within the plurality of sub-image regions based on the luminance components of the plurality of sub-image regions; the first sub-image area is any sub-image area of the plurality of sub-image areas, and the distance between the second sub-image area and the light spot center point is larger than the distance between the adjacent first sub-image area and the light spot center point;
and determining the black spot defect parameter of the object to be detected based on a plurality of first brightness differences of a first image area in any one of the at least two fifth images.
In an embodiment of the disclosure, the defect detection parameter may include: black spot defect parameters;
Since the light emitting area of the light emitting element generally diverges uniformly from the central light emitting point toward the periphery, the brightness components of the pixel points in the spot image tend to decrease uniformly from the spot center point toward the periphery. It can be understood that if the change between the brightness components of the pixel points in two adjacent image areas of the spot image does not follow this tendency to attenuate uniformly from the spot center point toward the periphery, it may be determined that abnormal brightness attenuation may exist in at least one of the two adjacent image areas, i.e. a black spot image may exist in the spot image.
The black spot defect parameter of the fifth image is at least used for describing abnormal attenuation conditions of pixel points in the facula image of the fifth image;
after the at least two fifth images are acquired, the at least two fifth images may be respectively divided into 4 first image areas based on the spot center point; here, the areas of the 4 first image areas are the same, and the 4 first image areas do not overlap each other.
It should be noted that, because the 4 first image areas are image areas that are evenly divided based on the spot center point in the fifth image, and because the brightness component difference between any one pixel point in the first image area and the spot center point is positively correlated with the distance difference between the pixel point and the spot center point, the brightness component of the pixel point in each first image area, which is close to the spot center point, is greater than the brightness component of the pixel point in the first image area, which is far away from the spot center point.
Dividing each first image area into a plurality of sub-image areas, wherein the plurality of sub-image areas of the first image area are not overlapped with each other; determining a first sub-image region and a second sub-image region based on a position distribution of the plurality of sub-image regions within the first image region; wherein the first sub-image region and the second sub-image region are adjacent;
it should be noted that the first sub-image area may be any sub-image area of the plurality of sub-image areas; the distance between the second sub-image area and the light spot center point is larger than the distance between the adjacent first sub-image area and the light spot center point;
Here, the distance of the sub-image region from the spot center point may be determined by the distance between the image center point of the sub-image region and the spot center point.
It will be appreciated that one sub-image region may be a first sub-image region, or may be a second sub-image region; for example, as shown in fig. 3, fig. 3 is a schematic diagram of a plurality of sub-image areas of a first image area, shown according to an exemplary embodiment. The first image area may be divided into 3×3 sub-image areas, where sub-image area 1 is the sub-image area with the smallest distance from the spot center point in the first image area, and sub-image area 9 is the sub-image area with the largest distance from the spot center point in the first image area. When sub-image area 1 is a first sub-image area, sub-image area 5 may be the second sub-image area adjacent to it; when sub-image area 5 is a first sub-image area, sub-image area 9 may be the second sub-image area adjacent to it.
Respectively determining average brightness of the plurality of sub-image areas by acquiring brightness components of pixel points in the plurality of sub-image areas; determining a first brightness difference between any one first sub-image region and an adjacent second sub-image region in the plurality of sub-image regions based on the average brightness of the plurality of sub-image regions;
It should be noted that, since the brightness component of the pixel point close to the spot center point in the first image area is greater than the brightness component of the pixel point far from the spot center point in the first image area, and the distance between the second sub-image area and the spot center point is greater than the distance between the adjacent first sub-image area and the spot center point, the average brightness of any one of the first sub-image areas is greater than the average brightness of the adjacent second sub-image area, that is, the first brightness difference between the first sub-image area and the adjacent second sub-image area should be greater than the preset first brightness difference threshold.
Here, the first luminance difference threshold may be set according to actual requirements, which is not limited in the embodiment of the present disclosure, for example, the first luminance difference threshold may be 0.
It can be understood that the first brightness difference between any one first sub-image area and the adjacent second sub-image area in the first image area is greater than 0, which indicates that the change between the brightness components of the pixel points in the adjacent two sub-image areas in the first image area shows a trend of uniformly attenuating from the central point of the light spot to the periphery, i.e. no black spot image exists in the first image area.
If the first brightness difference between at least one first sub-image region and the adjacent second sub-image region in the first image region is smaller than or equal to 0, the change between brightness components of pixel points in the adjacent two sub-image regions in the first image region does not show a trend of uniformly attenuating from the central point of the light spot to the periphery, namely, a black spot image possibly exists in the first image region.
Accordingly, a plurality of first luminance differences of the first image region within each fifth image may be determined as the black spot defect parameter of the fifth image; comparing the first brightness differences in the fifth image with a first brightness difference threshold value respectively to obtain a plurality of comparison results; if the comparison results indicate that the first brightness difference is larger than the first brightness difference threshold value, determining that the black spot of the fifth image is qualified;
and if at least one comparison result exists in the plurality of comparison results, indicating that the first brightness difference is smaller than or equal to the first brightness difference threshold value, and determining that the black spot detection of the fifth image is unqualified.
According to the black spot detection results of the at least two fifth images, if the black spot detection of the at least two fifth images is qualified, the black spot detection of the object to be detected in the second image is qualified; if the black spot detection of at least one fifth image in the at least two fifth images is unqualified, the black spot detection of the object to be detected in the second image is unqualified, and the light-emitting element of the object to be detected has a black spot defect.
According to the embodiment of the disclosure, the spot center point of each fifth image in the plurality of fifth images is taken as the center, the fifth image is divided into 4 first image areas with the same area, each first image area is divided into a plurality of sub-image areas which are not overlapped with each other, the first brightness difference between any one first sub-image area and the adjacent second sub-image area in the plurality of sub-image areas is determined, and as the first brightness difference can reflect the brightness change trend between the first sub-image area and the adjacent second sub-image area, whether the brightness components of the pixel points in the plurality of fifth images show the distribution trend which is uniformly reduced from the spot center point to the periphery or not is determined according to the plurality of first brightness differences of the first sub-image areas in the plurality of fifth images, so that the black spot detection of the luminous element of the object to be detected in the second image is realized.
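A minimal sketch of the black spot check on one fifth image; each quadrant is reoriented so that index (0, 0) is nearest the spot center, and adjacency is taken along rows and columns, which is one reading of the adjacency illustrated in fig. 3:

```python
import numpy as np

def black_spot_ok(fifth_v: np.ndarray, center, grid: int = 3) -> bool:
    """Split the V (brightness) channel of a fifth image into 4 first image
    areas at the spot center, split each into grid x grid sub-image areas,
    and require every sub-area nearer the spot center to be brighter than
    its adjacent sub-area farther away (first luminance difference > 0)."""
    x, y = center
    quadrants = [fifth_v[:y, :x], fifth_v[:y, x:], fifth_v[y:, :x], fifth_v[y:, x:]]
    for qi, q in enumerate(quadrants):
        h, w = q.shape
        means = np.array([[q[r * h // grid:(r + 1) * h // grid,
                             c * w // grid:(c + 1) * w // grid].mean()
                           for c in range(grid)] for r in range(grid)])
        # Reorient so that index (0, 0) is the sub-area nearest the spot center.
        if qi in (0, 2):          # left quadrants: spot center lies to the right
            means = means[:, ::-1]
        if qi in (0, 1):          # top quadrants: spot center lies below
            means = means[::-1, :]
        # Moving away from the center, brightness must strictly decrease.
        if (np.diff(means, axis=0) >= 0).any() or (np.diff(means, axis=1) >= 0).any():
            return False
    return True
```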
Optionally, the defect detection parameter includes: black edge defect parameters;
the determining, based on the brightness components and/or the chromaticity components of the plurality of image areas of any one of the at least two fifth images, defect detection parameters of the object to be detected in any one of the at least two fifth images includes:
Dividing any one of the at least two fifth images into two second image areas with the same area based on the light spot center point;
uniformly dividing the two second image areas into a plurality of sub-image areas along the transverse direction or the longitudinal direction;
determining a second luminance difference between a third sub-image region and a fourth sub-image region of the plurality of sub-image regions based on the luminance components of the plurality of sub-image regions; the third sub-image area and the fourth sub-image area are two adjacent sub-image areas in the same second image area, and the distance between the fourth sub-image area and the light spot center point is larger than the distance between the third sub-image area and the light spot center point;
determining a third luminance difference between the third sub-image region and a fifth sub-image region based on the luminance components of the plurality of sub-image regions; the third sub-image area and the fifth sub-image area are respectively two sub-image areas in different second image areas, and the distance between the third sub-image area and the light spot center point is the same as the distance between the fifth sub-image area and the light spot center point;
And determining the black edge defect parameter of the object to be detected based on a plurality of second brightness differences and a plurality of third brightness differences of a second image area in any one of the at least two fifth images.
In an embodiment of the disclosure, the defect detection parameter may include: black edge defect parameters;
since the light-emitting region of a light-emitting element generally diverges uniformly from the central light-emitting point toward the periphery, the brightness components of the pixel points in the spot image show a trend of uniformly decreasing from the spot center point toward the periphery. It can be understood that, if the change between the brightness components of the pixel points in two adjacent rows or two adjacent columns of the spot image does not follow this uniform attenuation trend, it can be determined that at least one of the two rows or columns of pixel points has abnormal brightness attenuation, that is, a black edge may exist in the spot image.
The black edge defect parameter of the fifth image is at least used for describing the abnormal attenuation of at least one row or at least one column of pixel points in the spot image of the fifth image;
after the at least two fifth images are acquired, the at least two fifth images may be respectively divided into 2 second image areas based on the spot center point; here, the areas of the 2 second image areas are the same, and the 2 second image areas do not overlap each other.
It should be noted that, since the 2 second image areas are obtained by evenly dividing the fifth image about the spot center point, the difference between the brightness component of any row or column of pixel points and that of the spot center point is positively correlated with their distance from the spot center point; therefore, within each second image area, the brightness component of a pixel row (or pixel column) close to the spot center point is greater than that of a pixel row (or pixel column) far from the spot center point.
Each of the second image areas is divided into a plurality of sub-image areas uniformly in the lateral or longitudinal direction, wherein the plurality of sub-image areas have the same area and do not overlap with each other.
And determining a third sub-image area, a fourth sub-image area and a fifth sub-image area based on the position distribution of the plurality of sub-image areas in the two second image areas.
Wherein the third sub-image area and the fourth sub-image area are two adjacent sub-image areas in a second image area; the fifth sub-image region and the third sub-image region are respectively two sub-image regions within different second image regions.
It should be noted that the third sub-image area may be any sub-image area of the plurality of sub-image areas of the second image area; the distance between the fourth sub-image area and the light spot center point is greater than the distance between the third sub-image area and the light spot center point; the distance between the third sub-image area and the light spot center point is the same as the distance between the fifth sub-image area and the light spot center point. It will be appreciated that the third sub-image area and the fifth sub-image area are symmetrical along a dividing line between the two second image areas.
The brightness components of the pixel points in the plurality of sub-image areas are acquired, and the average brightness of each sub-image area is determined; based on these average brightnesses, the second brightness difference between any third sub-image area and its adjacent fourth sub-image area, and the third brightness difference between the third sub-image area and the corresponding fifth sub-image area, are determined respectively.
It should be noted that, since the light-emitting region of the light-emitting element generally diverges uniformly from the central light-emitting point toward the periphery, the brightness components of the pixel points in the spot image decrease uniformly from the spot center point toward the periphery; accordingly, the average brightness of the third sub-image area should be greater than that of its adjacent fourth sub-image area, and the average brightness of the third sub-image area should be the same as or close to that of the fifth sub-image area.
The second brightness differences between the plurality of third sub-image areas and their adjacent fourth sub-image areas in the fifth image, and the third brightness differences between the plurality of third sub-image areas and their symmetrical fifth sub-image areas, are determined; the plurality of second brightness differences are respectively compared with a preset second brightness difference threshold value, and the plurality of third brightness differences are respectively compared with a preset third brightness difference threshold value;
here, the second luminance difference threshold value and the third luminance difference threshold value may be set according to actual requirements, which is not limited in the embodiment of the present disclosure, for example, the second luminance difference threshold value and the third luminance difference threshold value are both 0.
It can be understood that, if the second brightness difference between any third sub-image area and its adjacent fourth sub-image area is greater than 0, and the third brightness difference between the third sub-image area and its symmetrical fifth sub-image area is less than or equal to 0, the change between the brightness components of the pixel points in any two adjacent sub-image areas of any second image area shows a trend of uniformly attenuating from the spot center point toward the periphery, that is, no black edge exists in the second image area.
Accordingly, the plurality of second brightness differences and the plurality of third brightness differences of the second image areas within each fifth image may be determined as the black edge defect parameter of the fifth image; the plurality of second brightness differences in the fifth image are respectively compared with the second brightness difference threshold value, and the plurality of third brightness differences are respectively compared with the third brightness difference threshold value, to obtain a plurality of comparison results; if the comparison results indicate that all the second brightness differences are greater than the second brightness difference threshold value and all the third brightness differences are less than or equal to the third brightness difference threshold value, it is determined that the black edge detection of the fifth image is qualified;
and if at least one comparison result indicates that a second brightness difference is less than or equal to the second brightness difference threshold value, and/or that a third brightness difference is greater than the third brightness difference threshold value, it is determined that the black edge detection of the fifth image is unqualified.
According to the black edge detection results of the at least two fifth images: if the black edge detection of both of the at least two fifth images is qualified, the black edge detection of the object to be detected in the second image is qualified; if the black edge detection of at least one of the at least two fifth images is unqualified, the black edge detection of the object to be detected in the second image is unqualified, and the light-emitting element of the object to be detected has a black edge defect.
In the embodiment of the disclosure, each fifth image is divided, about its spot center point, into 2 second image areas of equal area, and each second image area is evenly divided in the transverse or longitudinal direction into a plurality of mutually non-overlapping sub-image areas; the second brightness difference between any third sub-image area and its adjacent fourth sub-image area, and the third brightness difference between the third sub-image area and its symmetrical fifth sub-image area, are then determined. The second brightness difference reflects the brightness variation trend between two adjacent sub-image areas within the same second image area, and the third brightness difference reflects the brightness variation trend between two symmetrical sub-image areas in different second image areas; whether the brightness components of the pixel rows or pixel columns in the fifth images show a distribution trend of uniformly decreasing from the spot center point toward the periphery is therefore determined according to the plurality of second brightness differences and the plurality of third brightness differences, thereby realizing black edge detection of the light-emitting element of the object to be detected in the second image.
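For illustration, the black edge check can be sketched in the same style, here with a transverse division into strips. The strip count, the zero thresholds and the centered-spot assumption are assumptions of this sketch, not limitations of the disclosure:

```python
import numpy as np

def black_edge_qualified(v_channel: np.ndarray, n_strips: int = 8,
                         thr2: float = 0.0, thr3: float = 0.0) -> bool:
    """Check one fifth image for black edge defects (spot center at image center)."""
    half = v_channel.shape[0] // 2
    top, bottom = v_channel[:half], v_channel[half:2 * half]
    strip = half // n_strips
    # Average brightness per transverse strip, ordered outwards from the spot center.
    top_means = np.array([top[half - (i + 1) * strip: half - i * strip].mean()
                          for i in range(n_strips)])
    bot_means = np.array([bottom[i * strip:(i + 1) * strip].mean()
                          for i in range(n_strips)])
    for means in (top_means, bot_means):
        # Second brightness difference: inner strip minus its adjacent outer strip;
        # it must stay above thr2 for a uniform outward attenuation.
        if np.any(means[:-1] - means[1:] <= thr2):
            return False
    # Third brightness difference: symmetrical strips in the two halves
    # should not differ by more than thr3.
    if np.any(top_means - bot_means > thr3):
        return False
    return True
```

The second brightness differences enforce uniform attenuation away from the spot center point within each half, while the third brightness differences enforce symmetry between the two halves.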
Optionally, based on the acquired first image, acquiring a second image of the object to be measured includes:
acquiring a template image of the object to be detected, and determining the position information of the object to be detected in the first image based on the template image;
acquiring interception position information of an image interception window in the first image, and determining position offset information according to the interception position information of the image interception window and the position information of the object to be detected;
based on the position offset information, adjusting the position of the image capturing window;
and acquiring a second image of the object to be detected from the first image by utilizing the adjusted image intercepting window.
In the embodiment of the disclosure, a template image of the object to be measured may be acquired, and the position information of the object to be measured may be determined from the first image based on the template image of the object to be measured.
Here, the template image is at least used for describing contour information of the object to be measured;
It can be understood that the processing objects of the same production line may include a plurality of different objects to be detected whose structures may differ; in order to achieve accurate positioning, a template image of the object to be detected may be acquired, and the object to be detected may be located in the first image based on the contour information of the object to be detected indicated by the template image.
The interception position information of the image interception window is acquired, the position offset information of the image interception window relative to the object to be detected is determined according to the position information of the object to be detected and the interception position information, and the position of the image interception window is adjusted based on the position offset information, so that the object to be detected in the first image falls within the image interception window and the second image can be acquired from the first image by using the window.
It should be noted that an image interception window may be disposed in the first image, and its interception position information may be preset based on the expected position of the object to be detected in the first image, so that each collected first image is cropped by the image interception window to obtain the second image where the object to be detected is located.
Since the interception position information of the image interception window is preset and therefore fixed, a deviation in the placement position of the object to be detected may cause the intercepted second image to contain the object to be detected only partially, which is unfavorable for subsequent processing.
The position information of the object to be detected indicates its actual position in the first image, while the interception position information of the image interception window indicates its predicted position in the first image. Ideally the two are the same; however, since the placement position of the object to be detected may be offset, the two may differ, that is, the actual position of the object to be detected in the first image may deviate from its predicted position.
In order to intercept from the first image a second image containing the complete object to be detected, the embodiment of the disclosure determines position offset information according to the position information of the object to be detected and the interception position information of the image interception window, and adjusts the position of the image interception window based on the position offset information, so that the interception position information of the adjusted window matches the position information of the object to be detected and a second image containing the object to be detected is obtained.
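A minimal sketch of the window adjustment follows. The disclosure locates the object to be detected via contour information from the template image; normalized cross-correlation (`cv2.matchTemplate`) is substituted here as one concrete way to obtain the position information, and the function name and `roi` layout are hypothetical:

```python
import cv2

def crop_second_image(first_img, template, roi):
    """roi = (x, y, w, h): the preset interception position of the window."""
    # Locate the object to be detected in the first image via template matching.
    result = cv2.matchTemplate(first_img, template, cv2.TM_CCOEFF_NORMED)
    _, _, _, max_loc = cv2.minMaxLoc(result)
    tx, ty = max_loc  # actual position of the object to be detected
    x, y, w, h = roi
    dx, dy = tx - x, ty - y  # position offset information
    # Adjust the image interception window by the offset and crop the second image.
    ny, nx = max(y + dy, 0), max(x + dx, 0)
    return first_img[ny:ny + h, nx:nx + w]
```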
The present disclosure also provides the following embodiments:
as shown in fig. 4, fig. 4 is a second flowchart illustrating a detection method according to an exemplary embodiment, where the method includes:
step S201, a template image of the object to be detected is obtained, and the position information of the object to be detected in the first image is determined based on the template image;
step S202, acquiring interception position information of an image interception window in a first image, and determining position offset information according to the interception position information of the image interception window and the position information of the object to be detected;
in the embodiment of the disclosure, the image capturing window may be a preset ROI window;
step S203, based on the position offset information, adjusting the position of the image capturing window; and acquiring a second image of the object to be detected from the first image by utilizing the adjusted image intercepting window.
Step S204, dividing the second image into a plurality of image areas, and determining a gray average value of each image area;
in the embodiment of the disclosure, the second image may be converted into a gray-scale image, and the gray-scale image may be traversed by using a sliding window with a preset size to obtain a plurality of image areas; and respectively determining the gray average value of the plurality of image areas according to the gray value of each pixel point in the plurality of image areas.
Step S205, determining the gray level range of the second image based on the gray level average value of the plurality of image areas;
The maximum gray average value and the minimum gray average value may be determined from the gray average values of the plurality of image areas, and the gray range of the second image is determined as the difference between the maximum gray average value and the minimum gray average value.
The gray range reflects the brightness variation within the second image.
Step S206, determining a target application scene where the object to be detected is currently located based on the gray level range of the second image;
It should be noted that, on a production line of terminal devices, the application scenes for color detection may be classified into three types: the first scene is color discrimination of the terminal device and assembly detection of materials within the terminal device; the second scene is brightness detection of a light-emitting element (such as a flash, a power indicator and/or a soft light) within the terminal device; the third scene is defect detection of a light-emitting element within the terminal device.
And determining a target application scene of the object to be detected in the second image according to the gray level range of the second image.
If the gray range of the second image is less than 15, it is determined that the brightness of the test environment where the object to be detected is located is uniform, and the target application scene is the first scene;
if the gray range of the second image is greater than 30, it is determined that the brightness of the test environment where the object to be detected is located is uneven, and the target application scene is the second scene;
and if the gray range of the second image is greater than or equal to 15 and less than or equal to 30, the target application scene is determined to be the third scene.
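A minimal sketch of this scene discrimination, assuming a non-overlapping 32-pixel sliding window and the thresholds 15 and 30 given above; the function name and window size are illustrative:

```python
import cv2
import numpy as np

def classify_scene(second_img, win: int = 32) -> int:
    """Return 1, 2 or 3: the target application scene inferred from the gray range."""
    gray = cv2.cvtColor(second_img, cv2.COLOR_BGR2GRAY).astype(np.float32)
    h, w = gray.shape
    # Gray average of each window position (non-overlapping steps for brevity).
    means = [gray[r:r + win, c:c + win].mean()
             for r in range(0, h - win + 1, win)
             for c in range(0, w - win + 1, win)]
    gray_range = max(means) - min(means)
    if gray_range < 15:
        return 1  # uniform brightness: color / material assembly detection
    if gray_range > 30:
        return 2  # uneven brightness: light-emitting element brightness detection
    return 3      # defect detection of the light-emitting element
```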
Step S207, determining target detection parameters of the object to be detected based on the second image and the target application scene; and if the target detection parameter is greater than or equal to a preset detection threshold, determining that the object to be detected in the target application scene is qualified in detection.
Here, the target detection parameters corresponding to different application scenes are different; the preset detection threshold values are also different; it can be understood that, due to different detection of the object to be detected in different application scenarios, the obtained target detection parameters may also be different.
After the target application scene is determined, acquiring the target detection parameters based on the second image; and comparing the target detection parameter with a detection threshold corresponding to the target application scene, and determining whether the object to be detected in the target application scene is detected to be qualified or not.
Illustratively, as shown in fig. 5, fig. 5 is a schematic flow diagram illustrating discrimination of a target application scenario according to an exemplary embodiment.
As shown in fig. 6, fig. 6 is a flowchart illustrating a method for detecting an object to be detected in a first scenario according to an exemplary embodiment. The method comprises the following steps:
step S301, obtaining channel gray scale average values of the second image and the plurality of color template images on three channels of a color model;
in an embodiment of the present disclosure, a plurality of color template images may be acquired, and the second image and the plurality of color template images may be converted into images of the RGB color model; the gray average values of the three channel images corresponding to the second image are determined from those three channel images, and the gray average values of the three channel images corresponding to each of the plurality of color template images are determined likewise.
In some embodiments, if the second image is a gray scale image, directly determining a gray scale average of the second image; and converting the plurality of color template images into gray scale images, and determining the gray scale average value of the plurality of color template images.
Step S302, determining a mean square error value between the second image and each of the plurality of color template images based on the three channel gray scale averages of the second image and the three channel gray scale averages of each of the plurality of color template images;
step S303, normalizing the mean square error value between the second image and each of the plurality of color template images; and respectively determining the color similarity between the second image and each color template image in the plurality of color template images based on the plurality of mean square error values after normalization processing.
Here, the mean square error value between the second image and the plurality of color template images may be normalized based on a preset normalization coefficient.
The normalized coefficient corresponding to the second image in RGB format is different from the normalized coefficient corresponding to the second image in gray format.
For example, if the second image is a grayscale image, the normalization coefficient may be 255; if the second image is an RGB image, the normalization factor may be 255×3.
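For illustration, the similarity computation might look as follows. The disclosure specifies the mean square error between channel gray averages and the normalization coefficients above, but not the exact mapping from normalized error to similarity; the "one minus normalized root of the MSE" mapping below, and the helper names, are therefore assumptions:

```python
import numpy as np

def color_similarity(img_means, tpl_means, norm: float = 255.0) -> float:
    """Similarity from the mean square error of per-channel gray averages."""
    a = np.asarray(img_means, dtype=float)
    b = np.asarray(tpl_means, dtype=float)
    mse = float(np.mean((a - b) ** 2))
    # Normalize the error and map it into [0, 1]; this mapping is an assumption.
    return 1.0 - min(np.sqrt(mse) / norm, 1.0)

def best_color(second_means, template_bank, norm: float = 255.0):
    """template_bank: {color_name: per-channel gray averages of a template image}."""
    return max(template_bank,
               key=lambda k: color_similarity(second_means, template_bank[k], norm))
```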
Step S304, determining color parameters of the object to be detected based on the color similarity between the second image and the plurality of color template images; the color parameters are used for determining target color configuration information of the object to be detected;
The color similarity between the second image and each of the plurality of color template images is determined, and the color parameter corresponding to the color template image with the largest color similarity is determined as the color parameter of the object to be detected; target color configuration information matched with the color parameter is then acquired.
Step S305, at least one third image is acquired from the second image; the third image is an image area where the material to be detected is located in the second image; acquiring a target color image of at least one material to be detected in the object to be detected based on the target color configuration information of the object to be detected;
the contour information of the material to be measured in the object to be measured can be obtained, the image area where the material to be measured is located is determined from the second image based on the contour matching mode, and the image area where the material to be measured is located is intercepted, so that a third image is obtained.
Here, considering that the edges of the image area where the material to be detected is located may contain contour information of the material, the central image area of that image area may be directly cropped during color detection and determined as the third image.
Step S306, determining the gray average value and the gray range of the material to be tested in the at least one third image based on the at least one third image;
step S307, determining a color similarity between the material to be measured in the at least one third image and the target color image based on the at least one third image and the target color image of the material to be measured in the third image;
step S308, determining whether the detection of the object to be detected in the first scene is qualified or not based on the gray average value, the gray range and the color similarity of the material to be detected in the at least one third image and a preset detection threshold range.
After the gray average value, the gray range and the color similarity of the material to be detected in the third image are obtained, the gray average value, the gray range and the color similarity can be respectively compared with the corresponding detection threshold range;
If the second image is an RGB image, the gray average value and gray range of the material to be detected may include the channel gray average values and channel gray ranges of the three channels; the 7 detection parameters of the third image (3 channel gray average values, 3 channel gray ranges and 1 color similarity) are compared with the preset detection threshold ranges. If at least 4 of the 7 detection parameters fall within their detection threshold ranges, it is determined that the material to be detected of the object to be detected is not color-mixed or that the material to be detected is attached; otherwise, it is determined that the material to be detected of the object to be detected is color-mixed or not attached.
If the second image is a gray image, the 3 detection parameters of the third image (gray average value, gray range and color similarity) are compared with the preset detection threshold ranges; if at least 2 of the 3 detection parameters fall within their detection threshold ranges, it is determined that the material to be detected of the object to be detected is not color-mixed or that the material to be detected is attached; otherwise, it is determined that the material to be detected of the object to be detected is color-mixed or not attached.
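This pass/fail vote reduces to a short helper; the 4-of-7 and 2-of-3 pass counts come from the two paragraphs above, while the function name and argument layout are illustrative:

```python
def first_scene_qualified(params, ranges, min_pass=None) -> bool:
    """params: 7 detection parameters for RGB input (3 channel gray averages,
    3 channel gray ranges, 1 color similarity) or 3 for grayscale input;
    ranges: matching (low, high) detection threshold ranges."""
    passed = sum(low <= p <= high for p, (low, high) in zip(params, ranges))
    if min_pass is None:
        min_pass = 4 if len(params) == 7 else 2  # pass counts from the text
    return passed >= min_pass
```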
As shown in fig. 7, fig. 7 is a schematic flow chart of color discrimination and material assembly detection of a terminal device in a first scenario according to an exemplary embodiment.
As shown in fig. 8, fig. 8 is a flowchart of a method for detecting an object to be detected in a second scenario according to an exemplary embodiment. The method comprises the following steps:
step S401, determining a gray level histogram corresponding to the second image based on the gray level value of each pixel point in the second image; determining a first class of gray levels comprising at least two gray levels based on the gray histogram of the second image;
in the embodiment of the disclosure, the second image may be converted into a gray image, and a gray histogram corresponding to the second image may be determined based on the gray image.
In some embodiments, the second image is converted into a gray image, the gray image is subjected to mean filtering, and a gray histogram corresponding to the second image is determined based on the mean filtered gray image.
The gray histogram includes: a first class of gray levels and a second class of gray levels; the frequency corresponding to any gray level in the first class of gray levels is greater than the frequency corresponding to any gray level in the second class of gray levels;
step S402, determining, from the gray values corresponding to the at least two gray levels in the first class of gray levels, the target gray level having the largest gray value; and determining the target threshold value of the second image according to the difference between the gray value corresponding to the target gray level and a preset threshold value;
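Steps S401 and S402 amount to picking a high-frequency, high-value gray level and subtracting a preset offset. A sketch under the assumption that the first class of gray levels is simply the `top_k` most frequent levels, with illustrative `top_k` and `offset` values:

```python
import numpy as np

def target_threshold(gray: np.ndarray, top_k: int = 2, offset: int = 10) -> int:
    """gray: 8-bit gray image; top_k and offset are illustrative assumptions."""
    hist = np.bincount(gray.ravel(), minlength=256)
    # First class of gray levels: the top_k levels with the highest frequency.
    first_class = np.argsort(hist)[-top_k:]
    # Target gray level: the first-class level with the largest gray value;
    # the target threshold is its gray value minus a preset threshold (offset).
    return int(first_class.max()) - offset
```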
step S403, performing binarization processing on the second image based on the target threshold value, to obtain a binarized image of the second image;
step S404, performing morphological open operation processing on the binarized image to obtain a processed image; performing contour detection on the processed image to determine a plurality of contour images in the processed image; determining the outline of the target image with the largest outline area based on the outline areas of the plurality of outline images;
Step S405, determining the coverage area of the target image contour within the processed image; acquiring a fourth image from the second image based on the coverage area; the fourth image is an image area which is overlapped with the coverage area in the second image;
after the coverage area of the target image contour in the processed image is determined, the pixel values of the pixel points outside the coverage area in the second image are set to 0, the pixel values of the pixel points inside the coverage area are set to the three-channel gray values at the corresponding positions of the second image in RGB format, and the fourth image is determined from the resulting pixel values.
Step S406, determining an illumination brightness parameter of the object to be tested based on the fourth image;
determining a channel gray average value of the fourth image on three channels based on the three channel gray values of each pixel point in the fourth image; and determining the gray average value of three channels of the fourth image as the illumination brightness parameter of the object to be detected.
Step S407, determining whether the detection of the object to be detected in the second scene is qualified or not based on the illumination brightness parameter of the object to be detected and the preset detection threshold range.
The gray average values of the three channels of the fourth image are respectively compared with the preset detection threshold range; if the three channel gray average values are all within the detection threshold range, it is determined that the brightness detection of the object to be detected is qualified; if at least one of the three channel gray average values is not within the detection threshold range, it is determined that the brightness detection of the object to be detected is unqualified.
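The second-scene pipeline of steps S403 to S407 maps naturally onto OpenCV primitives, as sketched below; the 5×5 open-operation kernel, the BGR channel order and the function name are assumptions of this sketch:

```python
import cv2
import numpy as np

def illumination_brightness(second_img, thresh: int):
    """Return the per-channel gray averages of the fourth image (B, G, R order)."""
    gray = cv2.cvtColor(second_img, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY)
    # Morphological open operation to remove small bright noise regions.
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))
    opened = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)
    # Contour detection; keep the contour with the largest area.
    contours, _ = cv2.findContours(opened, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    largest = max(contours, key=cv2.contourArea)
    # Coverage area of the target image contour: pixels outside it are zeroed.
    mask = np.zeros_like(gray)
    cv2.drawContours(mask, [largest], -1, 255, thickness=cv2.FILLED)
    # Per-channel gray average inside the mask = illumination brightness parameter.
    return cv2.mean(second_img, mask=mask)[:3]
```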
Illustratively, as shown in fig. 9, fig. 9 is a schematic flow chart of detecting brightness of a light emitting element of a terminal device in a second scenario according to an exemplary embodiment.
As shown in fig. 10, fig. 10 is a flowchart illustrating a method for detecting an object to be detected in a third scenario according to an exemplary embodiment. The method comprises the following steps:
step S501, determining a spot center point of a spot image formed by the object to be measured in the second image based on brightness components of pixel points in the second image;
in the embodiment of the disclosure, the second image may be converted into an HSV-format image, the size of the image areas may be set in equal proportion according to the size and aspect ratio of the HSV-format image, and the HSV-format image may be divided into a plurality of image areas of the same area accordingly; the brightness components of the pixel points in each image area are acquired, the average brightness of each image area is determined, the image area with the maximum average brightness is determined as the central area of the HSV-format image, and the center point of that central area is determined as the spot center point of the spot image.
Step S502, acquiring at least two fifth images from the second image based on the spot center point of the spot image; wherein, the angles of view corresponding to any two fifth images in the at least two fifth images are different;
here, the 0.7 FOV and 1.0 FOV of the second image are determined with the spot center point as the center, and the images of these two fields of view are intercepted from the second image as the two fifth images.
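A sketch of the spot center search and the field-of-view interception, assuming a 16×16 grid of equal-area image areas and center-anchored rectangular crops for the 0.7 FOV and 1.0 FOV; the names and the grid size are illustrative:

```python
import cv2
import numpy as np

def spot_center_and_fifth_images(second_img, grid: int = 16):
    """Return the spot center point and the 0.7 FOV / 1.0 FOV fifth images."""
    hsv = cv2.cvtColor(second_img, cv2.COLOR_BGR2HSV)
    v = hsv[:, :, 2].astype(np.float32)
    h, w = v.shape
    bh, bw = h // grid, w // grid
    # Average brightness of each equal-area image area.
    means = np.array([[v[r * bh:(r + 1) * bh, c * bw:(c + 1) * bw].mean()
                       for c in range(grid)] for r in range(grid)])
    r, c = np.unravel_index(int(means.argmax()), means.shape)
    cy, cx = r * bh + bh // 2, c * bw + bw // 2  # spot center point
    fifth_images = []
    for fov in (0.7, 1.0):
        hh, hw = int(h * fov) // 2, int(w * fov) // 2
        fifth_images.append(second_img[max(cy - hh, 0):cy + hh,
                                       max(cx - hw, 0):cx + hw])
    return (cx, cy), fifth_images
```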
Step S503, dividing any one of the at least two fifth images into a plurality of image areas; wherein the plurality of image areas includes at least: a center image area and a corner image area;
here, the size of the image areas is set in proportion to the width and height of the fifth image, and the fifth image is divided into a plurality of image areas based on that size; the center image area and the corner image areas are then determined from the plurality of image areas.
Step S504, determining average brightness and average chromaticity corresponding to the center image area and the corner image area in any one of the fifth images based on brightness components and chromaticity components of pixel points in the center image area and the corner image area of any one of the fifth images; determining the brightness uniformity and the chromaticity uniformity of the corner image area based on the ratio of the average brightness and the average chromaticity of the corner image area to the average brightness and the average chromaticity of the center image area;
Step S505, determining the brightness uniformity and the chromaticity uniformity of the object to be detected in any fifth image according to the brightness uniformity and the chromaticity uniformity of the corner image area; determining the uniformity defect parameter of the object to be detected based on average brightness of a central image area of any one of the at least two fifth images, brightness uniformity range and chromaticity uniformity range of the object to be detected; the uniformity defect parameter is used for determining whether uniformity defects exist in the light-emitting element in the object to be detected;
here, the average brightness of the central image area and the brightness uniformity range and chromaticity uniformity range of the object to be detected may be respectively compared with preset detection thresholds; if the average brightness, the brightness uniformity range and the chromaticity uniformity range are all greater than their detection thresholds, it is determined that the light-emitting element in the object to be detected has no uniformity defect; otherwise, it can be determined that the light-emitting element in the object to be detected has a uniformity defect.
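Steps S503 to S505 can be sketched as below. Treating the HSV S channel as the chromaticity component and using corner and center areas sized at 20% of the image are assumptions of this sketch, as is the helper name:

```python
import cv2
import numpy as np

def uniformity_params(fifth_img, frac: float = 0.2):
    """Average brightness of the center area plus brightness/chromaticity
    uniformity of the four corner areas (S channel used as chromaticity)."""
    hsv = cv2.cvtColor(fifth_img, cv2.COLOR_BGR2HSV).astype(np.float32)
    h, w = hsv.shape[:2]
    ch, cw = int(h * frac), int(w * frac)
    center = hsv[(h - ch) // 2:(h + ch) // 2, (w - cw) // 2:(w + cw) // 2]
    corners = [hsv[:ch, :cw], hsv[:ch, -cw:], hsv[-ch:, :cw], hsv[-ch:, -cw:]]
    cen_v, cen_s = center[..., 2].mean(), center[..., 1].mean()
    # Uniformity: ratio of each corner's average brightness/chromaticity to
    # the center area's average brightness/chromaticity.
    lum_uniformity = [corner[..., 2].mean() / cen_v for corner in corners]
    chr_uniformity = [corner[..., 1].mean() / cen_s for corner in corners]
    return cen_v, lum_uniformity, chr_uniformity
```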
Step S506, dividing any one of the at least two fifth images into four first image areas with the same area based on the spot center point; dividing any one of the four first image areas into a plurality of sub-image areas which are not overlapped with each other;
Here, the fifth image is divided into 4 first image areas (i.e., upper left, upper right, lower left, lower right) on average with the spot center point as the center;
image rotation may be performed on the plurality of first image areas so that in each of the 4 first image areas the brightest corner (the corner nearest the spot center point) lies at the upper left; each first image area is then divided into a plurality of mutually non-overlapping sub-image areas.
Step S507 of determining a first luminance difference between a first sub-image region and an adjacent second sub-image region within the plurality of sub-image regions based on the luminance components of the plurality of sub-image regions; determining the black spot defect parameter of the object to be detected based on a plurality of first brightness differences of a first image area in any one of the at least two fifth images; the black spot defect parameter is used for determining whether a black spot defect exists in a light-emitting element in the object to be detected;
here, the first sub-image region is any one of the plurality of sub-image regions, and the distance between the second sub-image region and the spot center point is greater than the distance between the adjacent first sub-image region and the spot center point;
the average brightness of each sub-image area may be determined based on the brightness components of the plurality of sub-image areas; the first brightness difference between each sub-image area (i.e. the first sub-image area) and the sub-image area adjacent to its lower right (i.e. the second sub-image area) is then determined in a traversing manner;
It can be understood that, since each rotated first image area is brightest at the upper left corner, the average brightness of each sub-image area should be greater than that of its adjacent sub-image area to the lower right, that is, the first brightness difference between the two sub-image areas should be greater than 0; if the first brightness difference between the two sub-image areas is less than or equal to 0, the first sub-image area is indicated as a black spot area.
Step S508, dividing any one of the at least two fifth images into two second image areas with the same area based on the spot center point; uniformly dividing the two second image areas into a plurality of sub-image areas along the transverse direction or the longitudinal direction;
the fifth image can be divided into two second image areas (namely an upper image area and a lower image area) with the center point of the light spot as the center; transversely dividing the two second image areas into a plurality of sub-image areas (namely transverse image areas) with the same area respectively;
step S509 of determining a second luminance difference between a third sub-image region and a fourth sub-image region among the plurality of sub-image regions and a third luminance difference between the third sub-image region and a fifth sub-image region based on the brightness components of the plurality of sub-image regions;
Here, the third sub-image area and the fourth sub-image area are two adjacent sub-image areas in the same second image area, and the distance between the fourth sub-image area and the center point of the light spot is greater than the distance between the third sub-image area and the center point of the light spot;
the third sub-image area and the fifth sub-image area are respectively two sub-image areas in different second image areas, and the distance between the third sub-image area and the light spot center point is the same as the distance between the fifth sub-image area and the light spot center point;
the brightness components of pixel points in the plurality of sub-image areas can be obtained, and the average brightness of the plurality of sub-image areas is determined; determining a second luminance difference between each sub-image region (i.e., the third sub-image region) and an adjacent fourth sub-image region, respectively, in a traversal manner; a third luminance difference between each sub-image region (i.e. the third sub-image region) and the symmetrical fifth sub-image region is determined.
Step S510, determining the black edge defect parameter of the object to be detected based on a plurality of second luminance differences and a plurality of third luminance differences of a second image area in any one of the at least two fifth images; the black edge defect parameter is used for determining whether a black edge defect exists in the light-emitting element in the object to be detected.
It can be understood that, since the sub-image areas closer to the center point of the light spot are brighter in the plurality of second image areas, when determining whether the light emitting element of the object to be measured has a black edge defect, it should be determined whether the second brightness difference and the third brightness difference simultaneously satisfy a preset condition;
here, if the second brightness difference is smaller than 0 and the absolute value of the second brightness difference is smaller than a preset detection threshold, determining that the second brightness difference meets a preset condition;
and if the absolute value of the third brightness difference is smaller than the preset detection threshold value, determining that the third brightness difference meets the preset condition.
When the second brightness difference and the third brightness difference simultaneously meet preset conditions, determining that the light-emitting element of the object to be detected has no black edge defect; and if at least one of the second brightness difference and the third brightness difference does not meet a preset condition, determining that the light-emitting element of the object to be detected has a black edge defect.
Illustratively, as shown in fig. 11, fig. 11 is a schematic flow chart illustrating defect detection of a light emitting element of a terminal device in a third scenario according to an exemplary embodiment.
The embodiment of the disclosure also provides a detection device. Fig. 12 is a schematic structural diagram of a detection apparatus according to an exemplary embodiment, and as shown in fig. 12, the apparatus is applied to a terminal device, and the detection apparatus 100 includes:
An acquiring module 101, configured to acquire a second image in which an object to be measured is located based on the acquired first image; the second image is an image area which at least contains the object to be detected in the first image;
a first determining module 102, configured to determine, according to the second image, a target application scenario where the object to be measured is currently located;
a second determining module 103, configured to determine a target detection parameter of the object to be detected based on the second image and the target application scene; wherein, the target detection parameters of the object to be detected under different target application scenes are different;
and the detection module 104 is configured to determine that the object to be detected in the target application scene is detected to be qualified if the target detection parameter meets a preset detection condition.
Optionally, the first determining module 102 is configured to:
dividing the second image into a plurality of image areas, and determining a gray average value of each image area;
determining a maximum gray average value and a minimum gray average value in the plurality of image areas based on the gray average values of the plurality of image areas;
determining the gray level range of the second image according to the maximum gray level average value and the minimum gray level average value;
Determining the current target application scene of the object to be detected based on the gray level range of the second image; wherein the application scenes corresponding to different gray level ranges are different.
Optionally, the second determining module 103 is configured to:
if the target application scene is a first scene, determining color similarity between the second image and the plurality of color template images based on the second image and the plurality of color template images;
determining color parameters of the object to be detected based on the color similarity between the second image and the plurality of color template images; the color parameters are used for determining target color configuration information of the object to be detected;
and determining color detection parameters of at least one material to be detected of the object to be detected based on the gray value of the at least one material to be detected of the object to be detected in the second image and the target color configuration information.
Optionally, the color detection parameters include at least: gray average, gray range and color similarity;
the second determining module 103 is configured to:
acquiring at least one third image from the second image; the third image is an image area where the material to be detected is located in the second image;
Acquiring a target color image of at least one material to be detected in the object to be detected based on the target color configuration information of the object to be detected;
determining the gray average value and the gray range of the material to be detected in the at least one third image based on the at least one third image;
and determining the color similarity between the material to be measured in the at least one third image and the target color image based on the at least one third image and the target color image of the material to be measured in the third image.
Optionally, the second determining module 103 is configured to:
acquiring channel gray scale average values of the second image and the plurality of color template images on three channels of a color model;
determining a mean square error value between the second image and each of the plurality of color template images based on the three channel gray scale averages of the second image and the three channel gray scale averages of each of the plurality of color template images;
normalizing the mean square error value between the second image and each of the plurality of color template images;
And respectively determining the color similarity between the second image and each color template image in the plurality of color template images based on the plurality of mean square error values after normalization processing.
Optionally, the second determining module 103 is configured to:
if the target application scene is a second scene, determining a target threshold of the second image;
based on the target threshold, performing binarization processing on the second image to obtain a binarized image of the second image;
performing contour detection on the binarized image to obtain a target image contour of the object to be detected in the binarized image and a coverage area of the target image contour in the binarized image;
acquiring a fourth image from the second image based on the coverage area; the fourth image is an image area which is overlapped with the coverage area in the second image;
and determining the illumination brightness parameter of the object to be detected based on the fourth image.
Optionally, the second determining module 103 is configured to:
determining a gray histogram corresponding to the second image based on the gray value of each pixel point in the second image;
Determining a first class of gray levels comprising at least two gray levels based on the gray histogram of the second image; wherein the gray histogram includes: a first class of gray levels and a second class of gray levels; the frequency corresponding to any gray level in the first class of gray levels is greater than the frequency corresponding to any gray level in the second class of gray levels;
determining a target gray level with the maximum gray level based on gray values corresponding to at least two gray levels in the first class of gray levels;
and determining the target threshold value of the second image according to the difference value between the gray value corresponding to the target gray level and a preset threshold value.
Optionally, the second determining module 103 is configured to:
morphological operation processing is carried out on the binarized image to obtain a processed image;
performing contour detection on the processed image to determine a plurality of contour images in the processed image;
determining the outline of the target image with the largest outline area based on the outline areas of the plurality of outline images;
the coverage area of the target image contour within the processed image is determined.
Optionally, the second determining module 103 is configured to:
if the target application scene is a third scene, determining a spot center point of a spot image formed by the object to be detected in the second image based on brightness components of pixel points in the second image;
Acquiring at least two fifth images from within the second image based on the spot center point of the spot image; wherein, the angles of view corresponding to any two fifth images in the at least two fifth images are different;
and determining defect detection parameters of the object to be detected in any one of the at least two fifth images based on brightness components and/or chromaticity components of a plurality of image areas of any one of the at least two fifth images.
Optionally, the defect detection parameter includes: uniformity defect parameters;
the second determining module 103 is configured to:
dividing any one of the at least two fifth images into a plurality of image areas; wherein the plurality of image areas includes at least: a center image area and a corner image area;
respectively determining average brightness and average chromaticity corresponding to the center image area and the corner image area in any fifth image based on brightness components and chromaticity components of pixel points in the center image area and the corner image area of any fifth image;
determining the brightness uniformity and the chromaticity uniformity of the corner image area based on the ratio of the average brightness and the average chromaticity of the corner image area to the average brightness and the average chromaticity of the center image area;
Determining the brightness uniformity and the chromaticity uniformity of the object to be detected in any fifth image according to the brightness uniformity and the chromaticity uniformity of the corner image area;
and determining the uniformity defect parameter of the object to be detected based on the average brightness of the central image area of any one of the at least two fifth images, the brightness uniformity range and the chromaticity uniformity range of the object to be detected.
Optionally, the defect detection parameter includes: black spot defect parameters;
the second determining module 103 is configured to:
dividing any one of the at least two fifth images into four first image areas with the same area based on the light spot center point;
dividing any one of the four first image areas into a plurality of sub-image areas which are not overlapped with each other;
determining a first luminance difference between a first sub-image region and an adjacent second sub-image region within the plurality of sub-image regions based on the luminance components of the plurality of sub-image regions; the first sub-image area is any sub-image area of the plurality of sub-image areas, and the distance between the second sub-image area and the light spot center point is larger than the distance between the adjacent first sub-image area and the light spot center point;
And determining the black spot defect parameter of the object to be detected based on a plurality of first brightness differences of a first image area in any one of the at least two fifth images.
Optionally, the defect detection parameter includes: black edge defect parameters;
the second determining module 103 is configured to:
dividing any one of the at least two fifth images into two second image areas with the same area based on the light spot center point;
uniformly dividing the two second image areas into a plurality of sub-image areas along the transverse direction or the longitudinal direction;
determining a second luminance difference between a third sub-image region and a fourth sub-image region of the plurality of sub-image regions based on the luminance components of the plurality of sub-image regions; the third sub-image area and the fourth sub-image area are two adjacent sub-image areas in the same second image area, and the distance between the fourth sub-image area and the light spot center point is larger than the distance between the third sub-image area and the light spot center point;
determining a third luminance difference between the third sub-image region and a fifth sub-image region based on the luminance components of the plurality of sub-image regions; the third sub-image area and the fifth sub-image area are respectively two sub-image areas in different second image areas, and the distance between the third sub-image area and the light spot center point is the same as the distance between the fifth sub-image area and the light spot center point;
And determining the black edge defect parameter of the object to be detected based on a plurality of second brightness differences and a plurality of third brightness differences of a second image area in any one of the at least two fifth images.
Optionally, the acquiring module 101 is configured to:
acquiring a template image of the object to be detected, and determining the position information of the object to be detected in the first image based on the template image;
acquiring interception position information of an image interception window in the first image, and determining position offset information according to the interception position information of the image interception window and the position information of the object to be detected;
based on the position offset information, adjusting the position of the image capturing window;
and acquiring a second image of the object to be detected from the first image by utilizing the adjusted image intercepting window.
Other embodiments of the application will be apparent to those skilled in the art from consideration of the specification and practice of the application disclosed herein. This application is intended to cover any variations, uses, or adaptations of the application following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the application pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.
It is to be understood that the invention is not limited to the precise arrangements and instrumentalities shown in the drawings, which have been described above, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the invention is limited only by the appended claims.

Claims (16)

1. A method of detection, the method comprising:
acquiring a second image of the object to be detected based on the acquired first image; the second image is an image area which at least contains the object to be detected in the first image;
determining a target application scene where the object to be detected is currently located according to the second image;
determining target detection parameters of the object to be detected based on the second image and the target application scene; wherein, the target detection parameters of the object to be detected under different target application scenes are different;
and if the target detection parameters meet preset detection conditions, determining that the object to be detected in the target application scene is qualified in detection.
2. The method according to claim 1, wherein determining, according to the second image, a target application scenario in which the object to be measured is currently located, includes:
dividing the second image into a plurality of image areas, and determining a gray average value of each image area;
Determining a maximum gray average value and a minimum gray average value in the plurality of image areas based on the gray average values of the plurality of image areas;
determining the gray level range of the second image according to the maximum gray level average value and the minimum gray level average value;
determining the current target application scene of the object to be detected based on the gray level range of the second image; wherein the application scenes corresponding to different gray level ranges are different.
3. The method according to claim 1, wherein determining the target detection parameter of the object to be detected based on the second image and the target application scene comprises:
if the target application scene is a first scene, determining color similarity between the second image and the plurality of color template images based on the second image and the plurality of color template images;
determining color parameters of the object to be detected based on the color similarity between the second image and the plurality of color template images; the color parameters are used for determining target color configuration information of the object to be detected;
and determining color detection parameters of at least one material to be detected of the object to be detected based on the gray value of the at least one material to be detected of the object to be detected in the second image and the target color configuration information.
4. A method according to claim 3, wherein the color detection parameters comprise at least: gray average, gray range and color similarity;
the determining the color detection parameter of the at least one material to be detected of the object to be detected based on the gray value of the at least one material to be detected of the object to be detected in the second image and the target color configuration information includes:
acquiring at least one third image from the second image; the third image is an image area where the material to be detected is located in the second image;
acquiring a target color image of at least one material to be detected in the object to be detected based on the target color configuration information of the object to be detected;
determining the gray average value and the gray range of the material to be detected in the at least one third image based on the at least one third image;
and determining the color similarity between the material to be measured in the at least one third image and the target color image based on the at least one third image and the target color image of the material to be measured in the third image.
5. The method of claim 3, wherein determining the color similarity between the second image and the plurality of color template images based on the second image and the plurality of color template images comprises:
acquiring channel gray scale average values of the second image and the plurality of color template images on three channels of a color model;
determining a mean square error value between the second image and each of the plurality of color template images based on the three channel gray scale averages of the second image and the three channel gray scale averages of each of the plurality of color template images;
normalizing the mean square error value between the second image and each of the plurality of color template images;
and respectively determining the color similarity between the second image and each color template image in the plurality of color template images based on the plurality of mean square error values after normalization processing.
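Claim 5 can be sketched as: per-channel means, mean-square error against each template, normalization, then inversion so that a smaller error yields a higher similarity. The min-max normalization and the 1-minus inversion are assumed choices; the claim only requires normalizing the mean square error values.

    import numpy as np

    def template_similarities(second_image, templates):
        """Color similarity from the MSE of per-channel gray averages."""
        img_means = second_image.reshape(-1, 3).mean(axis=0)   # three channel means
        mses = np.array([float(((img_means - t.reshape(-1, 3).mean(axis=0)) ** 2).mean())
                         for t in templates])
        span = mses.max() - mses.min()
        norm = (mses - mses.min()) / (span if span > 0 else 1.0)  # normalize to [0, 1]
        return 1.0 - norm   # smaller error means higher similarity

    rng = np.random.default_rng(0)
    img = rng.integers(0, 256, (64, 64, 3), dtype=np.uint8)
    tpls = [rng.integers(0, 256, (64, 64, 3), dtype=np.uint8) for _ in range(3)]
    print(template_similarities(img, tpls))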
6. The method according to claim 1, wherein determining the target detection parameter of the object to be detected based on the second image and the target application scene comprises:
if the target application scene is a second scene, determining a target threshold of the second image;
based on the target threshold, performing binarization processing on the second image to obtain a binarized image of the second image;
performing contour detection on the binarized image to obtain a target image contour of the object to be detected in the binarized image and a coverage area of the target image contour in the binarized image;
acquiring a fourth image from the second image based on the coverage area; the fourth image is an image area which is overlapped with the coverage area in the second image;
and determining the illumination brightness parameter of the object to be detected based on the fourth image.
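For the second scene, claim 6 reads as threshold → binarize → contour → crop. A plausible OpenCV rendering follows; using the largest external contour, a bounding-rectangle crop as the coverage area, and the mean gray value of the fourth image as the illumination brightness parameter are all assumptions:

    import cv2
    import numpy as np

    def illumination_roi(second_gray, target_threshold):
        """Binarize, find the object's contour, and measure the covered region."""
        _, binary = cv2.threshold(second_gray, target_threshold, 255,
                                  cv2.THRESH_BINARY)
        contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            return None
        largest = max(contours, key=cv2.contourArea)   # target image contour
        x, y, w, h = cv2.boundingRect(largest)         # coverage area
        fourth = second_gray[y:y+h, x:x+w]             # fourth image
        return float(fourth.mean())                    # brightness parameter

    img = np.zeros((200, 200), np.uint8)
    cv2.circle(img, (100, 100), 60, 255, -1)
    print(illumination_roi(img, 128))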
7. The method of claim 6, wherein the determining the target threshold for the second image comprises:
determining a gray histogram corresponding to the second image based on the gray value of each pixel point in the second image;
determining a first class of gray levels comprising at least two gray levels based on the gray histogram of the second image; wherein the gray histogram includes: a first class of gray levels and a second class of gray levels; the frequency corresponding to any gray level in the first class of gray levels is greater than the frequency corresponding to any gray level in the second class of gray levels;
determining a target gray level with the maximum gray level based on gray values corresponding to at least two gray levels in the first class of gray levels;
and determining the target threshold value of the second image according to the difference value between the gray value corresponding to the target gray level and a preset threshold value.
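Claim 7's threshold can be sketched with a 256-bin histogram: take the most frequent gray levels as the first class, pick the brightest of them, and subtract a preset offset. The values of top_k and preset below are assumptions; the claim does not fix how many levels the first class contains.

    import numpy as np

    def target_threshold(second_gray, top_k=2, preset=20):
        """Histogram threshold: brightest dominant gray level minus a preset offset."""
        hist, _ = np.histogram(second_gray, bins=256, range=(0, 256))
        first_class = np.argsort(hist)[-top_k:]   # most frequent gray levels
        target_level = int(first_class.max())     # largest gray value among them
        return max(target_level - preset, 0)      # difference with preset threshold

    img = (np.random.rand(100, 100) * 255).astype(np.uint8)
    print(target_threshold(img))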
8. The method according to claim 6, wherein the performing contour detection on the binarized image to obtain a target image contour of the object to be detected in the binarized image and a coverage area of the target image contour in the binarized image includes:
performing morphological operation processing on the binarized image to obtain a processed image;
performing contour detection on the processed image to determine a plurality of contour images in the processed image;
determining the target image contour with the largest contour area based on the contour areas of the plurality of contour images;
and determining the coverage area of the target image contour within the processed image.
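Claim 8, sketched with OpenCV morphology: opening and closing with an assumed 5x5 elliptical kernel, then the largest-area contour filled into a mask whose nonzero pixels approximate the coverage area.

    import cv2
    import numpy as np

    def target_contour_mask(binary):
        """Morphological clean-up, then keep the largest contour's coverage."""
        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
        processed = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)    # denoise
        processed = cv2.morphologyEx(processed, cv2.MORPH_CLOSE, kernel)
        contours, _ = cv2.findContours(processed, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        mask = np.zeros_like(processed)
        if contours:
            largest = max(contours, key=cv2.contourArea)   # max contour area
            cv2.drawContours(mask, [largest], -1, 255, cv2.FILLED)
        return mask   # nonzero pixels = coverage area of the target contour

    img = np.zeros((120, 120), np.uint8)
    cv2.rectangle(img, (30, 30), (90, 90), 255, -1)
    img[5:10, 5:10] = 255                      # small noise blob
    print(int(target_contour_mask(img).sum() // 255))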
9. The method according to claim 1, wherein determining the target detection parameter of the object to be detected based on the second image and the target application scene comprises:
if the target application scene is a third scene, determining a spot center point of a spot image formed by the object to be detected in the second image based on brightness components of pixel points in the second image;
acquiring at least two fifth images from within the second image based on the spot center point of the spot image; wherein the angles of view corresponding to any two of the at least two fifth images are different;
and determining defect detection parameters of the object to be detected in any one of the at least two fifth images based on brightness components and/or chromaticity components of a plurality of image areas of any one of the at least two fifth images.
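For claim 9, the sketch below takes a luminance-weighted centroid as the spot center point (one common choice; the claim only says the center is derived from brightness components) and crops windows at assumed size fractions around it to stand in for the different angles of view:

    import numpy as np

    def spot_center(second_image):
        """Luminance-weighted centroid as the spot center point (assumed BGR input)."""
        b, g, r = second_image[..., 0], second_image[..., 1], second_image[..., 2]
        luma = 0.114 * b + 0.587 * g + 0.299 * r        # Rec.601 luminance
        ys, xs = np.indices(luma.shape)
        total = luma.sum() + 1e-9
        return int((xs * luma).sum() / total), int((ys * luma).sum() / total)

    def fov_crops(second_image, center, fractions=(1.0, 0.5)):
        """Fifth images: crops around the spot center at different fields of view."""
        h, w = second_image.shape[:2]
        cx, cy = center
        crops = []
        for f in fractions:                  # each fraction = one angle of view
            hw, hh = int(w * f / 2), int(h * f / 2)
            x0, y0 = max(cx - hw, 0), max(cy - hh, 0)
            crops.append(second_image[y0:cy + hh, x0:cx + hw])
        return crops

    img = np.zeros((200, 200, 3), np.uint8)
    img[90:110, 90:110] = 255                # bright spot near the middle
    c = spot_center(img)
    print(c, [v.shape for v in fov_crops(img, c)])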
10. The method of claim 9, wherein the defect detection parameters include: uniformity defect parameters;
the determining, based on the brightness components and/or the chromaticity components of the plurality of image areas of any one of the at least two fifth images, defect detection parameters of the object to be detected in any one of the at least two fifth images includes:
dividing any one of the at least two fifth images into a plurality of image areas; wherein the plurality of image areas includes at least: a center image area and a corner image area;
respectively determining average brightness and average chromaticity corresponding to the center image area and the corner image area in any fifth image based on brightness components and chromaticity components of pixel points in the center image area and the corner image area of any fifth image;
determining the brightness uniformity and the chromaticity uniformity of the corner image area based on the ratios of the average brightness and the average chromaticity of the corner image area to the average brightness and the average chromaticity of the center image area, respectively;
determining the brightness uniformity and the chromaticity uniformity of the object to be detected in any fifth image according to the brightness uniformity and the chromaticity uniformity of the corner image area;
and determining the uniformity defect parameter of the object to be detected based on the average brightness of the center image area of any one of the at least two fifth images and on the brightness uniformity range and the chromaticity uniformity range of the object to be detected.
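A sketch of the claim-10 ratios, assuming BGR input, Rec.601 luminance, a single Cb-style chromaticity component, and 10% corner/center patches; returning the worst corner ratio is one plausible way to aggregate the four corners:

    import numpy as np

    def uniformity(fifth_image, corner_frac=0.1):
        """Corner-to-center ratios of average luminance and chromaticity."""
        h, w = fifth_image.shape[:2]
        img = fifth_image.astype(np.float64)
        b, g, r = img[..., 0], img[..., 1], img[..., 2]   # BGR order assumed
        y = 0.114 * b + 0.587 * g + 0.299 * r             # luminance component
        cb = 128.0 + 0.564 * (b - y)                      # one chromaticity component
        ch, cw = max(int(h * corner_frac), 1), max(int(w * corner_frac), 1)
        yc = y[h//2-ch:h//2+ch, w//2-cw:w//2+cw].mean() + 1e-9
        cbc = cb[h//2-ch:h//2+ch, w//2-cw:w//2+cw].mean() + 1e-9
        corners = [(sr, sc) for sr in (slice(0, ch), slice(h-ch, h))
                            for sc in (slice(0, cw), slice(w-cw, w))]
        lum = min(y[s].mean() / yc for s in corners)      # brightness uniformity
        chrom = min(cb[s].mean() / cbc for s in corners)  # chromaticity uniformity
        return lum, chrom, float(yc)                      # plus center brightness

    img = (np.random.rand(240, 320, 3) * 255).astype(np.uint8)
    print(uniformity(img))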
11. The method of claim 9, wherein the defect detection parameters include: black spot defect parameters;
the determining, based on the brightness components and/or the chromaticity components of the plurality of image areas of any one of the at least two fifth images, defect detection parameters of the object to be detected in any one of the at least two fifth images includes:
dividing any one of the at least two fifth images into four first image areas of equal area based on the spot center point;
dividing any one of the four first image areas into a plurality of sub-image areas that do not overlap one another;
determining a first brightness difference between a first sub-image area and an adjacent second sub-image area within the plurality of sub-image areas based on the brightness components of the plurality of sub-image areas; wherein the first sub-image area is any one of the plurality of sub-image areas, and the distance between the second sub-image area and the spot center point is larger than the distance between the first sub-image area and the spot center point;
and determining the black spot defect parameter of the object to be detected based on a plurality of first brightness differences of a first image area in any one of the at least two fifth images.
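Claim 11, sketched on a grayscale fifth image: each quadrant is re-oriented so that "away from the spot center" is always the direction of increasing index, averaged over an assumed 8x8 grid of sub-regions, and scored by the largest inner-minus-outer brightness drop between adjacent cells (the grid size and max aggregation are assumptions):

    import numpy as np

    def black_spot_score(fifth_gray, center, grid=8):
        """Worst inner-minus-outer brightness drop between adjacent sub-regions."""
        cx, cy = center
        # Four quadrants (first image areas), flipped so outward = increasing index.
        quads = [np.flip(fifth_gray[:cy, :cx], (0, 1)),
                 np.flip(fifth_gray[:cy, cx:], 0),
                 np.flip(fifth_gray[cy:, :cx], 1),
                 fifth_gray[cy:, cx:]]
        worst = 0.0
        for q in quads:
            qh, qw = (q.shape[0] // grid) * grid, (q.shape[1] // grid) * grid
            if qh == 0 or qw == 0:
                continue
            cells = (q[:qh, :qw].astype(np.float64)
                     .reshape(grid, qh // grid, grid, qw // grid).mean(axis=(1, 3)))
            # First brightness difference: inner cell minus its outer neighbor.
            drops = np.concatenate([(cells[:-1] - cells[1:]).ravel(),
                                    (cells[:, :-1] - cells[:, 1:]).ravel()])
            worst = max(worst, float(drops.max()))
        return worst   # a large value suggests a black-spot defect

    img = np.full((200, 200), 200, np.uint8)
    img[60:70, 40:50] = 80                    # dark patch = candidate black spot
    print(black_spot_score(img, (100, 100)))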
12. The method of claim 9, wherein the defect detection parameters include: black edge defect parameters;
the determining, based on the brightness components and/or the chromaticity components of the plurality of image areas of any one of the at least two fifth images, defect detection parameters of the object to be detected in any one of the at least two fifth images includes:
dividing any one of the at least two fifth images into two second image areas of equal area based on the spot center point;
uniformly dividing each of the two second image areas into a plurality of sub-image areas along the transverse or longitudinal direction;
determining a second brightness difference between a third sub-image area and a fourth sub-image area of the plurality of sub-image areas based on the brightness components of the plurality of sub-image areas; wherein the third sub-image area and the fourth sub-image area are two adjacent sub-image areas in the same second image area, and the distance between the fourth sub-image area and the spot center point is larger than the distance between the third sub-image area and the spot center point;
determining a third brightness difference between the third sub-image area and a fifth sub-image area based on the brightness components of the plurality of sub-image areas; wherein the third sub-image area and the fifth sub-image area are sub-image areas in different second image areas, and the distance between the third sub-image area and the spot center point is the same as the distance between the fifth sub-image area and the spot center point;
and determining the black edge defect parameter of the object to be detected based on a plurality of second brightness differences and a plurality of third brightness differences of a second image area in any one of the at least two fifth images.
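Claim 12, sketched with a vertical split through the spot center and an assumed 16 column strips per half: the "second difference" compares adjacent strips within one half, the "third difference" compares strips at mirrored distances across the two halves.

    import numpy as np

    def black_edge_score(fifth_gray, cx, n=16):
        """Adjacent (within-half) and symmetric (across-half) brightness gaps."""
        img = fifth_gray.astype(np.float64)
        left = np.flip(img[:, :cx], 1)        # column 0 is now nearest the center
        right = img[:, cx:]
        m = min(left.shape[1], right.shape[1]) // n * n
        if m == 0:
            return 0.0, 0.0
        # Column-strip means: index 0 = innermost strip, index n-1 = outermost.
        lm = left[:, :m].reshape(img.shape[0], n, m // n).mean(axis=(0, 2))
        rm = right[:, :m].reshape(img.shape[0], n, m // n).mean(axis=(0, 2))
        # Second difference: inner strip minus adjacent outer strip, per half.
        second = max(float((lm[:-1] - lm[1:]).max()),
                     float((rm[:-1] - rm[1:]).max()))
        # Third difference: strips at the same distance from the center.
        third = float(np.abs(lm - rm).max())
        return second, third   # large values suggest a black-edge defect

    img = np.full((100, 200), 180, np.uint8)
    img[:, :8] = 60                           # dark band on the left edge
    print(black_edge_score(img, cx=100))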
13. The method of claim 1, wherein acquiring the second image of the object to be detected based on the acquired first image comprises:
acquiring a template image of the object to be detected, and determining position information of the object to be detected in the first image based on the template image;
acquiring capture position information of an image capture window in the first image, and determining position offset information according to the capture position information of the image capture window and the position information of the object to be detected;
adjusting the position of the image capture window based on the position offset information;
and acquiring the second image of the object to be detected from the first image by using the adjusted image capture window.
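Claim 13 reads as template matching followed by a window shift. A sketch using OpenCV's matchTemplate (normalized cross-correlation is an assumed matching choice); the offset arithmetic mirrors the claim's capture-window adjustment:

    import cv2
    import numpy as np

    def crop_second_image(first_image, template, window):
        """Shift the capture window onto the matched object, then crop."""
        res = cv2.matchTemplate(first_image, template, cv2.TM_CCOEFF_NORMED)
        _, _, _, (tx, ty) = cv2.minMaxLoc(res)   # best-match top-left corner
        x, y, w, h = window                      # current capture-window position
        dx, dy = tx - x, ty - y                  # position offset information
        x, y = x + dx, y + dy                    # adjusted capture window
        return first_image[y:y + h, x:x + w]     # second image

    img = (np.random.rand(240, 320) * 255).astype(np.uint8)
    tpl = img[60:100, 80:140].copy()
    print(crop_second_image(img, tpl, (0, 0, 60, 40)).shape)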
14. A detection device, the device comprising:
the acquisition module is used for acquiring a second image of the object to be detected based on the acquired first image; the second image is an image area which at least contains the object to be detected in the first image;
the first determining module is used for determining a current target application scene of the object to be detected according to the second image;
the second determining module is used for determining target detection parameters of the object to be detected based on the second image and the target application scene; wherein, the target detection parameters of the object to be detected under different target application scenes are different;
and the detection module is used for determining that the object to be detected in the target application scene is qualified in detection if the target detection parameter meets a preset detection condition.
15. A detection apparatus, characterized by comprising:
a processor;
a memory for storing executable instructions;
wherein the processor is configured to execute the executable instructions stored in the memory to implement the detection method of any one of claims 1 to 13.
16. A non-transitory computer-readable storage medium having stored thereon executable instructions which, when executed by a processor of a detection apparatus, cause the detection apparatus to perform the detection method of any one of claims 1 to 13.

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination