CN111627069A - Dot detection method, terminal and computer-readable storage medium - Google Patents

Dot detection method, terminal and computer-readable storage medium

Info

Publication number
CN111627069A
Authority
CN
China
Prior art keywords
gray
pixels
gray value
values
pixel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010354854.5A
Other languages
Chinese (zh)
Other versions
CN111627069B (en)
Inventor
李楚翘
邓亮
陈先开
冯良炳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Cosmosvision Intelligent Technology Co ltd
Original Assignee
Shenzhen Cosmosvision Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Cosmosvision Intelligent Technology Co ltd
Priority to CN202010354854.5A
Publication of CN111627069A
Application granted
Publication of CN111627069B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/80: Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/10: Segmentation; Edge detection
    • G06T 7/136: Segmentation; Edge detection involving thresholding
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/60: Analysis of geometric attributes
    • G06T 7/66: Analysis of geometric attributes of image moments or centre of gravity
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10004: Still image; Photographic image


Abstract

The invention discloses a dot detection method, a terminal and a computer readable storage medium, wherein the method comprises the following steps: shooting an image, wherein the image is provided with dots and a background, and the gray value of pixels in the dots is higher than that of pixels in the background; detecting the gray value of a pixel on a shot image; dividing mutually communicated pixels in the image into the same area; setting the area to be an ellipse, and establishing a gray model; and substituting the pixels of the region into the gray model, calculating the model parameter values, and calibrating the dots. According to the technical scheme of the invention, firstly, the image is subjected to binarization processing, a dot area is preliminarily determined, a gray model is established at the boundary of the target based on the gray value of the background and the gray difference between the target dot and the background, and the parameters of each dot in the calibration plate can be calculated by utilizing the model, so that the dot calibration is completed, and the accuracy meets the camera calibration requirement.

Description

Dot detection method, terminal and computer-readable storage medium
Technical Field
The invention relates to the field of computer vision, in particular to a dot detection method, a terminal and a computer-readable storage medium.
Background
In the camera calibration process, the dots on the calibration plate are generally detected with a least-squares method: scattered points on the dot edges are extracted, and a figure is then fitted to them by least squares. The scattered points extracted in this way deviate considerably from the actual dot outline, so the fitting result carries a large error.
Therefore, a new technical scheme is needed that calibrates dots accurately and meets the camera calibration requirements.
Disclosure of Invention
The invention mainly aims to provide a dot detection method, a dot detection terminal and a computer-readable storage medium, aiming at accurately calibrating dots and meeting the calibration requirement of a camera.
In order to achieve the above object, the present invention provides a dot detection method, including: shooting an image, wherein the image has a dot and a background, and the gray value of pixels in the dot is higher than that of pixels in the background; detecting the gray value of each pixel of the shot image, setting the gray value of a pixel to 0 when it does not exceed a preset threshold and to 255 when it exceeds the preset threshold, to obtain a black-and-white binary image; dividing the mutually connected pixels with gray value 255 in the black-and-white binary image into the same region; setting the region to be an ellipse and establishing the gray model

I_e(x, y) = bgd + cst·amp(d(x, y)),

wherein

d(x, y) = ((x - cx)cosθ + (y - cy)sinθ)^2/rx^2 + (-(x - cx)sinθ + (y - cy)cosθ)^2/ry^2

represents the distance from a pixel in said region to said ellipse, I_e(x, y) is the actual gray value of the pixel in the region, rx and ry are the major and minor axes of the ellipse, cx and cy are the center coordinates of the ellipse, θ is the rotation angle of the major axis, bgd is the gray value of the background, cst is the difference between the gray value of the dot and the gray value of the background, and amp(d(x, y)) = exp(-d(x, y)^ce) ∈ (0, 1) represents the magnification of pixels at different distances, ce being the magnification control coefficient; substituting the pixels of the region into the gray model to calculate the values of cst, bgd, cx, cy, θ, rx, ry and ce; and calibrating the dots according to the values of cst, bgd, cx, cy, θ, rx, ry and ce.
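As a rough numerical illustration of the gray model described above (a minimal sketch; the function names and all parameter values are illustrative, not from the patent):

```python
import math

def ellipse_distance(x, y, cx, cy, rx, ry, theta):
    # d(x, y): equals 1 on the ellipse, < 1 inside, > 1 outside
    dx, dy = x - cx, y - cy
    u = dx * math.cos(theta) + dy * math.sin(theta)    # coordinates in the
    v = -dx * math.sin(theta) + dy * math.cos(theta)   # rotated ellipse frame
    return (u / rx) ** 2 + (v / ry) ** 2

def gray_model(x, y, cst, bgd, cx, cy, theta, rx, ry, ce):
    # estimated gray value: background plus contrast scaled by amp(d)
    d = ellipse_distance(x, y, cx, cy, rx, ry, theta)
    amp = math.exp(-d ** ce)    # in (0, 1); ~1 inside the dot, ~0 far outside
    return bgd + cst * amp

# Hypothetical dot: center (50, 50), axes 10 and 8, no rotation.
center = gray_model(50, 50, cst=200, bgd=30, cx=50, cy=50,
                    theta=0.0, rx=10, ry=8, ce=4)   # -> 230.0 (bgd + cst)
far = gray_model(90, 50, cst=200, bgd=30, cx=50, cy=50,
                 theta=0.0, rx=10, ry=8, ce=4)      # -> 30.0 (background)
```

At the dot center d = 0 gives amp = 1, so the estimate is bgd + cst; far outside the ellipse amp underflows to 0 and the estimate falls back to the background value.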
In order to achieve the above object, the present invention provides a terminal, which includes a processor, a memory and a communication bus; the communication bus is used for realizing connection and communication between the processor and the memory; the processor is configured to execute a program stored in the memory to perform the following steps: shooting an image, wherein the image has a dot and a background, and the gray value of pixels in the dot is higher than that of pixels in the background; detecting the gray value of each pixel of the shot image, setting the gray value of a pixel to 0 when it does not exceed a preset threshold and to 255 when it exceeds the preset threshold, to obtain a black-and-white binary image; dividing the mutually connected pixels with gray value 255 in the black-and-white binary image into the same region; setting the region to be an ellipse and establishing the gray model

I_e(x, y) = bgd + cst·amp(d(x, y)),

wherein

d(x, y) = ((x - cx)cosθ + (y - cy)sinθ)^2/rx^2 + (-(x - cx)sinθ + (y - cy)cosθ)^2/ry^2

represents the distance from a pixel in said region to said ellipse, I_e(x, y) is the actual gray value of the pixel in the region, rx and ry are the major and minor axes of the ellipse, cx and cy are the center coordinates of the ellipse, θ is the rotation angle of the major axis, bgd is the gray value of the background, cst is the difference between the gray value of the dot and the gray value of the background, and amp(d(x, y)) = exp(-d(x, y)^ce) ∈ (0, 1) represents the magnification of pixels at different distances, ce being the magnification control coefficient; substituting the pixels of the region into the gray model to calculate the values of cst, bgd, cx, cy, θ, rx, ry and ce; and calibrating the dots according to the values of cst, bgd, cx, cy, θ, rx, ry and ce.
To achieve the above object, the present invention provides a computer-readable storage medium storing one or more programs, which are executable by one or more processors to implement the steps of the aforementioned method.
According to the technical scheme, the dot detection method, the terminal and the computer readable storage medium have the advantages that:
according to the technical scheme, firstly, binarization processing is carried out on an image, a dot area is preliminarily determined, a gray model is established at the target boundary based on a background gray value and the gray difference between a target dot and a background, parameters of all dots in a calibration plate can be calculated by using the model, so that dot calibration is completed, and the accuracy meets the camera calibration requirement.
Drawings
FIG. 1 is a flow diagram of a dot detection method according to one embodiment of the invention;
FIG. 2 is an original image according to one embodiment of the present invention;
FIG. 3 is a binarized image according to one embodiment of the present invention;
FIG. 4 is a schematic diagram of a pixel according to one embodiment of the invention;
FIG. 5 is a schematic diagram of region division according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of point-to-ellipse distances according to one embodiment of the present invention;
FIG. 7 is a schematic diagram of gray scale model control coefficients according to one embodiment of the present invention;
FIG. 8 is a schematic diagram of a gray scale model according to one embodiment of the invention;
FIG. 9 is a schematic view of dot calibration according to one embodiment of the present invention;
FIG. 10 is a flow diagram of a dot detection method according to one embodiment of the invention;
FIG. 11 is a flow diagram of searching for connected pixels according to one embodiment of the invention;
FIG. 12 is a schematic diagram of searching for connected pixels according to one embodiment of the invention;
FIG. 13 is a flow diagram of grayscale model optimization according to one embodiment of the present invention;
FIG. 14 is a flow diagram of a gradient descent process according to one embodiment of the invention;
fig. 15 is a block diagram of a terminal according to an embodiment of the present invention.
The implementation, functional features and advantages of the objects of the present invention will be further described with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
As shown in fig. 1, an embodiment of the present invention provides a dot detection method, including:
step S110, shooting an image, wherein the image is provided with a dot and a background, and the gray value of the pixel in the dot is higher than that of the pixel in the background.
The technical scheme of the embodiment is suitable for calibrating the camera and is also suitable for other detection fields. In the present embodiment, the camera calibration is taken as an example for explanation, and as shown in fig. 2, the captured image includes calibration dots and a background.
Step S120, detecting the gray value of a pixel on the shot image, setting the gray value of the pixel to be 0 when the gray value of the pixel does not exceed a preset threshold, and setting the gray value of the pixel to be 255 when the gray value of the pixel exceeds the preset threshold, so as to obtain a black-and-white binary image.
As shown in fig. 3, the captured image is binarized, and the gray value of each pixel of the picture is set to 0 or 255. If the gray value gra_scale_i of the current pixel px_i is less than the threshold th, the gray value of the current pixel is set to 0, i.e. gra_scale_i = 0; otherwise, if the current gray value is larger than the threshold th, gra_scale_i = 255. Preferably, the specific threshold th = 155 may be selected; with th = 155, the black-and-white binary image is rendered well.
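A minimal sketch of this binarization step (pure Python over a nested-list image; the helper name binarize is illustrative, not from the patent):

```python
def binarize(image, th=155):
    # gray values not exceeding th become 0; all others become 255
    return [[0 if g <= th else 255 for g in row] for row in image]

img = [[12, 200, 154],
       [156, 155, 255]]
bw = binarize(img)
# bw == [[0, 255, 0], [255, 0, 255]]
```

In practice the same operation is a one-liner on an array image (e.g. a threshold over a NumPy array), but the list form keeps the example dependency-free.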
And step S130, dividing the mutually communicated pixels with the gray value of 255 in the black-and-white binary image into the same area.
In this embodiment, as shown in fig. 4, when the image is partitioned, the mutually connected white pixels (gra_scale_i = 255) are grouped into one region run_i; the entire image is thus divided into i regions, forming the regions shown in fig. 5.
Step S140, setting the region to be an ellipse and establishing the gray model

I_e(x, y) = bgd + cst·amp(d(x, y)),

wherein

d(x, y) = ((x - cx)cosθ + (y - cy)sinθ)^2/rx^2 + (-(x - cx)sinθ + (y - cy)cosθ)^2/ry^2

represents the distance from a pixel in the region to the ellipse, I_e(x, y) is the actual gray value of the pixel in the region, rx and ry are the major and minor axes of the ellipse, cx and cy are the center coordinates of the ellipse, θ is the rotation angle of the major axis, bgd is the gray value of the background, cst is the difference between the gray value of the dot and the gray value of the background, and amp(d(x, y)) = exp(-d(x, y)^ce) ∈ (0, 1) denotes the magnification of pixels at different distances, ce being the magnification control coefficient.
In this embodiment, the distance d(x, y) from a point to the ellipse is defined first. The ellipse E with major axis rx, minor axis ry, center (cx, cy) and rotation angle θ of the major axis satisfies the equation

((x - cx)cosθ + (y - cy)sinθ)^2/rx^2 + (-(x - cx)sinθ + (y - cy)cosθ)^2/ry^2 = 1.

As shown in fig. 6, the distance d(x, y) from a point p(x, y) in region run_i to the ellipse E is defined as

d(x, y) = ((x - cx)cosθ + (y - cy)sinθ)^2/rx^2 + (-(x - cx)sinθ + (y - cy)cosθ)^2/ry^2.

Points on the ellipse E satisfy d(x, y) = 1, points inside the ellipse satisfy d(x, y) < 1, and points outside the ellipse satisfy d(x, y) > 1.
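A short numerical check of this distance definition (the function name and the sample ellipse are illustrative):

```python
import math

def ellipse_distance(x, y, cx, cy, rx, ry, theta):
    # d(x, y) from the definition above
    dx, dy = x - cx, y - cy
    u = dx * math.cos(theta) + dy * math.sin(theta)    # rotate the point into
    v = -dx * math.sin(theta) + dy * math.cos(theta)   # the ellipse frame
    return (u / rx) ** 2 + (v / ry) ** 2

# Axis-aligned ellipse centered at the origin with rx = 4, ry = 2:
on_e    = ellipse_distance(4, 0, 0, 0, 4, 2, 0.0)   # 1.0 (on E)
inside  = ellipse_distance(1, 0, 0, 0, 4, 2, 0.0)   # 0.0625 (< 1)
outside = ellipse_distance(8, 0, 0, 0, 4, 2, 0.0)   # 4.0 (> 1)
```

The three values confirm the on/inside/outside behaviour; a nonzero θ only changes the frame in which u and v are measured.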
The gray model is defined next. Assume the calibration-plate background is black and the target dots are white; let bgd be the background gray value and cst the gray value difference between the target dot and the background. The gray model defined at the target boundary is

I_e(x, y) = bgd + cst·amp(d(x, y)).

The left side I_e(x, y) of the model equation is the actual gray value of a pixel in region run_i; the right side is the estimated gray value, a function whose coefficients are the background gray value bgd, the gray value difference cst, the ellipse center (cx, cy), the major and minor axes (rx, ry), the rotation angle θ and the magnification control coefficient ce.

d(x, y) = 1 is the target boundary, and amp(d(x, y)) = exp(-d(x, y)^ce) ∈ (0, 1) is the magnification of pixels at different distances; ce is a parameter that controls the shape of amp(d(x, y)), and a large control coefficient ce makes the magnification function steeper near the target boundary d(x, y) = 1. The model estimates the gray value of a white target as the background value plus an appropriately scaled contrast, with the scale factor controlled by the distance d(x, y) from the current point to the target boundary (the ellipse): for d(x, y) > 1 the scale factor drops rapidly to 0 (the estimated gray value tends to the background value), and for d(x, y) < 1 it rises rapidly to 1 (the estimated gray value tends to bgd + cst).
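The effect of ce on the steepness of the magnification function can be illustrated with a short sketch (the values of ce and the sample distances are hypothetical):

```python
import math

def amp(d, ce):
    # amp(d) = exp(-d**ce), in (0, 1)
    return math.exp(-d ** ce)

# A larger ce makes the transition around the boundary d = 1 sharper:
soft  = [round(amp(d, 2), 3) for d in (0.5, 1.0, 1.5)]
sharp = [round(amp(d, 16), 3) for d in (0.5, 1.0, 1.5)]
# soft  == [0.779, 0.368, 0.105]  (gradual change across the boundary)
# sharp == [1.0, 0.368, 0.0]      (nearly a step at d = 1)
```

Note that at the boundary itself amp(1) = exp(-1) for every ce; ce only controls how quickly the value saturates to 1 inside and to 0 outside.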
And step S150, substituting the pixels of the area into the gray model, and calculating the values of cst, bgd, cx, cy, theta, rx, ry and ce.
Step S160, calibrating the dots according to the values of cst, bgd, cx, cy, θ, rx, ry, ce, as shown in fig. 9.
The technical scheme of this embodiment uses a recognition model based on gray differences, so the dots on the calibration plate can be accurately identified. Unlike the traditional method of first screening edge points and then fitting them, this method first divides the pixels into connected domains according to their gray values and then detects each connected domain with the gray model, detecting the dots on the calibration plate more accurately.
As shown in fig. 10, an embodiment of the present invention provides a dot detection method, including:
step S1010, shooting an image, wherein the image is provided with a dot and a background, and the gray value of the pixel in the dot is higher than the gray value of the pixel in the background.
Step S1020, detecting a gray value of a pixel on the captured image, setting the gray value of the pixel to 0 when the gray value of the pixel does not exceed a preset threshold, and setting the gray value of the pixel to 255 when the gray value of the pixel exceeds the preset threshold, thereby obtaining a black-and-white binary image.
Step S1030, scanning the pixels of the black-and-white binary image row by row, dividing the consecutive pixels with gray value 255 in each row into one run, and dividing runs of gray value 255 that are connected between two adjacent rows into the same region.
As shown in fig. 11 and 12, when searching for connected pixels, the pixels of the first row are scanned first, and the start point and end point of each run of consecutive white pixels are recorded, e.g. the first run of the first row run1(2,7) and the second run run2(10,11). The second row is searched next: if a run in the second row is connected with a run of the first row, e.g. (7,7) of the second row is connected with run1(2,7) of the first row, the name run1 is assigned to (7,7); similarly, run2 is assigned to (10,10) of the second row. The run (7,8) of the third row is connected with both runs of the previous row, so the smaller name run1 is assigned to it and run2 is marked as equivalent to run1. A run of the fourth row that is not connected with any run of the previous row is given a new name, run3. The white pixels of the whole image are scanned and divided in the same way, yielding the runs run_i on the image and the pixel coordinates within each run.
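The row-scan and equivalence-merging procedure above can be sketched as a two-pass, run-based labeling (an illustrative implementation using 4-connectivity of overlapping runs and a small union-find for the equivalence classes; all names are assumptions, not from the patent):

```python
def label_runs(bw):
    # Two-pass, run-based connected-component labeling of a 0/255 image.
    # Returns a list of (row, start, end, label) runs with equivalences resolved.
    parent = {}                                  # union-find over run names

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]        # path halving
            a = parent[a]
        return a

    prev_runs, all_runs, next_label = [], [], 1
    for r, row in enumerate(bw):
        runs = []
        c = 0
        while c < len(row):
            if row[c] != 255:
                c += 1
                continue
            s = c
            while c < len(row) and row[c] == 255:
                c += 1
            e = c - 1                            # inclusive end of white run
            # previous-row runs whose columns overlap [s, e] (4-connectivity)
            touching = [lab for ps, pe, lab in prev_runs if ps <= e and pe >= s]
            if touching:
                lab = min(find(t) for t in touching)
                for t in touching:               # merge equivalent names
                    parent[find(t)] = lab
            else:
                lab = next_label                 # a brand-new region
                parent[lab] = lab
                next_label += 1
            runs.append((s, e, lab))
            all_runs.append((r, s, e, lab))
        prev_runs = runs
    return [(r, s, e, find(lab)) for r, s, e, lab in all_runs]

bw = [[0, 255, 255, 0, 255],
      [0, 0, 255, 0, 255],
      [0, 0, 0, 0, 0],
      [255, 0, 0, 0, 0]]
regions = {lab for *_, lab in label_runs(bw)}    # three separate regions
```

For 8-connectivity the overlap test would widen to ps <= e + 1 and pe >= s - 1; the example in fig. 12, where (7,7) touches run1(2,7), is already covered by the 4-connectivity overlap used here.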
Step S1040, setting the region to be an ellipse and establishing the gray model

I_e(x, y) = bgd + cst·amp(d(x, y)),

wherein

d(x, y) = ((x - cx)cosθ + (y - cy)sinθ)^2/rx^2 + (-(x - cx)sinθ + (y - cy)cosθ)^2/ry^2

represents the distance from a pixel in the region to the ellipse, I_e(x, y) is the actual gray value of the pixel in the region, rx and ry are the major and minor axes of the ellipse, cx and cy are the center coordinates of the ellipse, θ is the rotation angle of the major axis, bgd is the gray value of the background, cst is the difference between the gray value of the dot and the gray value of the background, and amp(d(x, y)) = exp(-d(x, y)^ce) ∈ (0, 1) denotes the magnification of pixels at different distances, ce being the magnification control coefficient.
Step S1050, performing gradient descent optimization on the gray model. The objective function to be minimized is

f(Φ) = Σ_{(x, y) ∈ run_i} [I_e(x, y) - bgd - cst·exp(-d(x, y)^ce)]^2,

where Φ = [cst, bgd, cx, cy, rx, ry, θ, ce]^T. Partial differentiation with respect to cst, bgd, cx, cy and θ is carried out and set equal to 0, and the resulting values of cst, bgd, cx, cy and θ serve as the basis for calculating the values of cst, bgd, cx, cy, θ, rx, ry and ce.
In step S1060, partial differentiation is performed on rx, ry, and ce to obtain gradient descent directions, which are used as the basis for calculating the values of rx, ry, and ce.
As shown in fig. 13, the optimization model is defined first, and the values of the parameters cst, bgd, cx, cy and θ are obtained. The parameters to be determined in the gray estimation model are the ellipse center position (cx, cy), the ellipse major and minor axes (rx, ry), the ellipse rotation angle θ, the gray-model shape control ce, the local gray value difference cst and the background gray value bgd. The objective function to be optimized is

f(Φ) = Σ_{(x, y) ∈ run_i} [I_e(x, y) - bgd - cst·exp(-d(x, y)^ce)]^2,

where Φ = [cst, bgd, cx, cy, rx, ry, θ, ce]^T. The Levenberg-Marquardt method is used for the gradient descent optimization, and the variables are updated with a line search:

Φ ← Φ - c·∇f(Φ).

The parameters cst, bgd, cx, cy and θ are partially differentiated and set equal to 0 to obtain their values, and the parameters rx, ry and ce are partially differentiated to obtain the gradient descent direction. [The individual partial-derivative formulas ∂f/∂cst, ∂f/∂bgd, ∂f/∂cx, ∂f/∂cy, ∂f/∂θ, ∂f/∂rx, ∂f/∂ry and ∂f/∂ce appear only as equation images in the original.]
Step S1070, setting the gradient descent step length c, halving the step length at each iteration, and repeating the iterations until the difference between two successive iterations is smaller than a preset threshold; the values of rx, ry and ce that minimize the difference between the two iterations are taken.
In this embodiment, as shown in fig. 14, because the major and minor axis parameters (rx, ry) and the gray-model shape control parameter ce must be non-negative, a gradient descent step length c is set and halved at each iteration, i.e. c = c/2, with initial value c = 1. The iteration step Φ ← Φ - c·∇f(Φ) is repeated until the difference between two successive iterations is smaller than a threshold th_cos; the values of rx, ry and ce that minimize the difference between the two iterations are then the final values of rx, ry and ce. Here the error threshold is th_cos = 0.000001.
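The step-halving descent loop of this step can be sketched on a toy least-squares objective (the finite-difference gradient, the function names and the toy objective are illustrative stand-ins for the analytic gradient of f(Φ)):

```python
def numeric_grad(f, phi, h=1e-6):
    # central-difference approximation of the gradient of f at phi
    g = []
    for i in range(len(phi)):
        up, dn = phi[:], phi[:]
        up[i] += h
        dn[i] -= h
        g.append((f(up) - f(dn)) / (2 * h))
    return g

def descend(f, phi, c=1.0, th_cos=1e-6, max_iter=100):
    # phi <- phi - c * grad f(phi); halve c each iteration; stop once the
    # change between two successive iterations falls below th_cos
    for _ in range(max_iter):
        g = numeric_grad(f, phi)
        new = [p - c * gi for p, gi in zip(phi, g)]
        diff = max(abs(a - b) for a, b in zip(new, phi))
        phi = new
        c /= 2.0
        if diff < th_cos:
            break
    return phi

# Toy least-squares objective standing in for f(Phi); minimum at (2, -1).
f = lambda p: (p[0] - 2.0) ** 2 + (p[1] + 1.0) ** 2
phi = descend(f, [0.0, 0.0])
```

Because the step is halved every iteration, the total movement is bounded, which is why the patent restarts from the binarization-based initial estimate rather than from an arbitrary point.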
Step S1080, calibrating the dots according to the values of cst, bgd, cx, cy, θ, rx, ry and ce.
In this embodiment, as shown in fig. 13, the dots on the calibration plate can be accurately identified using a recognition model based on gray differences. Unlike the traditional method of first screening edge points and then fitting them, this method first divides the pixels into connected domains according to their gray values and then detects each connected domain with the gray model, detecting the dots on the calibration plate more accurately.
As shown in fig. 15, a terminal is provided in one embodiment of the invention and includes a processor 1510, a memory 1520, a communication bus 1530; the communication bus 1530 is used to enable connective communication between the processor 1510 and the memory 1520; the processor 1510 is configured to execute programs stored in the memory 1520 to implement the following steps:
shooting an image, wherein the image is provided with a dot and a background, and the gray value of pixels in the dot is higher than that of pixels in the background.
The technical scheme of the embodiment is suitable for calibrating the camera and is also suitable for other detection fields. In the present embodiment, the camera calibration is taken as an example for explanation, and as shown in fig. 2, the captured image includes calibration dots and a background.
Detecting the gray value of a pixel on a shot image, setting the gray value of the pixel to be 0 when the gray value of the pixel does not exceed a preset threshold, and setting the gray value of the pixel to be 255 when the gray value of the pixel exceeds the preset threshold to obtain a black-and-white binary image.
As shown in fig. 3, the captured image is binarized, and the gray value of each pixel of the picture is set to 0 or 255. If the gray value gra_scale_i of the current pixel px_i is less than the threshold th, the gray value of the current pixel is set to 0, i.e. gra_scale_i = 0; otherwise, if the current gray value is larger than the threshold th, gra_scale_i = 255.
Dividing the mutually connected pixels with gray value 255 in the black-and-white binary image into the same region. In this embodiment, as shown in fig. 4, when the image is partitioned, the mutually connected white pixels (gra_scale_i = 255) are grouped into one region run_i; the entire image is thus divided into i regions, forming the regions shown in fig. 5.
Setting the region to be an ellipse and establishing the gray model

I_e(x, y) = bgd + cst·amp(d(x, y)),

wherein

d(x, y) = ((x - cx)cosθ + (y - cy)sinθ)^2/rx^2 + (-(x - cx)sinθ + (y - cy)cosθ)^2/ry^2

represents the distance from a pixel in the region to the ellipse, I_e(x, y) is the actual gray value of the pixel in the region, rx and ry are the major and minor axes of the ellipse, cx and cy are the center coordinates of the ellipse, θ is the rotation angle of the major axis, bgd is the gray value of the background, cst is the difference between the gray value of the dot and the gray value of the background, and amp(d(x, y)) = exp(-d(x, y)^ce) ∈ (0, 1) denotes the magnification of pixels at different distances, ce being the magnification control coefficient.
In this embodiment, the distance d(x, y) from a point to the ellipse is defined first. The ellipse E with major axis rx, minor axis ry, center (cx, cy) and rotation angle θ of the major axis satisfies the equation

((x - cx)cosθ + (y - cy)sinθ)^2/rx^2 + (-(x - cx)sinθ + (y - cy)cosθ)^2/ry^2 = 1.

As shown in fig. 6, the distance d(x, y) from a point p(x, y) in region run_i to the ellipse E is defined as

d(x, y) = ((x - cx)cosθ + (y - cy)sinθ)^2/rx^2 + (-(x - cx)sinθ + (y - cy)cosθ)^2/ry^2.

Points on the ellipse E satisfy d(x, y) = 1, points inside the ellipse satisfy d(x, y) < 1, and points outside the ellipse satisfy d(x, y) > 1.
The gray model is defined next. Assuming the calibration-plate background is black and the target dots are white, let bgd be the background gray value and cst the gray value difference between the target dot and the background. The gray model defined at the target boundary is

I_e(x, y) = bgd + cst·amp(d(x, y)).

The left side I_e(x, y) of the model equation is the actual gray value of a pixel in region run_i; the right side is the estimated gray value, a function whose coefficients are the background gray value bgd, the gray value difference cst, the ellipse center (cx, cy), the major and minor axes (rx, ry), the rotation angle θ and the magnification control coefficient ce.

d(x, y) = 1 is the target boundary, and amp(d(x, y)) = exp(-d(x, y)^ce) ∈ (0, 1) is the magnification of pixels at different distances; ce is a parameter that controls the shape of amp(d(x, y)), and a large control coefficient ce makes the magnification function steeper near the target boundary d(x, y) = 1. The model estimates the gray value of a white target as the background value plus an appropriately scaled contrast, with the scale factor controlled by the distance d(x, y) from the current point to the target boundary (the ellipse): for d(x, y) > 1 the scale factor drops rapidly to 0 (the estimated gray value tends to the background value), and for d(x, y) < 1 it rises rapidly to 1 (the estimated gray value tends to bgd + cst).
Substituting the pixels of the region into the gray model to calculate the values of cst, bgd, cx, cy, θ, rx, ry and ce.
Calibrating the dots according to the values of cst, bgd, cx, cy, θ, rx, ry and ce, as shown in fig. 9.
The technical scheme of this embodiment uses a recognition model based on gray differences, so the dots on the calibration plate can be accurately identified. Unlike the traditional method of first screening edge points and then fitting them, this method first divides the pixels into connected domains according to their gray values and then detects each connected domain with the gray model, detecting the dots on the calibration plate more accurately.
As shown in fig. 15, a terminal is provided in one embodiment of the invention and includes a processor 1510, a memory 1520, a communication bus 1530; the communication bus 1530 is used to enable connective communication between the processor 1510 and the memory 1520; the processor 1510 is configured to execute programs stored in the memory 1520 to implement the following steps:
shooting an image, wherein the image is provided with a dot and a background, and the gray value of pixels in the dot is higher than that of pixels in the background.
Detecting the gray value of a pixel on a shot image, setting the gray value of the pixel to be 0 when the gray value of the pixel does not exceed a preset threshold, and setting the gray value of the pixel to be 255 when the gray value of the pixel exceeds the preset threshold to obtain a black-and-white binary image.
Scanning the pixels of the black-and-white binary image row by row, dividing the consecutive pixels with gray value 255 in each row into one run, and dividing runs of gray value 255 that are connected between two adjacent rows into the same region.
As shown in fig. 11 and 12, when searching for connected pixels, the pixels of the first row are scanned first, and the start point and end point of each run of consecutive white pixels are recorded, e.g. the first run of the first row run1(2,7) and the second run run2(10,11). The second row is searched next: if a run in the second row is connected with a run of the first row, e.g. (7,7) of the second row is connected with run1(2,7) of the first row, the name run1 is assigned to (7,7); similarly, run2 is assigned to (10,10) of the second row. The run (7,8) of the third row is connected with both runs of the previous row, so the smaller name run1 is assigned to it and run2 is marked as equivalent to run1. A run of the fourth row that is not connected with any run of the previous row is given a new name, run3. The white pixels of the whole image are scanned and divided in the same way, yielding the runs run_i on the image and the pixel coordinates within each run.
Setting the region to be an ellipse and establishing the gray model

I_e(x, y) = bgd + cst·amp(d(x, y)),

wherein

d(x, y) = ((x - cx)cosθ + (y - cy)sinθ)^2/rx^2 + (-(x - cx)sinθ + (y - cy)cosθ)^2/ry^2

represents the distance from a pixel in the region to the ellipse, I_e(x, y) is the actual gray value of the pixel in the region, rx and ry are the major and minor axes of the ellipse, cx and cy are the center coordinates of the ellipse, θ is the rotation angle of the major axis, bgd is the gray value of the background, cst is the difference between the gray value of the dot and the gray value of the background, and amp(d(x, y)) = exp(-d(x, y)^ce) ∈ (0, 1) denotes the magnification of pixels at different distances, ce being the magnification control coefficient.
Performing gradient descent optimization on the gray scale model to obtain
F(Φ) = Σ_{(x, y) ∈ run_i} [Ie(x, y) - bgd - cst · amp(d(x, y))]²
Setting the residual
r(x, y) = Ie(x, y) - bgd - cst · amp(d(x, y)),
partial differentiation is carried out on cst, bgd, cx, cy and θ and set equal to 0; the values of cst, bgd, cx, cy and θ thus obtained are used as the basis for calculating the values of rx, ry and ce.
Partial differentiation is carried out on rx, ry and ce to obtain the gradient descent direction, which is used as the basis for calculating the values of rx, ry and ce.
As shown in fig. 13, an optimization model is first defined, and the values of the parameters cst, bgd, cx, cy and θ are obtained. The parameters to be determined in the current gray estimation model comprise the ellipse center position (cx, cy), the ellipse major and minor axes (rx, ry), the ellipse rotation angle θ, the gray model shape control coefficient ce, the local gray value difference cst and the background gray value bgd. The objective function to be optimized is
F(Φ) = Σ_{(x, y) ∈ run_i} [Ie(x, y) - bgd - cst · amp(d(x, y))]²
where Φ = [cst, bgd, cx, cy, rx, ry, θ, ce]^T. The Levenberg-Marquardt method is used for gradient descent optimization:
Φ ← Φ - (J^T·J + λI)^(-1)·J^T·r
where J is the Jacobian matrix of the residuals with respect to Φ and λ is the damping factor.
The variables are updated using a line search:
Φ ← Φ - c·∇F(Φ)
c ← c/2
Gradient calculation:
∇F(Φ) = [∂F/∂cst, ∂F/∂bgd, ∂F/∂cx, ∂F/∂cy, ∂F/∂θ, ∂F/∂rx, ∂F/∂ry, ∂F/∂ce]^T
Setting the residual
r(x, y) = Ie(x, y) - bgd - cst · amp(d(x, y)),
the partial derivatives with respect to cst, bgd, cx, cy and θ are set equal to 0 to obtain the values of those parameters, and the partial derivatives with respect to rx, ry and ce give the gradient descent direction.
(The partial derivatives of F with respect to cst, bgd, cx, cy, θ, rx, ry and ce are given as formula images in the original filing and are not reproduced here.)
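Although the original derivative formulas are not reproduced above, the idea of setting partial derivatives to zero can be illustrated for cst and bgd: with d(x, y) held fixed, the model is linear in these two parameters, so ∂F/∂cst = ∂F/∂bgd = 0 yields a closed-form least-squares solution. This is an illustrative sketch under that assumption, not the patent's exact derivation; the function name is mine:

```python
def solve_cst_bgd(gray_values, amp_values):
    """Least-squares solution for cst and bgd in I = bgd + cst * amp,
    obtained by setting dF/dcst = 0 and dF/dbgd = 0 (the normal equations).

    gray_values: observed gray values Ie(x, y) of the region's pixels.
    amp_values:  amp(d(x, y)) evaluated at the same pixels.
    """
    n = len(gray_values)
    sa = sum(amp_values)
    saa = sum(a * a for a in amp_values)
    sg = sum(gray_values)
    sga = sum(g * a for g, a in zip(gray_values, amp_values))
    det = n * saa - sa * sa  # Gram determinant of the normal equations
    cst = (n * sga - sa * sg) / det
    bgd = (saa * sg - sa * sga) / det
    return cst, bgd
```

Given synthetic data generated with known cst and bgd, the solver recovers them exactly.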
The step length c of gradient descent is set and halved at each iteration; the iteration is repeated until the difference between two successive iterations is smaller than a preset threshold, and the values of rx, ry and ce that minimize the difference between the two iterations are taken.
In this embodiment, as shown in fig. 14, because the major and minor axis parameters (rx, ry) and the gray model shape control coefficient ce are non-negative, the step length c of gradient descent is set and halved at each iteration, i.e. c = c/2, with the initial step length c = 1. The iteration step Φ ← Φ - c·∇f(Φ) is repeated until the difference between two successive iterations is smaller than a threshold th_cos; the values of rx, ry and ce at which the difference between the two iterations is minimal are taken as their final values. Here the error threshold is th_cos = 0.000001.
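The step-halving iteration Φ ← Φ - c·∇f(Φ) described in this embodiment can be sketched on a single scalar parameter for clarity; the function name and the toy objective are mine:

```python
def halving_step_descent(grad, phi0, c=1.0, th_cos=1e-6, max_iter=1000):
    """Iterate phi <- phi - c * grad(phi) on a scalar parameter,
    halving the step length every iteration (c <- c / 2), until two
    successive iterates differ by less than th_cos."""
    phi = phi0
    for _ in range(max_iter):
        new_phi = phi - c * grad(phi)
        c /= 2.0  # halve the step length each iteration, as in the embodiment
        if abs(new_phi - phi) < th_cos:
            return new_phi
        phi = new_phi
    return phi
```

For the toy objective f(x) = (x - 3)², with gradient 2(x - 3) and initial step c = 1, the iterates reach the minimizer x = 3 in a few steps.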
And calibrating the dots according to the values of cst, bgd, cx, cy, theta, rx, ry and ce.
In this embodiment, as shown in fig. 13, the dots on the calibration plate can be accurately identified using the recognition model based on gray differences. Unlike the traditional approach of first screening edge points and then fitting them, the method first divides pixel connected domains according to gray values and then detects each connected domain with the gray model, so the dots on the calibration plate can be detected more accurately.
One embodiment of the present invention provides a computer readable storage medium storing one or more programs, the one or more programs being executable by one or more processors to perform the steps of:
shooting an image, wherein the image is provided with a dot and a background, and the gray value of pixels in the dot is higher than that of pixels in the background.
The technical scheme of the embodiment is suitable for calibrating the camera and is also suitable for other detection fields. In the present embodiment, the camera calibration is taken as an example for explanation, and as shown in fig. 2, the captured image includes calibration dots and a background.
Detecting the gray value of a pixel on a shot image, setting the gray value of the pixel to be 0 when the gray value of the pixel does not exceed a preset threshold, and setting the gray value of the pixel to be 255 when the gray value of the pixel exceeds the preset threshold to obtain a black-and-white binary image.
As shown in fig. 3, the captured image is binarized, setting the gray value of each pixel to 0 or 255. If the gray value gra_scale_i of the current pixel px_i is less than the threshold th, the gray value of the current pixel is set to 0, i.e. gra_scale_i = 0; otherwise, if the current gray value is greater than the threshold th, gra_scale_i = 255.
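A minimal sketch of this binarization step; the handling of a gray value exactly equal to th is an assumption, since the text only specifies the strict less-than and greater-than cases:

```python
def binarize(gray_image, th):
    """Produce a black-and-white binary image: pixels with a gray value
    below the threshold th become 0, the rest become 255.
    (Mapping gray == th to 255 is an assumption, not from the text.)"""
    return [[0 if gra_scale < th else 255 for gra_scale in row]
            for row in gray_image]
```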
Pixels with a gray value of 255 that are connected to each other in the black-and-white binary image are divided into the same region. In this embodiment, as shown in fig. 4, when the image is partitioned, connected white pixels (gra_scale_i = 255) are divided into one region run_i; the whole image is divided into i regions, forming the regions shown in fig. 5.
Setting the region to be elliptical and establishing a gray model
Ie(x, y) = bgd + cst · amp(d(x, y))
wherein
d(x, y) = ((x - cx)·cosθ + (y - cy)·sinθ)²/rx² + ((y - cy)·cosθ - (x - cx)·sinθ)²/ry²
represents the distance of the pixels in the area to the ellipse, Ie(x, y) is the actual gray value of the pixels in the area, rx and ry are the major and minor axes of the ellipse, cx and cy represent the center coordinates of the ellipse, θ is the rotation angle of the major axis, bgd is the gray value of the background, cst is the difference between the gray value of the dots and the gray value of the background, amp(d(x, y)) = exp(-d(x, y)^ce) ∈ (0, 1) denotes the magnification of pixels at different distances, and ce is the magnification coefficient.
In this embodiment, the distance d(x, y) from a point to the ellipse needs to be defined first. The equation of a known ellipse E is
((x - cx)·cosθ + (y - cy)·sinθ)²/rx² + ((y - cy)·cosθ - (x - cx)·sinθ)²/ry² = 1
wherein rx and ry are the major and minor axes of the ellipse and θ is the rotation angle of the major axis, as shown in fig. 6. The distance d(x, y) from a point p(x, y) in region run_i to the ellipse E is defined as
d(x, y) = ((x - cx)·cosθ + (y - cy)·sinθ)²/rx² + ((y - cy)·cosθ - (x - cx)·sinθ)²/ry²
Points on the ellipse E satisfy d(x, y) = 1, points inside the ellipse satisfy d(x, y) < 1, and points outside the ellipse satisfy d(x, y) > 1.
The gray model is then defined. Assuming the calibration plate background is black and the target dots are white, the background gray value is bgd and the gray value difference between the target dots and the background is cst, the gray model defined at the target boundary is
Ie(x, y) = bgd + cst · amp(d(x, y))
The left side Ie(x, y) can be regarded as the actual gray value of the pixels in region run_i; the right side of the equation is the estimated gray value as a function of the background gray value bgd, the gray value difference cst, the ellipse center (cx, cy), the major and minor axes (rx, ry), the rotation angle θ, and the magnification control coefficient ce.
d(x, y) = 1 is the target boundary, amp(d(x, y)) = exp(-d(x, y)^ce) ∈ (0, 1) is the magnification of pixels at different distances, and ce is a parameter that controls the shape of amp(d(x, y)); a larger control coefficient ce makes the magnification function steeper near the target boundary (d(x, y) = 1). This is a gray estimation model for white targets: the gray value is estimated as the background value plus an appropriate scaling of the contrast, with the scale factor controlled by the distance d(x, y) from the current point to the target boundary (the ellipse). When d(x, y) > 1 the scale factor rapidly drops to 0 (the estimated gray value tends to the background value bgd), and when d(x, y) < 1 the scale factor rapidly rises to 1 (the estimated gray value tends to bgd + cst).
And substituting the pixels of the area into the gray model to calculate the values of cst, bgd, cx, cy, theta, rx, ry and ce.
The dots are marked according to the values of cst, bgd, cx, cy, θ, rx, ry, ce, as shown in fig. 9.
By adopting the technical scheme of this embodiment and the recognition model based on gray differences, the dots on the calibration plate can be accurately identified. Unlike the traditional approach of first screening edge points and then fitting them, the method first divides pixel connected domains according to gray values and then detects each connected domain with the gray model, so the dots on the calibration plate can be detected more accurately.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better embodiment. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (10)

1. A dot detection method, comprising:
shooting an image, wherein the image is provided with a dot and a background, and the gray value of pixels in the dot is higher than that of pixels in the background;
detecting the gray value of a pixel on a shot image, setting the gray value of the pixel to be 0 when the gray value of the pixel does not exceed a preset threshold, and setting the gray value of the pixel to be 255 when the gray value of the pixel exceeds the preset threshold to obtain a black-and-white binary image;
dividing pixels with the gray value of 255, which are mutually communicated in the black-white binary image, into the same region;
setting the region to be elliptical and establishing a gray model
Ie(x, y) = bgd + cst · amp(d(x, y))
wherein
d(x, y) = ((x - cx)·cosθ + (y - cy)·sinθ)²/rx² + ((y - cy)·cosθ - (x - cx)·sinθ)²/ry²
represents the distance of the pixels in said area to said ellipse, Ie(x, y) is the actual gray value of the pixels in the area, rx and ry are the major and minor axes of the ellipse, cx and cy represent the center coordinates of the ellipse, θ is the rotation angle of the major axis, bgd is the gray value of the background, cst is the difference between the gray values of the dots and the background, amp(d(x, y)) = exp(-d(x, y)^ce) ∈ (0, 1) denotes the magnification of pixels at different distances, and ce is the magnification coefficient;
substituting the pixels of the area into the gray model, and calculating the values of cst, bgd, cx, cy, theta, rx, ry and ce;
and calibrating the dots according to the values of cst, bgd, cx, cy, theta, rx, ry and ce.
2. The method according to claim 1, wherein the dividing pixels with a connected gray value of 255 in the black-and-white binary image into the same region comprises:
and scanning pixels on the black-white binary image line by line, dividing pixels with 255 continuous gray values of each line into a uniform area, and dividing pixels with 255 continuous gray values of two adjacent lines into the same area.
3. The method of claim 1, wherein substituting pixels of the region into the gray model to calculate values of rx, ry, ce, cst, bgd, cx, cy, θ comprises:
performing gradient descent optimization on the gray scale model to obtain
F(Φ) = Σ_{(x, y) ∈ run_i} [Ie(x, y) - bgd - cst · amp(d(x, y))]², where Φ = [cst, bgd, cx, cy, rx, ry, θ, ce]^T;
setting the residual
r(x, y) = Ie(x, y) - bgd - cst · amp(d(x, y)),
Partial differentiation is carried out on cst, bgd, cx, cy and theta to make the partial differentiation equal to 0, and the values of cst, bgd, cx, cy and theta are obtained and used as the basis for calculating the values of rx, ry and ce.
4. The method of claim 3, wherein the substituting the pixels of the region into the gray model to calculate the values of rx, ry, ce further comprises:
and partial differentiation is carried out on rx, ry and ce to obtain the gradient descending direction, and the gradient descending direction is used as a basis for calculating the values of rx, ry and ce.
5. The method of claim 4, wherein the substituting the pixels of the region into the gray model to calculate the values of rx, ry, ce further comprises:
setting the step length c of gradient reduction, reducing half step length for each iteration, repeating the iteration until the difference between the two iterations is smaller than a preset threshold value, and taking the value of rx, ry and ce which enables the difference between the two iterations to be minimum.
6. A terminal, characterized in that the terminal comprises a processor, a memory, a communication bus; the communication bus is used for realizing connection communication between the processor and the memory; the processor is configured to execute a program stored in the memory to perform the steps of:
shooting an image, wherein the image is provided with a dot and a background, and the gray value of pixels in the dot is higher than that of pixels in the background;
detecting the gray value of a pixel on a shot image, setting the gray value of the pixel to be 0 when the gray value of the pixel does not exceed a preset threshold, and setting the gray value of the pixel to be 255 when the gray value of the pixel exceeds the preset threshold to obtain a black-and-white binary image;
dividing pixels with the gray value of 255, which are mutually communicated in the black-white binary image, into the same region;
setting the region to be elliptical and establishing a gray model
Ie(x, y) = bgd + cst · amp(d(x, y))
wherein
d(x, y) = ((x - cx)·cosθ + (y - cy)·sinθ)²/rx² + ((y - cy)·cosθ - (x - cx)·sinθ)²/ry²
represents the distance of the pixels in said area to said ellipse, Ie(x, y) is the actual gray value of the pixels in the area, rx and ry are the major and minor axes of the ellipse, cx and cy represent the center coordinates of the ellipse, θ is the rotation angle of the major axis, bgd is the gray value of the background, cst is the difference between the gray values of the dots and the background, amp(d(x, y)) = exp(-d(x, y)^ce) ∈ (0, 1) denotes the magnification of pixels at different distances, and ce is the magnification coefficient;
substituting the pixels of the area into the gray model, and calculating the values of cst, bgd, cx, cy, theta, rx, ry and ce;
and calibrating the dots according to the values of cst, bgd, cx, cy, theta, rx, ry and ce.
7. The terminal of claim 6, wherein the processor is further configured to execute a program stored in the memory to perform the steps of:
and scanning pixels on the black-white binary image line by line, dividing pixels with 255 continuous gray values of each line into a uniform area, and dividing pixels with 255 continuous gray values of two adjacent lines into the same area.
8. The terminal of claim 6, wherein the processor is further configured to execute a program stored in the memory to perform the steps of:
performing gradient descent optimization on the gray scale model to obtain
F(Φ) = Σ_{(x, y) ∈ run_i} [Ie(x, y) - bgd - cst · amp(d(x, y))]², where Φ = [cst, bgd, cx, cy, rx, ry, θ, ce]^T;
setting the residual
r(x, y) = Ie(x, y) - bgd - cst · amp(d(x, y)),
Partial differentiation is carried out on cst, bgd, cx, cy and theta to make the partial differentiation equal to 0, and the values of cst, bgd, cx, cy and theta are obtained and used as the basis for calculating the values of rx, ry and ce.
9. The terminal of claim 8, wherein the processor is further configured to execute a program stored in the memory to perform the steps of:
and partial differentiation is carried out on rx, ry and ce to obtain the gradient descending direction, and the gradient descending direction is used as a basis for calculating the values of rx, ry and ce.
10. A computer readable storage medium, characterized in that the computer readable storage medium stores one or more programs which are executable by one or more processors to implement the steps of the method of any one of claims 1 to 5.
CN202010354854.5A 2020-04-29 2020-04-29 Dot detection method, terminal and computer readable storage medium Active CN111627069B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010354854.5A CN111627069B (en) 2020-04-29 2020-04-29 Dot detection method, terminal and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010354854.5A CN111627069B (en) 2020-04-29 2020-04-29 Dot detection method, terminal and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN111627069A true CN111627069A (en) 2020-09-04
CN111627069B CN111627069B (en) 2023-05-05

Family

ID=72271669

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010354854.5A Active CN111627069B (en) 2020-04-29 2020-04-29 Dot detection method, terminal and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN111627069B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112102355A (en) * 2020-09-25 2020-12-18 江苏瑞尔医疗科技有限公司 Low-contrast resolution identification method, equipment, storage medium and system for flat panel detector
CN113822950A (en) * 2021-11-22 2021-12-21 天远三维(天津)科技有限公司 Calibration point distribution determination method, device, equipment and storage medium of calibration plate

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107516325A (en) * 2017-08-22 2017-12-26 上海理工大学 Center of circle detection method based on sub-pixel edge
US20190347824A1 (en) * 2018-05-14 2019-11-14 Beijing Boe Optoelectronics Technology Co., Ltd. Method and apparatus for positioning pupil, storage medium, electronic device
CN110634146A (en) * 2019-08-30 2019-12-31 广东奥普特科技股份有限公司 Circle center sub-pixel precision positioning method

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107516325A (en) * 2017-08-22 2017-12-26 上海理工大学 Center of circle detection method based on sub-pixel edge
US20190347824A1 (en) * 2018-05-14 2019-11-14 Beijing Boe Optoelectronics Technology Co., Ltd. Method and apparatus for positioning pupil, storage medium, electronic device
CN110634146A (en) * 2019-08-30 2019-12-31 广东奥普特科技股份有限公司 Circle center sub-pixel precision positioning method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
XIE Chao; XIE Minghong: "Detecting circular marker points using a locally adaptive threshold method" *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112102355A (en) * 2020-09-25 2020-12-18 江苏瑞尔医疗科技有限公司 Low-contrast resolution identification method, equipment, storage medium and system for flat panel detector
CN113822950A (en) * 2021-11-22 2021-12-21 天远三维(天津)科技有限公司 Calibration point distribution determination method, device, equipment and storage medium of calibration plate
CN113822950B (en) * 2021-11-22 2022-02-25 天远三维(天津)科技有限公司 Calibration point distribution determination method, device, equipment and storage medium of calibration plate

Also Published As

Publication number Publication date
CN111627069B (en) 2023-05-05

Similar Documents

Publication Publication Date Title
CN108108746B (en) License plate character recognition method based on Caffe deep learning framework
US8391602B2 (en) Character recognition
CN110866871A (en) Text image correction method and device, computer equipment and storage medium
CN110400278B (en) Full-automatic correction method, device and equipment for image color and geometric distortion
CN114926839B (en) Image identification method based on RPA and AI and electronic equipment
CN109738450B (en) Method and device for detecting notebook keyboard
CN111627069A (en) Dot detection method, terminal and computer-readable storage medium
CN115018846B (en) AI intelligent camera-based multi-target crack defect detection method and device
CN111985465A (en) Text recognition method, device, equipment and storage medium
EP2782065B1 (en) Image-processing device removing encircling lines for identifying sub-regions of image
CN111444773B (en) Image-based multi-target segmentation identification method and system
CN112560847A (en) Image text region positioning method and device, storage medium and electronic equipment
CN113688846A (en) Object size recognition method, readable storage medium, and object size recognition system
JP3251840B2 (en) Image recognition device
CN109635798B (en) Information extraction method and device
CN113284158B (en) Image edge extraction method and system based on structural constraint clustering
CN114417906A (en) Method, device, equipment and storage medium for identifying microscopic image identification
CN110378922B (en) Smooth image generation method and device based on adaptive threshold segmentation algorithm
Chen et al. Hidden-Markov-model-based segmentation confidence applied to container code character extraction
CN117291926B (en) Character defect detection method, apparatus, and computer-readable storage medium
CN116468611B (en) Image stitching method, device, equipment and storage medium
CN114973292B (en) Character recognition method and system based on irregular surface and storage medium
CN112257705B (en) Method for identifying text content of picture
CN112926424B (en) Face shielding recognition method, device, readable medium and equipment
CN113160284B (en) Guidance space-consistent photovoltaic image registration method based on local similar structure constraint

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant