CN107133961B - Method for detecting and processing circular area in video image - Google Patents


Info

Publication number
CN107133961B
CN107133961B (application CN201710284255.9A)
Authority
CN
China
Prior art keywords
matrix
variance
array
grouping
circle
Prior art date
Legal status
Active
Application number
CN201710284255.9A
Other languages
Chinese (zh)
Other versions
CN107133961A (en)
Inventor
田存伟
陶承阳
王明红
安学立
闫存莹
Current Assignee
Liaocheng University
Original Assignee
Liaocheng University
Priority date
Filing date
Publication date
Application filed by Liaocheng University filed Critical Liaocheng University
Priority to CN201710284255.9A
Publication of CN107133961A
Application granted
Publication of CN107133961B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/13 Edge detection
    • G06T7/60 Analysis of geometric attributes
    • G06T7/62 Analysis of geometric attributes of area, perimeter, diameter or volume
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence


Abstract

The invention relates to a method for detecting and processing a circular area in a video image. By means of a grouping method, the circle with the largest radius among a set of concentric circles is selected as the region of interest. The largest circular area among the concentric circles contains the greatest amount of feature information, which facilitates subsequent recognition; retaining the largest valid circle among the concentric circles is therefore of practical significance when selecting the recognition region.

Description

Method for detecting and processing circular area in video image
Technical Field
The invention relates to a method for detecting and processing a circular area in a video image, belonging to the technical field of image processing.
Background
Image understanding and computer vision form an important branch of artificial intelligence with broad application prospects. Circular objects are common in everyday and industrial scenes, so detecting and accurately locating circular areas is especially important. Cameras are the main tool for real-time image acquisition, and the images they capture are continuous and correlated; jointly processing and detecting consecutive video frames acquired by a camera is therefore of great significance.
Circle detection can be applied in many fields, for example traffic sign recognition in driver-assistance devices, which helps drivers pay attention to signs beside and above the road; circular signs make up a large proportion of traffic signs. Another example is real-time video detection of circular stamps in identity documents. When detecting and recognizing circular images, the usual approach is to convert the color image to grayscale and then detect edges in the grayscale image. Careful observation shows that, whether the edge image comes from a traffic sign or a circular stamp, it frequently exhibits concentric circles: either the image itself contains concentric circles, or multiple concentric edges appear after edge detection. Since the detected circular area is generally used for subsequent recognition, handling concentric circles is an important problem in detecting circular areas in images.
In addition, in continuous video, circle detection is prone to false positives: because of accidental pixel arrangements in the scene or artifacts of the Hough transform, a region that is not a circle may be reported as one. Such erroneous circle detections in consecutive video frames also need to be excluded.
The Hough transform is a prior-art algorithm commonly used in image processing and pattern recognition. Its generalization to arbitrary curve detection is called the Generalized Hough Transform (GHT). The GHT is an effective way to detect circles, but a circle has three free parameters (its radius and the two center coordinates), so the GHT requires a huge amount of computation and a large amount of memory. The probabilistic Hough transform (PPHT) effectively overcomes these defects and can be used to detect circles in a single image, but it does not address circle detection across continuous video, the exclusion of erroneous circles, or the concentric-circle problem. Reference: Chen Xiaoyan, "Improved Hough transform circle detection method" [J]. Computer Systems and Applications, 2015, 24(8): 197 et seq.
Disclosure of Invention
In view of the deficiencies of the prior art, the present invention provides a method for detecting and processing a circular area in a video image.
The technical scheme of the invention is as follows:
A method for detecting and processing a circular region in a video image, comprising the following steps:
A1, converting the captured consecutive multi-frame video images into grayscale edge images, namely Image1(1), Image1(2), …, Image1(p);
A2, finding the circles in each grayscale edge image Image1(1), Image1(2), …, Image1(p) and outputting the X-Y plane parameters of every circle detected in each image; the X-Y plane parameters of a circle comprise the x coordinate of the circle center, the y coordinate of the circle center and the radius r of the circle; storing the X-Y plane parameters of all circles from the multiple images in a single original matrix MT0, which has 3 columns and i rows, i being the total number of detected circles; column 1 of the original matrix MT0 stores the x coordinate of each detected circle center, column 2 stores the y coordinate of each detected circle center, and column 3 stores the radius r of each detected circle;
A3, checking the number of rows i of the original matrix MT0; if i is less than m, judging the circles recorded in MT0 to be error circles, emptying MT0 and returning to step A1; if i is not less than m, performing step A4. When only a few circles are detected, it is concluded that no circle actually exists in the image and that the few detections are merely error circles.
A4, performing X grouping, Y grouping and R grouping on the original matrix MT0 in sequence to handle the concentric-circle problem, and outputting the target circle parameters.
The edge image of a circular traffic sign or a circular stamp usually contains several concentric circles; selecting the circle with the largest radius among them as the region of interest allows the region of interest to be located accurately. The steps above process p consecutive frames of images to detect the circle of the region of interest; a code sketch of steps A1-A3 is given below.
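The following Python sketch illustrates steps A1-A3 under stated assumptions: it relies on OpenCV, the constants P_FRAMES and MIN_ROWS_M are illustrative placeholders for the patent's p and m, and cv2.HoughCircles with HOUGH_GRADIENT (which applies Canny internally) is used as a stand-in for the Canny-plus-PPHT detection described here. It is a minimal sketch, not the patented implementation.
```python
import cv2
import numpy as np

# Illustrative values for the patent's p and m (assumptions, not prescribed values).
P_FRAMES, MIN_ROWS_M = 5, 10

def build_mt0(frames):
    """Steps A1-A3: accumulate the i x 3 matrix MT0 = [x, y, r] over p frames."""
    rows = []
    for frame in frames[:P_FRAMES]:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)        # A1: grayscale
        blur = cv2.medianBlur(gray, 5)                        # light smoothing
        # A2: circle detection; HOUGH_GRADIENT runs Canny internally and
        # stands in here for the Canny + PPHT combination of the patent.
        circles = cv2.HoughCircles(blur, cv2.HOUGH_GRADIENT, dp=1, minDist=20,
                                   param1=100, param2=30)
        if circles is not None:
            rows.extend(circles[0].tolist())                  # each row: (x, y, r)
    mt0 = np.array(rows, dtype=float).reshape(-1, 3)
    # A3: too few detections over the p frames means only error circles; discard.
    return None if len(mt0) < MIN_ROWS_M else mt0
```
When build_mt0 returns None, the caller would simply restart from step A1 on the next batch of frames; otherwise MT0 is passed on to the X, Y and R grouping of step A4.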
Preferably, the specific steps of the X grouping are as follows:
X1: sorting the rows of the original matrix MT0, each row treated as a whole, in ascending or descending order of the column-1 element x; the sorted matrix is denoted MT1;
let the sorted matrix MT1 have rows (x_1, y_1, r_1), (x_2, y_2, r_2), …, (x_i, y_i, r_i), ordered by the first element x;
X2: performing X grouping on the matrix MT1 to obtain the X grouping sub-matrices, the specific method being as follows:
computing in sequence the variance D_x1 of the array (x_1), the variance D_x2 of the array (x_1, x_2), the variance D_x3 of the array (x_1, x_2, x_3), …, the variance D_xk1 of the array (x_1, x_2, …, x_k1), the variance D_x(k1+1) of the array (x_2, x_3, …, x_(k1+1)), the variance D_x(k1+2) of the array (x_3, x_4, …, x_(k1+2)), …, the variance D_xn of the array (x_(n-k1), x_(n-k1+1), …, x_(n-1), x_n), stopping as soon as D_xn ≥ σ_x, where σ_x is the X grouping variance threshold; k1 is the maximum number of elements selected when computing a variance; this upper limit k1 on the number of array elements used for a variance is imposed to keep the computed variance meaningful;
X3: checking the current row number n; if n is less than LX, deleting the first n-1 rows of the matrix MT1 to obtain a matrix MT2; if n ≥ LX, extracting the first n-1 rows of the matrix MT1 as the 1st X grouping sub-matrix of MT1, denoted MTX1, and deleting those n-1 rows from MT1 to obtain the matrix MT2; LX is the X grouping minimum-row threshold: if n is less than LX, the group contains too few circles and they are regarded as error circles;
the matrix MTX1 thus consists of rows 1 to n-1 of MT1, and the matrix MT2 consists of the remaining rows n to i of MT1;
X4: repeating steps X2 and X3 on the matrix MT2 to obtain, in turn, the 2nd X grouping sub-matrix MTX2, the 3rd X grouping sub-matrix MTX3, …, and the z-th X grouping sub-matrix MTXz, until all data in the matrix MT1 have been X-grouped; a code sketch of this variance-based grouping is given after these steps.
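As a concrete illustration of steps X2-X4, the sketch below implements the variance-driven grouping of one sorted column. The helper name group_by_variance is introduced here (it is not from the patent), min_rows plays the role of the LX/LY/LR thresholds, and the loop structure is one possible reading of the steps above.
```python
import numpy as np

def group_by_variance(values, sigma, k, min_rows):
    """Split a sorted 1-D sequence into groups of near-equal values.

    From the current start row, the variance of the first 1, 2, ... elements is
    computed (the window is capped at k elements and then slid forward) until it
    reaches sigma; the rows seen before that point form one group.  Groups with
    fewer than min_rows rows are discarded as error circles.  Returns a list of
    index arrays into `values`.
    """
    values = np.asarray(values, dtype=float)
    groups, start = [], 0
    while start < len(values):
        n = start + 1                            # exclusive end of the window
        while n <= len(values):
            lo = max(start, n - k)               # cap the window at k elements
            if np.var(values[lo:n]) >= sigma:
                break
            n += 1
        end = min(n - 1, len(values))            # the crossing row opens the next group
        if end - start >= min_rows:
            groups.append(np.arange(start, end))
        start = end if end > start else start + 1
    return groups

# Example: x coordinates of circle centres coming from two distinct circles.
xs = sorted([100, 101, 99, 100, 250, 251, 249])
print(group_by_variance(xs, sigma=9.0, k=4, min_rows=3))
```
On this toy input the function returns two index groups, one per cluster of x coordinates, mirroring how step X3 splits MT1 into MTX1, MTX2 and so on.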
The specific steps of the Y grouping are as follows:
Y1: sorting the rows of the X grouping sub-matrix, each row treated as a whole, in ascending or descending order of the column-2 element y; the sorted matrix (for MTX1) is denoted MTXA1;
Y2: performing Y grouping on the matrix MTXA1, each row treated as a whole, to obtain the Y grouping sub-matrices, the specific method being as follows:
let the sorted matrix MTXA1 have rows (x_a, y_a, r_a), (x_b, y_b, r_b), (x_c, y_c, r_c), …, ordered by the second element y;
computing in sequence the variance D_y1 of the array (y_a), the variance D_y2 of the array (y_a, y_b), the variance D_y3 of the array (y_a, y_b, y_c), …, the variance D_yk2 of the array (y_a, y_b, …, y_k2), the variance D_y(k2+1) of the array (y_b, y_c, …, y_(k2+1)), the variance D_y(k2+2) of the array (y_c, y_d, …, y_(k2+2)), …, the variance D_yn of the array (y_(n-k2), y_(n-k2+1), …, y_(n-1), y_n), stopping the variance computation as soon as D_yn ≥ σ_y, where σ_y is the Y grouping variance threshold; k2 is the maximum number of elements selected when computing a variance; this upper limit k2 on the number of array elements used for a variance is imposed to keep the computed variance meaningful;
Y3: checking the current row number n; if n is less than LY, deleting the first n-1 rows of the matrix MTXA1 to generate a matrix MTXB1; if n ≥ LY, extracting the first n-1 rows of the matrix MTXA1 as the 1st Y grouping sub-matrix of MTXA1, denoted MTX1Y1, and deleting those n-1 rows from MTXA1 to obtain the matrix MTXB1; wherein LY is the Y grouping minimum-row threshold;
the 1st Y grouping sub-matrix MTX1Y1 generated from the matrix MTXA1 thus consists of rows 1 to n-1 of MTXA1;
Y4: repeating steps Y2 and Y3 on the matrix MTXB1 to obtain, in turn, the 2nd Y grouping sub-matrix MTX1Y2, the 3rd Y grouping sub-matrix MTX1Y3, …, and the w-th Y grouping sub-matrix MTX1Yw, until all data in the matrix MTXA1 have been Y-grouped;
Y5: performing steps Y1-Y4 in turn on the other sub-matrices MTX2, MTX3, …, MTXz obtained from the X grouping, finally yielding the sub-matrices grouped by both X and Y: MTX1Y1, MTX1Y2, …, MTX1Yw; MTX2Y1, MTX2Y2, …, MTX2Yu; …; MTXzY1, MTXzY2, …, MTXzYv.
After the X grouping and Y grouping, the circle data in each sub-matrix all describe concentric circles: the radii may differ, but the circle centers coincide or lie close together. The purpose of the subsequent R grouping is to select, from the concentric circles, the circles with the larger and mutually similar radii and to average their radii, the average being taken as the target circle for extraction; a sketch combining the X and Y grouping is given below.
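Under the same assumptions, and reusing the group_by_variance helper from the previous sketch, the combined X and Y grouping described above could look like this; sigma_x, sigma_y, lx and ly stand for σ_x, σ_y, LX and LY, and their values are left to the implementer.
```python
import numpy as np

def xy_group(mt0, sigma_x, sigma_y, lx, ly, k=4):
    """X grouping then Y grouping: returns the MTXiYj sub-matrices, each
    holding candidate concentric circles that share (approximately) one centre.
    Assumes group_by_variance from the previous sketch is in scope."""
    sub_matrices = []
    mt1 = mt0[np.argsort(mt0[:, 0])]                         # X1: sort rows by x
    for ix in group_by_variance(mt1[:, 0], sigma_x, k, lx):  # X2-X4
        mtx = mt1[ix]
        mtxa = mtx[np.argsort(mtx[:, 1])]                    # Y1: sort rows by y
        for iy in group_by_variance(mtxa[:, 1], sigma_y, k, ly):  # Y2-Y4
            sub_matrices.append(mtxa[iy])                    # one MTXiYj sub-matrix
    return sub_matrices
```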
The specific steps of R grouping are as follows:
R1: sorting the 1st Y grouping sub-matrix obtained from the Y grouping in descending order of the column-3 element r; the sorted matrix is denoted MTX1Y1R;
R2: treating each row of the matrix MTX1Y1R as a whole, grouping the rows by the size of the column-3 element r, keeping only the first R group, i.e. the group with the largest r values, and deleting the remaining data, which represent the smaller-radius circles among the concentric circles; the specific method is as follows:
let the column-3 elements of the sorted matrix MTX1Y1R, from top to bottom, be r_13, r_23, r_33, …;
computing in sequence the variance D_r1 of the array (r_13), the variance D_r2 of the array (r_13, r_23), the variance D_r3 of the array (r_13, r_23, r_33), …, the variance D_rk3 of the array (r_13, r_23, …, r_(k3)3), the variance D_r(k3+1) of the array (r_23, r_33, …, r_(k3+1)3), …, the variance D_rn of the array (r_(n-k3), r_(n-k3+1), …, r_(n-1), r_n), stopping the variance computation as soon as D_rn ≥ σ_r, where σ_r is the R grouping variance threshold; k3 is the maximum number of elements selected when computing a variance; this upper limit k3 on the number of array elements used for a variance is imposed to keep the computed variance meaningful;
R3: checking the current row number n; if n is less than LR, deleting the first n-1 rows of the matrix MTX1Y1R to generate a matrix MTX1Y1RB and returning MTX1Y1RB to step R2 in place of MTX1Y1R for extraction again; if n ≥ LR, extracting the first n-1 rows of the matrix MTX1Y1R as the 1st R grouping sub-matrix of MTX1Y1R, denoted MTX1Y1R1; the other data in the matrix MTX1Y1R are not processed further; wherein LR is the R grouping minimum-row threshold;
when n ≥ LR, the 1st R grouping sub-matrix MTX1Y1R1 generated from the matrix MTX1Y1R thus consists of rows 1 to n-1 of MTX1Y1R;
R4: averaging each column of the matrix MTX1Y1R1 to obtain the parameters (avr_x, avr_y, avr_r) of the corresponding target circle;
R5: performing steps R1-R4 on each of the remaining sub-matrices obtained from the Y grouping, namely MTX1Y2, …, MTX1Yw; MTX2Y1, MTX2Y2, …, MTX2Yu; …; MTXzY1, MTXzY2, …, MTXzYv, so that the parameters of every target circle are determined; a sketch of this R grouping is given below.
More preferably, k1 = k2 = k3 = 4.
According to a preferred embodiment of the invention, in step A1 the captured consecutive multi-frame images are converted into the grayscale edge images Image1 by detecting the image edges with a Canny edge detection operator.
Preferably, in step A2, a PPHT algorithm is used to find the circles in each of the grayscale edge images Image1(1), Image1(2), …, Image1(p).
According to a preferred embodiment of the present invention, p is 5.
When circles are detected in an image with the PPHT transform, false detections easily occur (other regions of the image are accidentally detected as circles), and the detected target circle region readily contains concentric circles (centers that are the same or close, with different radii). The X, Y and R grouping therefore not only resolves the concentric-circle problem but also excludes the error circles: a circle detected many times across the consecutive frames is regarded as a valid target circle, while a circle detected only a few times is regarded as an error and eliminated.
The invention has the beneficial effects that:
1. The method for detecting and processing circles in a video image selects, by a grouping method, the circle with the largest radius among a set of concentric circles as the region of interest; the largest circular area among the concentric circles contains the most feature information, which facilitates subsequent recognition, so retaining the largest valid circle among the concentric circles is of practical significance when selecting the recognition region;
2. By means of the grouping processing, the method for detecting and processing circles in a video image effectively avoids false detections caused by accidental pixel arrangements and noise after the Hough transform, and effectively solves the problem of selecting the valid region among concentric circles.
Drawings
FIG. 1 is a flow chart of a method for detecting a circle in an image according to the present invention;
FIG. 2 is a flow chart of a method of X grouping, Y grouping and R grouping;
FIG. 3 is an image before the X grouping, Y grouping and R grouping processing;
FIG. 4 is an image after the X grouping, Y grouping and R grouping processing.
Detailed Description
The invention is further described below with reference to the following examples and the accompanying drawings, but is not limited thereto.
Example 1
As shown in fig. 1.
A method for detecting and processing a circular region in a video image, comprising the following steps:
A1, converting the captured consecutive multi-frame video images into grayscale edge images, namely Image1(1), Image1(2), …, Image1(p);
A2, finding the circles in each grayscale edge image Image1(1), Image1(2), …, Image1(p) and outputting the X-Y plane parameters of every circle detected in each image; the X-Y plane parameters of a circle comprise the x coordinate of the circle center, the y coordinate of the circle center and the radius r of the circle; storing the X-Y plane parameters of all circles from the multiple images in a single original matrix MT0, which has 3 columns and i rows, i being the total number of detected circles; column 1 of the original matrix MT0 stores the x coordinate of each detected circle center, column 2 stores the y coordinate of each detected circle center, and column 3 stores the radius r of each detected circle;
A3, checking the number of rows i of the original matrix MT0; if i is less than m, judging the circles recorded in MT0 to be error circles, emptying MT0 and returning to step A1; if i is not less than m, performing step A4. When only a few circles are detected, it is concluded that no circle actually exists in the image and that the few detections are merely error circles.
A4, performing X grouping, Y grouping and R grouping on the original matrix MT0 in sequence to handle the concentric-circle problem, and outputting the target circle parameters.
The edge image of a circular traffic sign or a circular stamp usually contains several concentric circles; selecting the circle with the largest radius among them as the region of interest allows the region of interest to be located accurately. The steps above process p consecutive frames of images to detect the circle of the region of interest.
As can be seen by comparing FIG. 3 and FIG. 4, the X grouping, Y grouping and R grouping processing greatly reduces false detections and locates the outermost circle among the concentric circles.
Example 2
As shown in fig. 2.
The method for detecting and processing a circular area in a video image according to embodiment 1, except that the specific steps of X grouping are as follows:
X1: sorting the rows of the original matrix MT0, each row treated as a whole, in ascending or descending order of the column-1 element x; the sorted matrix is denoted MT1;
let the sorted matrix MT1 have rows (x_1, y_1, r_1), (x_2, y_2, r_2), …, (x_i, y_i, r_i), ordered by the first element x;
X2: performing X grouping on the matrix MT1 to obtain the X grouping sub-matrices, the specific method being as follows:
computing in sequence the variance D_x1 of the array (x_1), the variance D_x2 of the array (x_1, x_2), the variance D_x3 of the array (x_1, x_2, x_3), …, the variance D_xk1 of the array (x_1, x_2, …, x_k1), the variance D_x(k1+1) of the array (x_2, x_3, …, x_(k1+1)), the variance D_x(k1+2) of the array (x_3, x_4, …, x_(k1+2)), …, the variance D_xn of the array (x_(n-k1), x_(n-k1+1), …, x_(n-1), x_n), stopping as soon as D_xn ≥ σ_x, where σ_x is the X grouping variance threshold; k1 is the maximum number of elements selected when computing a variance; this upper limit k1 on the number of array elements used for a variance is imposed to keep the computed variance meaningful;
X3: checking the current row number n; if n is less than LX, deleting the first n-1 rows of the matrix MT1 to obtain a matrix MT2; if n ≥ LX, extracting the first n-1 rows of the matrix MT1 as the 1st X grouping sub-matrix of MT1, denoted MTX1, and deleting those n-1 rows from MT1 to obtain the matrix MT2; LX is the X grouping minimum-row threshold: if n is less than LX, the group contains too few circles and they are regarded as error circles;
the matrix MTX1 thus consists of rows 1 to n-1 of MT1, and the matrix MT2 consists of the remaining rows n to i of MT1;
X4: repeating steps X2 and X3 on the matrix MT2 to obtain, in turn, the 2nd X grouping sub-matrix MTX2, the 3rd X grouping sub-matrix MTX3, …, and the z-th X grouping sub-matrix MTXz, until all data in the matrix MT1 have been X-grouped;
The specific steps of the Y grouping are as follows:
Y1: sorting the rows of the X grouping sub-matrix, each row treated as a whole, in ascending or descending order of the column-2 element y; the sorted matrix (for MTX1) is denoted MTXA1;
Y2: performing Y grouping on the matrix MTXA1, each row treated as a whole, to obtain the Y grouping sub-matrices, the specific method being as follows:
let the sorted matrix MTXA1 have rows (x_a, y_a, r_a), (x_b, y_b, r_b), (x_c, y_c, r_c), …, ordered by the second element y;
computing in sequence the variance D_y1 of the array (y_a), the variance D_y2 of the array (y_a, y_b), the variance D_y3 of the array (y_a, y_b, y_c), …, the variance D_yk2 of the array (y_a, y_b, …, y_k2), the variance D_y(k2+1) of the array (y_b, y_c, …, y_(k2+1)), the variance D_y(k2+2) of the array (y_c, y_d, …, y_(k2+2)), …, the variance D_yn of the array (y_(n-k2), y_(n-k2+1), …, y_(n-1), y_n), stopping the variance computation as soon as D_yn ≥ σ_y, where σ_y is the Y grouping variance threshold; k2 is the maximum number of elements selected when computing a variance; this upper limit k2 on the number of array elements used for a variance is imposed to keep the computed variance meaningful;
Y3: checking the current row number n; if n is less than LY, deleting the first n-1 rows of the matrix MTXA1 to generate a matrix MTXB1; if n ≥ LY, extracting the first n-1 rows of the matrix MTXA1 as the 1st Y grouping sub-matrix of MTXA1, denoted MTX1Y1, and deleting those n-1 rows from MTXA1 to obtain the matrix MTXB1; wherein LY is the Y grouping minimum-row threshold;
the 1st Y grouping sub-matrix MTX1Y1 generated from the matrix MTXA1 thus consists of rows 1 to n-1 of MTXA1;
Y4: repeating steps Y2 and Y3 on the matrix MTXB1 to obtain, in turn, the 2nd Y grouping sub-matrix MTX1Y2, the 3rd Y grouping sub-matrix MTX1Y3, …, and the w-th Y grouping sub-matrix MTX1Yw, until all data in the matrix MTXA1 have been Y-grouped;
Y5: performing steps Y1-Y4 in turn on the other sub-matrices MTX2, MTX3, …, MTXz obtained from the X grouping, finally yielding the sub-matrices grouped by both X and Y: MTX1Y1, MTX1Y2, …, MTX1Yw; MTX2Y1, MTX2Y2, …, MTX2Yu; …; MTXzY1, MTXzY2, …, MTXzYv.
After the X grouping and Y grouping, the circle data in each sub-matrix all describe concentric circles: the radii may differ, but the circle centers coincide or lie close together. The purpose of the subsequent R grouping is to select, from the concentric circles, the circles with the larger and mutually similar radii and to average their radii, the average being taken as the target circle for extraction.
The specific steps of R grouping are as follows:
R1: sorting the 1st Y grouping sub-matrix obtained from the Y grouping in descending order of the column-3 element r; the sorted matrix is denoted MTX1Y1R;
R2: treating each row of the matrix MTX1Y1R as a whole, grouping the rows by the size of the column-3 element r, keeping only the first R group, i.e. the group with the largest r values, and deleting the remaining data, which represent the smaller-radius circles among the concentric circles; the specific method is as follows:
let the column-3 elements of the sorted matrix MTX1Y1R, from top to bottom, be r_13, r_23, r_33, …;
computing in sequence the variance D_r1 of the array (r_13), the variance D_r2 of the array (r_13, r_23), the variance D_r3 of the array (r_13, r_23, r_33), …, the variance D_rk3 of the array (r_13, r_23, …, r_(k3)3), the variance D_r(k3+1) of the array (r_23, r_33, …, r_(k3+1)3), …, the variance D_rn of the array (r_(n-k3), r_(n-k3+1), …, r_(n-1), r_n), stopping the variance computation as soon as D_rn ≥ σ_r, where σ_r is the R grouping variance threshold; k3 is the maximum number of elements selected when computing a variance; this upper limit k3 on the number of array elements used for a variance is imposed to keep the computed variance meaningful;
R3: checking the current row number n; if n is less than LR, deleting the first n-1 rows of the matrix MTX1Y1R to generate a matrix MTX1Y1RB and returning MTX1Y1RB to step R2 in place of MTX1Y1R for extraction again; if n ≥ LR, extracting the first n-1 rows of the matrix MTX1Y1R as the 1st R grouping sub-matrix of MTX1Y1R, denoted MTX1Y1R1; the other data in the matrix MTX1Y1R are not processed further; wherein LR is the R grouping minimum-row threshold;
when n ≥ LR, the 1st R grouping sub-matrix MTX1Y1R1 generated from the matrix MTX1Y1R thus consists of rows 1 to n-1 of MTX1Y1R;
R4: averaging each column of the matrix MTX1Y1R1 to obtain the parameters (avr_x, avr_y, avr_r) of the corresponding target circle;
R5: performing steps R1-R4 on each of the remaining sub-matrices obtained from the Y grouping, namely MTX1Y2, …, MTX1Yw; MTX2Y1, MTX2Y2, …, MTX2Yu; …; MTXzY1, MTXzY2, …, MTXzYv, so that the parameters of every target circle are determined.
k1=k2=k3=4。
Example 3
The method for detecting and processing a circular area in a video image according to Embodiment 1, differing in that, in step A1, the captured consecutive multi-frame images are converted into the grayscale edge images Image1 by detecting the image edges with a Canny edge detection operator.
The Canny edge detection operator is a prior-art method commonly used for image processing. Considering the effectiveness of edge detection and the reliability of localization, Canny proposed three indicators for evaluating edge-detection performance:
the detection result should contain true edges as much as possible and false edges as little as possible.
High accuracy, the detected edge should be on the true boundary.
And the single pixel is wide, has high selectivity and has unique response to each edge.
For these three indicators, Canny proposed three corresponding optimization criteria for the first-order differential filter h'(x) used in edge detection, namely a maximum signal-to-noise-ratio criterion, an optimal zero-crossing localization criterion, and a single-edge-response criterion. They are as follows:
(a) Signal-to-noise-ratio criterion
SNR = |∫ over [-W, W] of G(-x)·h(x) dx| / ( σ·( ∫ over [-W, W] of h²(x) dx )^(1/2) ),
where G(x) is the edge function, h(x) is the impulse response of a low-pass filter with bandwidth W, and σ is the mean square error of the Gaussian noise.
(b) Localization-accuracy criterion
The edge localization accuracy L is defined as
L = |∫ over [-W, W] of G'(-x)·h'(x) dx| / ( σ·( ∫ over [-W, W] of h'²(x) dx )^(1/2) ),
where G'(x) and h'(x) are the first derivatives of G(x) and h(x); L is a measure of the accuracy of edge localization, and a larger L means more accurate localization.
(c) Single-edge-response criterion
To ensure only a single response to a single edge, the average spacing of the zero crossings of the derivative of the detection operator's impulse response should satisfy
D_zca(f') = π·( ∫ h'²(x) dx / ∫ h''²(x) dx )^(1/2),
where h''(x) is the second derivative of h(x) and f' denotes the image after edge detection.
These three criteria are quantitative descriptions of the aforementioned edge detection indicators. For step-shaped edges, Canny derives an optimal edge detector shape that is similar to the first derivative of the gaussian function, and thus Canny edge detectors are constructed from the first derivative of the gaussian function. The gaussian function is circularly symmetric, so the Canny operator is symmetric in the direction of the edge and antisymmetric in the direction perpendicular to the edge.
Let the two-dimensional Gaussian function be G(x, y) = (1 / (2πσ²))·exp( -(x² + y²) / (2σ²) ), where σ is the distribution parameter of the Gaussian and controls the degree of image smoothing. The optimal step-edge detection operator is based on the convolution ∇G ∗ f(x, y); the edge strength is |∇G ∗ f(x, y)| and the edge direction is the orientation of ∇G ∗ f(x, y).
By definition the Gaussian function has infinite support; in practical applications the template is truncated to a finite size N. Experiments in this patent show that a suitably truncated template still yields good edge detection results. A specific implementation of the Canny operator is given below.
Using the separability of the Gaussian function, the two convolution templates of ∇G are decomposed into one-dimensional row and column filters:
∂G/∂x = h1(x)·h2(y), ∂G/∂y = h2(x)·h1(y),
where h1(x) = √k·x·exp(-x²/(2σ²)) and h2(x) = √k·exp(-x²/(2σ²)) (and likewise for y). It can be seen that h1(x) = x·h2(x) and h1(y) = y·h2(y), with k a constant.
The two templates are then convolved with f(x, y) to obtain Ex = (∂G/∂x) ∗ f(x, y) and Ey = (∂G/∂y) ∗ f(x, y).
Let A(i, j) = ( Ex² + Ey² )^(1/2) and a(i, j) = arctan(Ey / Ex); then A(i, j) reflects the edge strength and a(i, j) is the direction perpendicular to the edge.
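The sketch below turns the separable derivative-of-Gaussian description above into code. The kernel radius, σ and the use of cv2.filter2D (which applies correlation rather than convolution, affecting only the sign convention of the direction, not the strength) are implementation choices of this sketch, not requirements of the patent.
```python
import cv2
import numpy as np

def gaussian_gradient(gray, sigma=1.0, radius=3):
    """Edge strength A(i, j) and direction a(i, j) from first derivatives of a
    Gaussian, using the separable forms h1(x) = x * h2(x) noted above."""
    x = np.arange(-radius, radius + 1, dtype=float)
    h2 = np.exp(-x**2 / (2.0 * sigma**2))        # 1-D Gaussian (constant k omitted)
    h1 = x * h2                                  # so that h1(x) = x * h2(x)
    gx = np.outer(h2, h1)                        # dG/dx template: h1 along x, h2 along y
    gy = np.outer(h1, h2)                        # dG/dy template: h2 along x, h1 along y
    img = gray.astype(float)
    ex = cv2.filter2D(img, cv2.CV_64F, gx)       # Ex: dG/dx applied to f(x, y)
    ey = cv2.filter2D(img, cv2.CV_64F, gy)       # Ey: dG/dy applied to f(x, y)
    strength = np.hypot(ex, ey)                  # A(i, j)
    direction = np.arctan2(ey, ex)               # a(i, j), perpendicular to the edge
    return strength, direction
```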
According to Canny's definition, the central edge point is the maximum, along the edge gradient direction, of the operator Gn convolved with the image f(x, y) within its neighborhood. Accordingly, whether the strength of each point is the maximum of its neighborhood along the gradient direction at that point can be used to decide whether the point is an edge point. A pixel is considered an edge point of the image when it satisfies the following three conditions:
1) The edge strength of the point is greater than the edge strength of two adjacent pixel points along the gradient direction of the point;
2) the direction difference between the point and the adjacent two points in the gradient direction of the point is less than 45 degrees;
3) the maximum value of the edge intensity in the 3 × 3 region centered on the point is smaller than a certain threshold value.
Further, when conditions 1) and 2) are both satisfied, the neighboring pixels along the gradient direction are eliminated from the candidate edge points; condition 3) matches the edge points against a threshold image formed from the regional gradient maxima, which eliminates many false edge points.
The Canny edge detection operator proceeds as follows. Step 1: filter the image with a Gaussian filter to suppress noise; Step 2: compute the gradient magnitude and direction using finite differences of the first-order partial derivatives; Step 3: apply non-maximum suppression to the gradient magnitude; Step 4: detect and link edges with a double-threshold algorithm.
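In practice these four steps are available as a single OpenCV call; the sketch below is one plausible wrapping, with the blur kernel and the two hysteresis thresholds chosen arbitrarily for illustration.
```python
import cv2

def canny_edges(gray, low=50, high=150):
    """Steps 1-4 above as implemented by OpenCV: Gaussian smoothing, gradient
    computation, non-maximum suppression and double-threshold edge linking."""
    smoothed = cv2.GaussianBlur(gray, (5, 5), 1.4)           # Step 1
    return cv2.Canny(smoothed, low, high)                    # Steps 2-4
```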
Example 4
The method for detecting and processing a circular area in a video image according to Embodiment 1, differing in that, in step A2, a PPHT algorithm is used to find the circles in each of the grayscale edge images Image1(1), Image1(2), …, Image1(p).
Circle detection with the probabilistic Hough transform (PPHT) comprises the following steps. Step 1: randomly select foreground points on the image edges and map them to curves in the parameter space; Step 2: when an intersection point in the parameter space reaches the minimum number of votes, find the corresponding circle in the X-Y plane coordinate system; Step 3: search the foreground points on the edges, connect the points lying on the circle (points whose distance is below a set threshold), store the circle parameters (center coordinates and radius), and then delete the circle from the input image to prevent repeated or invalid detections; Step 4: if the radius of the circle is within the given range, store the circle detection result in an array; Step 5: repeat Steps 1-4; Step 6: output the parameter data of all detected circles.
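OpenCV does not expose a probabilistic Hough transform for circles (its PPHT implementation, HoughLinesP, handles lines), so the sketch below approximates Steps 1-6 with cv2.HoughCircles; the radius range of Step 4 maps onto minRadius/maxRadius, and the remaining parameter values are illustrative assumptions.
```python
import cv2
import numpy as np

def detect_circle_params(gray, r_min, r_max):
    """Approximation of Steps 1-6: returns one (x, y, r) row per detected
    circle whose radius lies inside the given range."""
    found = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1, minDist=20,
                             param1=100, param2=30,
                             minRadius=int(r_min), maxRadius=int(r_max))
    return np.empty((0, 3)) if found is None else found[0]
```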

Claims (6)

1. A method for detecting and processing a circular region in a video image, comprising the steps of:
A1, converting the captured consecutive multi-frame video images into grayscale edge images, namely Image1(1), Image1(2), …, Image1(p);
A2, finding the circles in each grayscale edge image Image1(1), Image1(2), …, Image1(p) and outputting the X-Y plane parameters of every circle detected in each image; the X-Y plane parameters of a circle comprise the x coordinate of the circle center, the y coordinate of the circle center and the radius r of the circle; storing the X-Y plane parameters of all circles from the multiple images in a single original matrix MT0, which has 3 columns and i rows, i being the total number of detected circles; column 1 of the original matrix MT0 stores the x coordinate of each detected circle center, column 2 stores the y coordinate of each detected circle center, and column 3 stores the radius r of each detected circle;
A3, checking the number of rows i of the original matrix MT0; if i is less than m, judging the circles recorded in MT0 to be error circles, emptying MT0 and returning to step A1; if i is not less than m, performing step A4;
A4, performing X grouping, Y grouping and R grouping on the original matrix MT0 in sequence to handle the concentric-circle problem, and outputting the target circle parameters.
2. The method of claim 1, wherein the X grouping comprises the following steps:
X1: sorting the rows of the original matrix MT0, each row treated as a whole, in ascending or descending order of the column-1 element x; the sorted matrix is denoted MT1;
let the sorted matrix MT1 have rows (x_1, y_1, r_1), (x_2, y_2, r_2), …, (x_i, y_i, r_i), ordered by the first element x;
X2: performing X grouping on the matrix MT1 to obtain the X grouping sub-matrices, the specific method being as follows:
computing in sequence the variance D_x1 of the array (x_1), the variance D_x2 of the array (x_1, x_2), the variance D_x3 of the array (x_1, x_2, x_3), …, the variance D_xk1 of the array (x_1, x_2, …, x_k1), the variance D_x(k1+1) of the array (x_2, x_3, …, x_(k1+1)), the variance D_x(k1+2) of the array (x_3, x_4, …, x_(k1+2)), …, the variance D_xn of the array (x_(n-k1), x_(n-k1+1), …, x_(n-1), x_n), until D_xn ≥ σ_x; k1 represents the maximum number of elements selected when computing a variance;
X3: checking the current row number n; if n is less than LX, deleting the first n-1 rows of the matrix MT1 to obtain a matrix MT2; if n ≥ LX, extracting the first n-1 rows of the matrix MT1 as the 1st X grouping sub-matrix of MT1, denoted MTX1, and deleting those n-1 rows from MT1 to obtain the matrix MT2; wherein LX is the X grouping minimum-row threshold;
the matrix MTX1 thus consists of rows 1 to n-1 of MT1, and the matrix MT2 consists of the remaining rows n to i of MT1;
X4: repeating steps X2 and X3 on the matrix MT2 to obtain, in turn, the 2nd X grouping sub-matrix MTX2, the 3rd X grouping sub-matrix MTX3, …, and the z-th X grouping sub-matrix MTXz, until all data in the matrix MT1 have been X-grouped;
the specific steps of the Y grouping are as follows:
Y1: sorting the rows of the X grouping sub-matrix, each row treated as a whole, in ascending or descending order of the column-2 element y; the sorted matrix is denoted MTXA1;
Y2: performing Y grouping on the matrix MTXA1, each row treated as a whole, to obtain the Y grouping sub-matrices, the specific method being as follows:
let the sorted matrix MTXA1 have rows (x_a, y_a, r_a), (x_b, y_b, r_b), (x_c, y_c, r_c), …, ordered by the second element y;
computing in sequence the variance D_y1 of the array (y_a), the variance D_y2 of the array (y_a, y_b), the variance D_y3 of the array (y_a, y_b, y_c), …, the variance D_yk2 of the array (y_a, y_b, …, y_k2), the variance D_y(k2+1) of the array (y_b, y_c, …, y_(k2+1)), the variance D_y(k2+2) of the array (y_c, y_d, …, y_(k2+2)), …, the variance D_yn of the array (y_(n-k2), y_(n-k2+1), …, y_(n-1), y_n), stopping the variance computation when D_yn ≥ σ_y; k2 represents the maximum number of elements selected when computing a variance;
Y3: checking the current row number n; if n is less than LY, deleting the first n-1 rows of the matrix MTXA1 to generate a matrix MTXB1; if n ≥ LY, extracting the first n-1 rows of the matrix MTXA1 as the 1st Y grouping sub-matrix of MTXA1, denoted MTX1Y1, and deleting those n-1 rows from MTXA1 to obtain the matrix MTXB1; wherein LY is the Y grouping minimum-row threshold;
the 1st Y grouping sub-matrix MTX1Y1 generated from the matrix MTXA1 thus consists of rows 1 to n-1 of MTXA1;
Y4: repeating steps Y2 and Y3 on the matrix MTXB1 to obtain, in turn, the 2nd Y grouping sub-matrix MTX1Y2, the 3rd Y grouping sub-matrix MTX1Y3, …, and the w-th Y grouping sub-matrix MTX1Yw, until all data in the matrix MTXA1 have been Y-grouped;
Y5: performing steps Y1-Y4 in turn on the other sub-matrices MTX2, MTX3, …, MTXz obtained from the X grouping, finally yielding the sub-matrices grouped by both X and Y: MTX1Y1, MTX1Y2, …, MTX1Yw; MTX2Y1, MTX2Y2, …, MTX2Yu; …; MTXzY1, MTXzY2, …, MTXzYv;
the specific steps of the R grouping are as follows:
R1: sorting the 1st Y grouping sub-matrix obtained from the Y grouping in descending order of the column-3 element r; the sorted matrix is denoted MTX1Y1R;
R2: treating each row of the matrix MTX1Y1R as a whole, grouping the rows by the size of the column-3 element r, keeping only the first R group, i.e. the group with the largest r values, and deleting the remaining data, which represent the smaller-radius circles among the concentric circles; the specific method is as follows:
let the column-3 elements of the sorted matrix MTX1Y1R, from top to bottom, be r_13, r_23, r_33, …;
computing in sequence the variance D_r1 of the array (r_13), the variance D_r2 of the array (r_13, r_23), the variance D_r3 of the array (r_13, r_23, r_33), …, the variance D_rk3 of the array (r_13, r_23, …, r_(k3)3), the variance D_r(k3+1) of the array (r_23, r_33, …, r_(k3+1)3), …, the variance D_rn of the array (r_(n-k3), r_(n-k3+1), …, r_(n-1), r_n), stopping the variance computation when D_rn ≥ σ_r; k3 represents the maximum number of elements selected when computing a variance, and the upper limit k3 on the number of array elements used for a variance is imposed to ensure the effectiveness of the computed variance;
R3: checking the current row number n; if n is less than LR, deleting the first n-1 rows of the matrix MTX1Y1R to generate a matrix MTX1Y1RB and returning MTX1Y1RB to step R2 in place of MTX1Y1R for extraction again; if n ≥ LR, extracting the first n-1 rows of the matrix MTX1Y1R as the 1st R grouping sub-matrix of MTX1Y1R, denoted MTX1Y1R1; the other data in the matrix MTX1Y1R are not processed further; wherein LR is the R grouping minimum-row threshold;
when n ≥ LR, the 1st R grouping sub-matrix MTX1Y1R1 generated from the matrix MTX1Y1R thus consists of rows 1 to n-1 of MTX1Y1R;
R4: averaging each column of the matrix MTX1Y1R1 to obtain the parameters (avr_x, avr_y, avr_r) of the corresponding target circle;
R5: performing steps R1-R4 on each of the remaining sub-matrices obtained from the Y grouping, namely MTX1Y2, …, MTX1Yw; MTX2Y1, MTX2Y2, …, MTX2Yu; …; MTXzY1, MTXzY2, …, MTXzYv, so that the parameters of every target circle are determined.
3. The method of claim 2, wherein k1 = k2 = k3 = 4.
4. The method according to claim 1, wherein in step A1 the captured consecutive multi-frame images are converted into the grayscale edge images Image1 by detecting the image edges with a Canny edge detection operator.
5. The method of claim 1, wherein in step A2 a PPHT algorithm is used to find the circles in each of the grayscale edge images Image1(1), Image1(2), …, Image1(p).
6. The method of claim 1, wherein p is 5.
CN201710284255.9A 2017-04-26 2017-04-26 Method for detecting and processing circular area in video image Active CN107133961B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710284255.9A CN107133961B (en) 2017-04-26 2017-04-26 Method for detecting and processing circular area in video image


Publications (2)

Publication Number Publication Date
CN107133961A CN107133961A (en) 2017-09-05
CN107133961B true CN107133961B (en) 2019-12-27

Family

ID=59716153


Country Status (1)

Country Link
CN (1) CN107133961B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112651910B (en) * 2019-10-11 2023-12-26 新疆三维智达网络科技有限公司 Method and system for generating superimposed anti-counterfeiting seal


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150302252A1 (en) * 2014-04-16 2015-10-22 Lucas A. Herrera Authentication method using multi-factor eye gaze

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103971087A (en) * 2013-07-12 2014-08-06 湖南纽思曼导航定位科技有限公司 Method and device for searching and recognizing traffic signs in real time

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Chen Xiaoyan, "Improved Hough Transform Circle Detection Method" (改进的Hough变换检测圆方法), Computer Systems and Applications (计算机系统应用), 2015, vol. 24, no. 8, full text *

Also Published As

Publication number Publication date
CN107133961A (en) 2017-09-05


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant