CN114757880A - Automatic detection method for clock travel accuracy based on machine vision - Google Patents

Automatic detection method for clock travel accuracy based on machine vision

Info

Publication number
CN114757880A
Authority
CN
China
Prior art keywords
dial
image
clock
travel
plane
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210241879.3A
Other languages
Chinese (zh)
Inventor
蒋维 (Jiang Wei)
李华 (Li Hua)
付西红 (Fu Xihong)
韩军 (Han Jun)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fujian Reida Precision Co ltd
Fujian Institute of Research on the Structure of Matter of CAS
XiAn Institute of Optics and Precision Mechanics of CAS
Original Assignee
Fujian Reida Precision Co ltd
Fujian Institute of Research on the Structure of Matter of CAS
XiAn Institute of Optics and Precision Mechanics of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fujian Reida Precision Co ltd, Fujian Institute of Research on the Structure of Matter of CAS, XiAn Institute of Optics and Precision Mechanics of CAS filed Critical Fujian Reida Precision Co ltd
Priority to CN202210241879.3A
Publication of CN114757880A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00: Image enhancement or restoration
    • G06T 5/20: Image enhancement or restoration using local operators
    • G06T 5/30: Erosion or dilatation, e.g. thinning
    • G06T 7/00: Image analysis
    • G06T 7/0002: Inspection of images, e.g. flaw detection
    • G06T 7/0004: Industrial image inspection
    • G06T 7/001: Industrial image inspection using an image reference approach
    • G06T 7/10: Segmentation; Edge detection
    • G06T 7/12: Edge-based segmentation
    • G06T 7/136: Segmentation; Edge detection involving thresholding
    • G06T 7/155: Segmentation; Edge detection involving morphological operators
    • G06T 7/60: Analysis of geometric attributes
    • G06T 7/62: Analysis of geometric attributes of area, perimeter, diameter or volume
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10004: Still image; Photographic image
    • G06T 2207/30: Subject of image; Context of image processing
    • G06T 2207/30108: Industrial image inspection

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Quality & Reliability (AREA)
  • Geometry (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention discloses an automatic detection method for clock travel accuracy based on machine vision, belonging to the technical field of clock timing. The method specifically comprises the following steps: image acquisition, image preprocessing, clock image processing, online clock travel-time detection, and output of the detection result. The clock image processing specifically comprises morphological processing of the dial image, straight-line detection of the clock pointers, fitting of the dial scale marks, and fitting of the pointer rotation center. A high-resolution image is acquired by machine vision, the real-time contour features of the dial and pointers are identified and extracted, and online automatic quality inspection of clock travel accuracy is realized by fast image-processing algorithms. With a measurement accuracy better than ±1 s and a measurement speed better than 0.8 s per piece, the industrial goal of equipment for measuring the actual daily rate of a clock is achieved for the first time. The method solves the problems of low efficiency, high labor intensity and high misjudgment rate of manual measurement, and separates qualified from unqualified products automatically and efficiently, with high recognition accuracy and high measurement speed.

Description

Automatic detection method for clock travel accuracy based on machine vision
Technical Field
The invention relates to the technical field of clock timing, in particular to an automatic detection method for clock travel accuracy based on machine vision.
Background
The time difference (daily rate) is one of the main indexes characterizing the travel quality of a clock. It is generally evaluated in two ways: by the instantaneous rate and by the actual travel rate. The instantaneous rate can be measured on a timing machine (rate tester) and mainly serves to inspect the quality of purchased movements; measurement is efficient, but because many subsequent process steps such as pointer assembly also make the travel time inaccurate, this method cannot fully characterize the actual travel error of the clock. At present there is essentially no suitable equipment for measuring the actual daily rate; some enterprises with the capability compare clocks manually and visually with standard time (24 h) and spot-check a certain proportion of production. That approach is time-consuming and labor-intensive, has low measurement precision and is easily affected by human subjectivity. The applicant therefore provides an automatic detection method for clock travel accuracy based on machine vision, which solves the problems of time and labor consumption, low measurement precision and susceptibility to human subjective influence.
Disclosure of Invention
Technical scheme (I)
The invention is realized by the following technical scheme: the automatic detection method for the clock travel accuracy based on the machine vision specifically comprises the following steps:
image acquisition and image preprocessing;
processing a clock image;
online detection of travel time of the clock;
Outputting a detection result;
the clock image processing specifically comprises the following steps:
carrying out morphological processing on the dial image;
detecting the straight line of the clock pointer;
fitting dial scale marks;
and fitting the rotating center of the pointer.
As a further description of the above solution, the dial image morphological processing specifically comprises the following steps:
eroding the dial image;
dilating the dial image;
opening and closing the dial image;
labeling connected domains of the dial image and extracting regions;
and thinning the dial image.
As a further explanation of the above solution, the clock hand straight line detection specifically includes:
fitting a Hough transform straight line;
and fitting a least square straight line.
As a further explanation of the above scheme, the fitting of the dial scale marks specifically comprises:
defining moments of the image;
digital features of the image are defined.
As a further explanation of the above scheme, the fitting of the pointer rotation center specifically comprises:
determining the gravity center position of the dial;
and determining the radius R and the coordinates of the circle center of the dial.
As a further explanation of the above scheme, the online travel time detection of the timepiece specifically includes the following steps:
establishing a camera imaging model;
calibrating a dial plate plane;
reconstructing a dial plane;
identifying the dial reading at its spatial position;
and (4) correcting dial reading identification.
As a further description of the above scheme, calibrating the dial plane specifically comprises the following steps:
establishing a dial plate plane space model;
calculating dial plate plane parameters;
and outputting the world coordinates of the dial plane.
As a further description of the above solution, the reconstructing of the dial plane specifically includes the following steps:
solving a dial plane;
extracting a dial characteristic space position;
establishing a local world coordinate system;
and solving the dial features of the local coordinate system.
As a further explanation of the above scheme, the dial reading at the spatial position is identified by an angle method or a distance method.
As a further explanation of the above scheme, the correction of the dial reading identification specifically includes the following steps:
correcting the position error of the pointer;
and correcting the angle error of the pointer.
(III) advantageous effects
Compared with the prior art, the invention has the following beneficial effects:
the invention has the advantages that: the method comprises the steps of obtaining a high-resolution image based on machine vision, identifying and extracting real-time contour features of a dial and a pointer, realizing online quality automatic detection of clock travel accuracy through a rapid image processing algorithm, achieving the industrial target of clock actual travel daily difference measuring equipment from scratch by enabling the measuring accuracy to be better than +/-1 s and the measuring speed to be better than 0.8s/pcs, solving the problems of low manual measuring efficiency, high labor intensity, high misjudgment rate and the like, automatically and efficiently separating qualified products from unqualified products, and achieving high identification accuracy and high measuring speed.
Drawings
Other features, objects and advantages of the invention will become more apparent upon reading the following detailed description of non-limiting embodiments, made with reference to the accompanying drawings, in which:
FIG. 1 is a schematic diagram of an opening operation process of the opening and closing process of the dial image in the embodiment;
FIGS. 2a and 2b are schematic diagrams illustrating different space duality principles in the embodiment of the present invention;
FIG. 3 is a schematic view of Hough transform polar coordinates according to an embodiment of the present invention;
FIG. 4 compares least-squares and RANSAC lines in an embodiment of the present invention; FIG. 4a is a schematic diagram of a data set containing much abnormal data, and FIG. 4b shows the straight line computed by RANSAC;
FIG. 5 is a schematic diagram of an imaging model of a camera according to an embodiment of the present invention;
FIG. 6 is a schematic diagram illustrating four coordinate system transformations of an imaging model of a camera according to an embodiment of the present invention;
FIG. 7 is a schematic view of a dial plate plane calibration model in an embodiment of the present invention;
FIG. 8 is a schematic view of a model of a dial plate recognition system in an embodiment of the present invention;
FIG. 9 is a schematic diagram of a coordinate transformation procedure in an embodiment of the present invention;
fig. 10 is a schematic view of an automatic detection process of the travel accuracy of the timepiece according to the embodiment of the invention.
Detailed Description
Example 1
The automatic detection method for the clock travel accuracy based on the machine vision specifically comprises the following steps:
Image acquisition and image preprocessing; this specifically comprises image denoising, enhancement and similar processing. These steps belong to the prior art and are not described in detail here;
processing a clock image; the method specifically comprises the following steps:
1) Dial image erosion processing. Image erosion operation: when the structuring element S is moved over a given target image X, the translate S[x] has the following three possible states relative to X:
a. S[x] ⊆ X: the correlation between S[x] and X is maximal;
b. S[x] ∩ X = ∅: S[x] is unrelated to X;
c. S[x] ∩ X and S[x] ∩ Xᶜ are both non-empty: S[x] and X are only partially correlated.
In state (a) the correlation between S[x] and X is largest, so the set of all points x at which the structuring element lies entirely inside the image is the maximally correlated point set of the image; it is called the erosion of X by S and is written X Θ S. Defining the erosion operation in a set-wise manner:
X Θ S = {x | S[x] ⊆ X}
the method can eliminate the boundary point information of the target image by using the corrosion operation of the image, remove the object smaller than the structural element by selecting the proper structural element, reduce one or more pixels on the periphery of the outline of the object, and eliminate the object with different sizes by the selection of the structural element. And more useless background information can be reduced by applying the corrosion algorithm, and the operation speed is improved. Particularly, it is suitable for eliminating useless background information such as decorative marks on the dial.
2) Dial image dilation. Dilation is regarded as the dual operation of erosion; the dilation operation expands every point x in the image X and is written X ⊕ S. Defined in a set-wise manner:
X ⊕ S = {x | S[x] ∩ X ≠ ∅}
The algorithm is essentially equivalent to moving the structuring element over the original image and adding pixel points, according to the structuring element, at positions of the image that contain no information, so that the image is enlarged, i.e. the structuring element is merged with the original image. Dilation can connect background information near the image edge, or two parts separated by a small gap, so it can be used to fill and repair holes in the image after processing.
3) Dial image opening and closing processing. Let B be the structuring element and A the target image. The opening operation performs erosion first and then dilation, and is called the opening of A by B; the closing of A by B proceeds in the opposite order. The specific formulas of the opening and closing operations are:
A∘B = (A Θ B) ⊕ B
A•B = (A ⊕ B) Θ B
Referring to fig. 1, the opening operation proceeds as shown there, and the closing operation is the reverse of the opening. As can be seen from fig. 1, after the structuring element B operates on image A, small objects are separated from A at thin connections, the image boundary is effectively smoothed, and the area of the target image A remains essentially unchanged. Performing the opening operation on the image smooths its boundary contour and effectively suppresses noise in image A, i.e. removes small, discrete or uneven boundary parts of A. The opening algorithm therefore has the two effects of smoothing and denoising.
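The speck-removing and hole-filling behavior described above can be demonstrated with SciPy's standard morphology routines; this sketch assumes SciPy is available and uses toy images, not dial data:

```python
import numpy as np
from scipy import ndimage

# Opening (erosion then dilation) removes specks smaller than the
# structuring element; closing (dilation then erosion) fills small holes.
selem = np.ones((3, 3), dtype=bool)

square = np.zeros((9, 9), dtype=bool)
square[2:7, 2:7] = True            # 5x5 target object

noisy = square.copy()
noisy[0, 0] = True                 # isolated one-pixel speck (noise)
opened = ndimage.binary_opening(noisy, structure=selem)
print(bool(opened[0, 0]))          # False: speck removed, square kept

holed = square.copy()
holed[4, 4] = False                # one-pixel hole in the object
closed = ndimage.binary_closing(holed, structure=selem)
print(bool(closed[4, 4]))          # True: hole filled
```

Note that in both cases the 5×5 object itself is returned unchanged, consistent with "the area of the target image A remains essentially unchanged".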
4) Connected-domain labeling and region extraction of the dial image. Extracting the pointers and scale marks of the dial is essential for identifying the dial reading, and the region where the pointers lie must be determined before they are extracted. Determining connected regions is a common algorithm in visual measurement; in general, the connectivity adopted for the regions determines how components are distinguished.
i. Labeling by 4 neighborhoods and 8 neighborhoods is the most common method. In 4-connectivity a pixel is connected to the 4 pixels adjacent along its edges; in 8-connectivity it is additionally connected to its four diagonal neighbors, i.e. to all eight surrounding directions;
ii. 0 pixels and 1 pixels: a binary image is obtained by threshold segmentation; pixels with value 0 are the background and pixels with value 1 are the target.
When the number of connected components is computed by labeling connected blocks, discriminating the connected parts is very important. Labeling connected regions means distinguishing connected parts of the same pixel value and marking each connected part with a different code.
In fact, finding valuable connected components in the binarized image is the ultimate goal of the image processing. Let Y be a connected component of the set A, and assume a point p in Y is known. Then all points of Y can be computed by iterating:
X_k = (X_{k-1} ⊕ B) ∩ A, k = 1, 2, 3, ...
where X_0 = {p} and B is a symmetric structuring element. If X_k = X_{k-1}, the algorithm has converged and the computation finishes at iteration step k; then Y = X_k, and the union of X_k with A contains the filled region together with its edge.
Boundary extraction: the boundary of set A is denoted β(A); it can be obtained by first eroding A with B and then subtracting the erosion from A, i.e.
β(A) = A − (A Θ B)
where B is a suitable structuring element.
For each dial and pointer image, the extraction method adopted differs because of differences in geometric characteristics and pointer position. Connected regions of different sizes and shapes are described as follows:
i. Perimeter: the sum of the distances between boundary pixels, found by searching the boundary pixel values in the image region; the distance between boundary pixels adjacent along an edge is 1, and the distance between diagonally adjacent boundary pixels is √2;
ii. Area: the sum of all pixel points in the connected domain;
iii. Density: C = (perimeter)² / area;
iv. Circularity: R = 4π·(area) / (perimeter)².
The purpose of template matching is to remove the background information of the dial and so obtain a dial template. The project extracts the target processing region mainly through the perimeter feature of the dial: by searching for and keeping the connected region with the largest perimeter in the image and setting the pixel values of all other regions to 0, the template image is obtained. The operation is completed mainly in the following 3 steps:
Step one: invert the binary dial image obtained by the OTSU algorithm;
Step two: find the connected component with the largest perimeter by searching the boundary pixel values in the image region;
Step three: keep the image of the largest-perimeter connected region and set the pixel brightness values of the other regions to 0; compute the area of each connected component in the region and remove the regions of smaller area.
The pointer region is extracted according to density: since the pointer region is the longest region in the whole image, its extraction is realized by obtaining the density of each region, sorting, keeping only the region with the highest density and setting the pixel brightness values of the other regions to 0.
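The region descriptors above can be sketched as follows; the boundary-pixel count is a simple 4-neighborhood approximation of the perimeter, and the function name `shape_features` is an assumption of this example, not the patent's code:

```python
import numpy as np

def shape_features(mask):
    """Area, perimeter, density C = P^2/A and circularity R = 4*pi*A/P^2
    for a single binary region, as used to pick the dial outline and
    the pointer region."""
    area = int(mask.sum())
    # boundary pixels: foreground pixels with at least one background
    # pixel among their 4 edge-adjacent neighbors
    padded = np.pad(mask, 1)
    boundary = mask & ~(padded[:-2, 1:-1] & padded[2:, 1:-1] &
                        padded[1:-1, :-2] & padded[1:-1, 2:])
    perimeter = int(boundary.sum())
    density = perimeter ** 2 / area
    circularity = 4 * np.pi * area / perimeter ** 2
    return area, perimeter, density, circularity

# a filled 10x10 square: area 100, boundary ring of 36 pixels
mask = np.zeros((12, 12), dtype=bool)
mask[1:11, 1:11] = True
a, p, c, r = shape_features(mask)
print(a, p)   # 100 36
```

Note that by construction C·R = 4π, so density and circularity rank regions in opposite orders: the elongated pointer region maximizes density, a circular dial outline maximizes circularity.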
5) Dial image thinning operation. In the target image processing step, opening, erosion and closing operations are applied continually to the object in the binarized image, and the process is repeated until the object in the image is reduced to units one pixel wide; this is the skeleton operation, with the specific formulas:
S(A) = ⋃_{k=0}^{K} S_k(A)
S_k(A) = (A Θ kB) − (A Θ kB)∘B
In the formulas above, S(A) is the skeleton operation on image A, the union of the skeleton subsets of the image; B denotes a structuring element, and (A Θ kB) denotes A successively eroded k times, i.e. (A Θ kB) = ((...(A Θ B) Θ B) Θ ...) Θ B.
K is the number of iterations before image A is eroded into an empty set, i.e.:
K = max{k | (A Θ kB) ≠ ∅}
The reconstruction of A is done by dilation of the subsets, i.e.:
A = ⋃_{k=0}^{K} (S_k(A) ⊕ kB)
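The skeleton formula can be sketched with SciPy morphology primitives; the illustrative `skeleton` function and the bar-shaped test image below are assumptions of this example, not the patent's implementation:

```python
import numpy as np
from scipy import ndimage

def skeleton(A, B=np.ones((3, 3), dtype=bool)):
    """Morphological skeleton S(A) = union over k of
    (A eroded k times by B) minus its opening by B."""
    S = np.zeros_like(A)
    eroded = A.copy()                 # k-th erosion (A Θ kB)
    while eroded.any():
        opened = ndimage.binary_opening(eroded, structure=B)
        S |= eroded & ~opened         # skeleton subset S_k(A)
        eroded = ndimage.binary_erosion(eroded, structure=B)
    return S

# a 3-pixel-wide horizontal bar thins to its one-pixel center line
bar = np.zeros((7, 15), dtype=bool)
bar[2:5, 1:14] = True
sk = skeleton(bar)
print(int(sk.sum()))   # 11 pixels on the middle row survive
```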
after the steps are completed, the dial image morphological processing is finished, then the clock pointer straight line detection is carried out, and the method specifically comprises the following steps:
1) Hough-transform straight-line fitting. The operational idea of line detection with the Hough algorithm is to exploit the duality of points and lines between image space and parameter space: the coordinates of the image in image space are transformed into the parameter space, the peak point of the parameter space is searched for, and the longest straight line in image space is found by duality.
Hough-transform line detection is completed using the duality principle between points and straight lines in the two different spaces. It can be described specifically as follows: in the image space XY, the equation of all straight lines through a point (x, y) can be expressed as y = px + q. Rewriting the line equation for the parameter space, the points on the line can be represented by q = −px + y. Duality principle between the spaces: collinear points in image space correspond to straight lines intersecting in one point in parameter space; and straight lines intersecting in one point of parameter space likewise correspond to collinear points in image space. Referring to figs. 2a and 2b, which illustrate this duality, the lines through the points (x_i, y_i) and (x_j, y_j) in image space satisfy y = px + q; rewritten as q = −px_i + y_i and q = −px_j + y_j, they represent two straight lines in the parameter space, which intersect in a point (p, q), and this point corresponds to the line y = px + q in image space.
According to the point-line duality principle, the Hough algorithm replaces the extraction of the straight line in image space by the extraction of the point in parameter space, so only the intersection point of the straight lines in parameter space needs to be detected. Because the slope p is unbounded for near-vertical lines, y = px + q is rewritten in the parametric representation ρ = x cos θ + y sin θ (referring to fig. 3), where ρ is the perpendicular distance between the origin and the line and θ is the angle its normal forms with the x axis. The relation ρ = x cos θ + y sin θ thus establishes a duality between a point of image space and a sinusoidal curve in parameter space, and the process of obtaining image lines by the Hough-transform algorithm is essentially the process of finding intersection points of sinusoids in the polar coordinate system.
2) Least-squares straight-line fitting.
i. Least-squares straight-line fitting
Suppose that in a certain image space XY the points (x_i, y_i), i = 1, ..., n, of the pointer region obtained by thinning are connected in a shape approximating a straight line; the line parameters are obtained by minimizing the residual sum of squares.
Let the equation of the line be expressed as: p(x) = kx + b
In the formula: k is the slope of the line and b the intercept of the line.
The residual sum of squares Q(k, b) is:
Q(k, b) = Σ_{i=1}^{n} (y_i − k x_i − b)²
The minimum point of Q(k, b) is found where its directional gradients equal zero, i.e.:
∂Q/∂k = −2 Σ_{i=1}^{n} x_i (y_i − k x_i − b) = 0
∂Q/∂b = −2 Σ_{i=1}^{n} (y_i − k x_i − b) = 0
According to the formulas above, the line parameters k and b are obtained as:
k = (n Σ x_i y_i − Σ x_i Σ y_i) / (n Σ x_i² − (Σ x_i)²)
b = (Σ y_i − k Σ x_i) / n
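The closed-form solution for k and b can be checked with a short NumPy sketch (illustrative only):

```python
import numpy as np

def fit_line(xs, ys):
    """Minimize Q(k,b) = sum (y_i - k*x_i - b)^2 via the
    closed-form normal equations."""
    xs, ys = np.asarray(xs, float), np.asarray(ys, float)
    n = len(xs)
    k = (n * (xs * ys).sum() - xs.sum() * ys.sum()) / \
        (n * (xs * xs).sum() - xs.sum() ** 2)
    b = (ys.sum() - k * xs.sum()) / n
    return k, b

# points exactly on y = 2x + 1 are recovered exactly
xs = np.arange(10)
k, b = fit_line(xs, 2.0 * xs + 1.0)
print(round(float(k), 6), round(float(b), 6))   # 2.0 1.0
```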
ii. RANSAC-improved least-squares straight-line fitting
The least-squares algorithm can compute and optimize the line parameters of a given line model equation so as to satisfy the point data of the whole region as far as possible. When the image contains a large noise influence, however, the least-squares algorithm cannot find the line, because the algorithm itself has the shortcoming of being susceptible to noise. Referring to fig. 4, it can be seen that the line fitting the correct data cannot be found accurately by the least-squares method, because the algorithm tries to accommodate all points, including the outlier data points. RANSAC, on the contrary, can obtain with sufficiently high probability a line model computed only from correct data; points far from the line are removed by setting a threshold, and line fitting is then carried out by least squares, improving both the precision and the accuracy of the fit.
The principle of the RANSAC algorithm is to draw sample data sets randomly and repeatedly, in an iterative manner, for the established parameter model, so as to obtain more correct samples and better-supported model parameters, and to verify the drawn sample model by means of the residual set of the model within the sample set. Over k draws, through continual iteration and verification, the probability that the drawn sample set approaches or coincides with the correct model becomes maximal. The sample at that moment is regarded as the model sample closest to the ideal model, and the accuracy of the model-parameter solution obtained by sampling is verified and supported by drawing a sample residual set.
The number of samplings k is obtained as follows:
When the model estimated by RANSAC yields parameters, assume the probability that the sample data points drawn randomly from the data sample set during the iterative computation are all correct data is p; i.e. p represents the probability that the result computed by the algorithm is a valid result. In the model-estimation process, let the minimum number of data points of each draw be n and the number of samplings be k.
When n points are selected for model estimation, wⁿ represents the probability that all n drawn points are correct data; 1 − wⁿ represents the probability that abnormal data exist among the n drawn points, i.e. at least one point is abnormal data, which indicates that the selected estimated model is not an ideal model. (1 − wⁿ)ᵏ represents the probability that after k iterations the algorithm is still unable to select the ideal estimated model, i.e.: 1 − p = (1 − wⁿ)ᵏ. Given n and p, the number of samplings needed for different w can be determined as: k = log(1 − p) / log(1 − wⁿ). The standard deviation of k is defined as:
SD(k) = √(1 − wⁿ) / wⁿ
The RANSAC algorithm picks n points at each iteration and computes a possible model through these n points. Since these n points must be unique, the standard deviation is added to k, making the obtained parameters more trustworthy.
Assuming the equation of the pointer line is y = kx + b, the RANSAC least-squares algorithm is implemented as follows:
Step one: compute the proportion w of noise data in the extracted pointer region, and compute the number of samplings k required so that, at confidence probability p, the drawn data sets include at least one group of sampled data all of which lie on the line y = kx + b.
Step two: at each sampling, randomly draw 2 points from the data to form a minimal data subset and compute the corresponding line parameters k_i, b_i of the subset; then use all data points of the region to check and support k_i and b_i by computing the distance of each data point of the region to the line y = k_i x + b_i.
Step three: select the optimal pointer-line model; remove points far from the line by setting the threshold to 0.2 pixel, obtain the line parameters k and b, and obtain the points on the line and the points around it within the threshold range.
Step four: fit the pointer line to the points on and around the line by the least-squares line-fitting algorithm.
Combined with analysis of the experimental images: since Hough-transform parameters are difficult to select and determine, the discretized parameters make the obtained line result inaccurate; the Hough transform is a global transform over the image, with a large amount of computation and memory consumption, and the endpoint positions of a line cannot be obtained by the fitting alone. RANSAC, however, can estimate model parameters robustly: the algorithm can estimate high-precision parameters from a data set containing a large amount of abnormal data, and the RANSAC least-squares algorithm can compute the pointer line accurately amid complex instrument information. Experiments prove that the RANSAC least-squares algorithm achieves higher precision while guaranteeing recognition efficiency.
After the steps are completed, the pointer of the clock is detected, and dial scale mark fitting is carried out;
the fitting of dial scale marks specifically comprises:
1) Defining the moments of the image. Moments of the image: given a two-dimensional continuous function f(i, j), its (p+q)-order moment is defined as:
M_pq = ∬ i^p j^q f(i, j) di dj, p, q = 0, 1, 2, ...
According to the uniqueness principle of Papoulis, a two-dimensional image can be characterized by its moments: if f(i, j) is piecewise continuous and has non-zero values in only a limited part of the plane, then moments of all orders exist, and the moment sequence {M_pq} also uniquely determines f(i, j). For a binarized image f(i, j), with i, j = 0, 1, 2, ..., the pq-order moment can therefore be defined as:
M_pq = Σ_i Σ_j i^p j^q f(i, j), p, q = 0, 1, 2, ...
2) defining the digital characteristics of the image, and defining the digital characteristics of the image according to moments:
i. the center of gravity is defined as:
Figure RE-GDA0003686681430000121
ii. The moment of center of gravity is defined as:
Figure RE-GDA0003686681430000122
the above calculation formula firstly obtains 0-th moment M of each pixel when the algorithm is implemented00First moment M in the X direction01First moment M in the Y direction10. Let the gray value of a pixel point P (I, j) on the image be I (I, j). The center of gravity (x) of the meter image is extracted0,y0),I(x,y)kIs f (x, y)kBinary image of (2), center of gravity (x)0,y0) Can be expressed as:
Figure RE-GDA0003686681430000123
in the formula, N represents the number of pixels in the region.
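A minimal sketch of the gravity-center extraction by moments, assuming a NumPy binary image where i indexes columns (x) and j indexes rows (y):

```python
import numpy as np

def region_centroid(binary):
    """Centroid of a binary region via moments M00, M10, M01.

    M_pq = sum_i sum_j i^p j^q f(i, j); x0 = M10/M00, y0 = M01/M00.
    """
    f = np.asarray(binary, dtype=float)
    js, is_ = np.nonzero(f)            # row (y) and column (x) indices of foreground
    vals = f[js, is_]
    m00 = vals.sum()                   # zero-order moment: region area
    m10 = (is_ * vals).sum()           # first-order moment in x
    m01 = (js * vals).sum()            # first-order moment in y
    return m10 / m00, m01 / m00        # (x0, y0)
```

For a binary image the formula reduces to the mean of the foreground pixel coordinates, matching the (1/N) Σ form above.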
After the above steps, the fitting of the dial scale marks is complete, and pointer rotation-center fitting begins.
The pointer rotation center fitting specifically comprises the following steps:
1) determining the gravity-center position of the dial; the center of the dial is determined by least-squares circle fitting of the scale centers of gravity, the least-squares circle being the circle that minimizes the sum of squared distances from the scale gravity-center points to the circle. Let the equation of the circle be: x² + y² − ax − by + d = 0.
2) Determining the radius R and the center coordinates of the dial; from x² + y² − ax − by + d = 0, the center of the circle is (a/2, b/2) and the radius is

R = √(a² + b² − 4d) / 2
The goal of the fit is to minimize the sum of squared residuals of all scale gravity-center points (x_i, y_i) with respect to the fitted circle. In the algebraic form above, this sum is expressed as:

ε = Σ (x_i² + y_i² − a x_i − b y_i + d)²

The unknowns a, b, d are then solved by a linear optimization method to minimize ε, giving the radius R and the center coordinates of the circle.
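The least-squares circle fit in the algebraic form x² + y² − ax − by + d = 0 reduces to one linear solve; a sketch of this (Kåsa) formulation, which matches the equations above:

```python
import numpy as np

def fit_circle(points):
    """Least-squares (Kasa) circle fit for x^2 + y^2 - a*x - b*y + d = 0.

    Minimizing sum (x_i^2 + y_i^2 - a*x_i - b*y_i + d)^2 is linear in
    (a, b, d). Center is (a/2, b/2), radius sqrt(a^2 + b^2 - 4d)/2.
    """
    pts = np.asarray(points, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    # a*x + b*y - d = x^2 + y^2  ->  linear system in (a, b, d)
    A = np.column_stack([x, y, -np.ones_like(x)])
    rhs = x**2 + y**2
    a, b, d = np.linalg.lstsq(A, rhs, rcond=None)[0]
    center = (a / 2, b / 2)
    R = np.sqrt(a**2 + b**2 - 4 * d) / 2
    return center, R
```
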
After the above steps, the fitting of the pointer rotation center is complete, and the processing of the clock image is finished;
the online detection of the travel time of the clock specifically comprises the following steps:
1) establishing a camera imaging model;
in order to describe the actual process by which the camera acquires a dial image, this project establishes the following four coordinate systems to describe the camera imaging process; the four coordinate systems are shown in fig. 5;
(i) world coordinate system
The world coordinate system O_W X_W Y_W Z_W describes the position of the CCD camera relative to the photographed target. To meet specific measurement requirements, the origin and axes of this coordinate system may be chosen accordingly; several local world coordinate systems may be established simultaneously, with the relative positions between them given by rigid transformations.
(ii) camera coordinate system
The camera coordinate system O_C X_C Y_C Z_C describes the spatial position of image pixels in the three-dimensional coordinate system. The origin O_C is the optical center of the CCD camera, and |O O_C| = f is the focal length. The X_C and Y_C axes are parallel to the rows and columns of the digital image, respectively, and the Z_C axis is perpendicular to the imaging plane, oriented so that any point in front of the camera has a positive coordinate.
(iii) image physical coordinate system
In the image physical coordinate system Oxy, the intersection O of the camera's imaging plane with the optical axis of the lens is taken as the origin. The x and y axes are parallel to the X_C and Y_C axes of the camera coordinate system, respectively, forming a rectangular coordinate system; the unit of image physical coordinates in this project is millimeters.
(iv) image pixel coordinate system
The image acquired by the CCD camera is processed as a digital image in the computer, so a rectangular plane coordinate system in pixel units, the pixel coordinate system O_p uv, is established. Its origin O_p lies at the upper-left corner of the pixel plane, and the u and v axes are parallel to the x and y axes of the image physical coordinate system, respectively.
The ideal linear camera imaging model involves three coordinate transformations: a rigid transformation between the world coordinate system and the camera coordinate system, a perspective projection onto the image physical coordinate system, and finally digitization of the acquired image to obtain the target image. The transformations between the four coordinate systems are shown in FIG. 6;
i. converting the world coordinates P_W of point P to camera coordinates P_C

A space point P_W = (X_W, Y_W, Z_W)^T in the world coordinate system is converted into the point P_C = (X_C, Y_C, Z_C)^T in the camera coordinate system. The transformation from O_W X_W Y_W Z_W to O_C X_C Y_C Z_C is rigid, composed of a rotation and a translation, specifically:

P_C = R P_W + T

where the translation vector is T = (T_x, T_y, T_z)^T and the rotation matrix R(θ, ψ, φ) is a 3 × 3 unit orthogonal matrix, expressible as the product of the elementary rotations about the three coordinate axes:

R(θ, ψ, φ) = R_z(φ) R_y(ψ) R_x(θ)
ii. converting the camera coordinates P_C of point P to image physical coordinates P_u

The coordinates P_C = (X_C, Y_C, Z_C)^T in the camera coordinate system are converted into the coordinates P_u = (x_u, y_u)^T in the image physical coordinate system. By the triangle-similarity principle of pinhole imaging, the transformation between O_C X_C Y_C Z_C and Oxy is a perspective projection, and the relation between P_C and P_u is:

x_u = f X_C / Z_C,  y_u = f Y_C / Z_C

where f is the effective focal length of the camera lens, i.e. the distance from the origin O of the image plane coordinate system to the optical center O_C.
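The rigid transformation followed by the perspective projection can be sketched in a few lines; R, T and f are assumed given:

```python
import numpy as np

def world_to_image(P_w, R, T, f):
    """Rigid transform P_c = R @ P_w + T, then the perspective projection
    x_u = f*Xc/Zc, y_u = f*Yc/Zc onto the image physical plane (mm)."""
    P_c = np.asarray(R, dtype=float) @ np.asarray(P_w, dtype=float) \
          + np.asarray(T, dtype=float)
    Xc, Yc, Zc = P_c
    return f * Xc / Zc, f * Yc / Zc
```
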
iii. converting the image physical coordinates P_u of point P to the point P_p = (x_p, y_p)^T in the pixel coordinate system
Since the actual imaging of the CCD camera is affected by lens distortion, the camera's imaging model is nonlinear, and the imaging process cannot be accurately described by a linear pinhole model alone; the lens distortion must therefore be introduced into the imaging model. The lens exhibits three main distortion modes, namely radial distortion, decentering (eccentric) distortion and thin-prism distortion, which displace the ideal point from the actual point in the radial and tangential directions. Accounting for the camera lens distortion, the point P_u transforms to the point P_d in the image plane coordinate system as:

x_d = x_u + δx_r + δx_d + δx_p,  y_d = y_u + δy_r + δy_d + δy_p

where δx_r, δy_r, δx_d, δy_d, δx_p and δy_p express the three lens distortions respectively (with r² = x_u² + y_u²):

δx_r = x_u (k1 r² + k2 r⁴),  δy_r = y_u (k1 r² + k2 r⁴)

δx_d = p1 (3x_u² + y_u²) + 2 p2 x_u y_u,  δy_d = p2 (x_u² + 3y_u²) + 2 p1 x_u y_u

δx_p = s1 r²,  δy_p = s2 r²

where k1, k2, p1, p2, s1 and s2 are the distortion coefficients of the lens, counted among the internal parameters of the camera. The point P_d transforms to the point P_p = (x_p, y_p)^T in the pixel coordinate system as:

u = u0 + x_d/dx − y_d cot θ0 / dx,  v = v0 + y_d / (dy sin θ0)

where u0, v0 are the coordinates of the origin O of the image plane coordinate system along the two axes of the pixel coordinate system, dx and dy are the physical sizes of a pixel along those axes, and θ0 is the angle formed by the coordinate axes u and v.
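A sketch of the distortion step, applying the radial, decentering and thin-prism terms to the ideal image-physical coordinates. The exact coefficient conventions follow the standard Brown-style model and are an assumption, since the patent's own formulas survive only as image placeholders:

```python
import numpy as np

def distort(xu, yu, k1, k2, p1, p2, s1=0.0, s2=0.0):
    """Apply the three lens-distortion terms to ideal coordinates (xu, yu):
    radial (k1, k2), decentering (p1, p2), thin-prism (s1, s2).
    Returns the distorted coordinates (xd, yd)."""
    r2 = xu**2 + yu**2
    dxr = xu * (k1 * r2 + k2 * r2**2)               # radial
    dyr = yu * (k1 * r2 + k2 * r2**2)
    dxd = p1 * (3 * xu**2 + yu**2) + 2 * p2 * xu * yu  # decentering
    dyd = p2 * (xu**2 + 3 * yu**2) + 2 * p1 * xu * yu
    dxp = s1 * r2                                    # thin prism
    dyp = s2 * r2
    return xu + dxr + dxd + dxp, yu + dyr + dyd + dyp
```
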
Three parameters α, β, γ are defined as follows:

α = f / dx,  β = f / (dy sin θ0),  γ = −f cot θ0 / dx

Based on the plane constraint, the X_W O_W Y_W plane of the world coordinate system is placed on the plane of the calibration plate, so Z_W = 0. Letting the i-th column of the rotation matrix R be r_i, and combining the coordinate transformations of the four steps, gives:

s [u, v, 1]^T = A [r1 r2 T] [X_W, Y_W, 1]^T

R and T are defined as the external parameters of the CCD camera, describing the relative position of the world coordinate system and the camera coordinate system, and A is defined as the internal parameter matrix of the camera, namely:

A = [[α, γ, u0], [0, β, v0], [0, 0, 1]]
2) the dial plane calibration method specifically comprises the following steps:
First, a dial plane space model is established. A plane is determined by 3 non-collinear points, so it suffices to obtain the three-dimensional coordinates of at least 3 points on the dial plane in the camera coordinate system and fit the plane equation of the dial plane in that system. Checkerboard calibration paper, of the same proportion and size as an optical calibration plate, is fixed by pasting onto the cover-glass plane to measure the spatial coordinates of points on that plane; since the cover plane is parallel to the dial plane, the external parameters of the calibration plane can be used to fit the dial plane. The calibration model of the dial plane is shown in fig. 7. The checkerboard calibration paper is fixed to the glass cover plane. The world coordinates of the corner points on the target plane are known, and their pixel coordinates are obtained by corner detection on the calibration-plane image. The external parameters R and T of the calibration plane in the camera coordinate system are solved from the world and pixel coordinates of the corner points, and from these external parameters the spatial angle between the dial plane and the camera plane is obtained.
Second, the dial plane parameters are calculated;
i. initial value solution of the parameters
The camera-parameter constraint is H = [h1 h2 h3] = λ A [r1 r2 t], where H is the homography matrix. Writing H with its last element normalized to 1,

H = [[h11, h12, h13], [h21, h22, h23], [h31, h32, 1]]

the constraint H = [h1 h2 h3] = λ A [r1 r2 t] becomes the projective mapping

s [u, v, 1]^T = H [X_W, Y_W, 1]^T

Letting the i-th row vector of H be h_i^T and eliminating the scale s gives

u = h1^T P̃ / h3^T P̃,  v = h2^T P̃ / h3^T P̃,  with P̃ = (X_W, Y_W, 1)^T

which can be arranged in matrix form as two linear equations per point in the unknown vector h = [h1^T h2^T h3^T]^T:

[X_W, Y_W, 1, 0, 0, 0, −u X_W, −u Y_W] h = u
[0, 0, 0, X_W, Y_W, 1, −v X_W, −v Y_W] h = v

The pixel coordinates and world coordinates of each corner point thus yield 2 equations, and since h contains 8 unknowns, the homography matrix H can be obtained from the coordinates of at least 4 corner points. In H = λ A [r1 r2 t], the internal parameter matrix A is obtained by camera calibration, so the external parameters R and T of the calibration plane can be solved.
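The two-equations-per-corner construction can be sketched as a direct linear transform (DLT) solve; with h33 normalized to 1 there are 8 unknowns, so at least 4 corner points are needed:

```python
import numpy as np

def homography_dlt(world_xy, pixels):
    """Estimate the homography H (h33 = 1) from >= 4 plane points.

    Each correspondence contributes the two linear equations described
    in the text; stacking them gives a system in the 8 unknowns of H.
    """
    rows, rhs = [], []
    for (X, Y), (u, v) in zip(world_xy, pixels):
        rows.append([X, Y, 1, 0, 0, 0, -u * X, -u * Y]); rhs.append(u)
        rows.append([0, 0, 0, X, Y, 1, -v * X, -v * Y]); rhs.append(v)
    h = np.linalg.lstsq(np.array(rows, float), np.array(rhs, float), rcond=None)[0]
    return np.append(h, 1.0).reshape(3, 3)
```
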
The internal parameter matrix A of the camera is obtained by calibration, and the homography matrix H has been solved above. The external rotation matrix R is unit orthogonal, so from H = λ A [r1 r2 t]:

r1 = λ A⁻¹ h1,  r2 = λ A⁻¹ h2,  t = λ A⁻¹ h3

where |r1| = |r2| = 1, so the scale is:

λ = 1 / ‖A⁻¹ h1‖ = 1 / ‖A⁻¹ h2‖

and the third column of R follows from orthogonality:

r3 = r1 × r2

This yields the initial values of the external parameter matrices R and T for each image, from which the three parameters θ, ψ, φ of the rotation matrix R can be solved.
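Recovering the external parameters from H and A as derived above can be sketched as:

```python
import numpy as np

def extrinsics_from_homography(H, A):
    """Recover r1, r2, r3 = r1 x r2 and t from H = lambda*A*[r1 r2 t],
    fixing the scale with |r1| = |r2| = 1 as in the text."""
    Ainv = np.linalg.inv(A)
    lam = 1.0 / np.linalg.norm(Ainv @ H[:, 0])
    r1 = lam * Ainv @ H[:, 0]
    r2 = lam * Ainv @ H[:, 1]
    r3 = np.cross(r1, r2)      # orthogonality gives the third column
    t = lam * Ainv @ H[:, 2]
    return np.column_stack([r1, r2, r3]), t
```
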
ii. parameter optimization

Because the camera lens is distorted in practice and the image contains noise, all camera parameters must be solved by optimization: the camera internal parameters and the external parameters obtained from the pixel and world coordinates of the corner points are taken as initial values, and the external parameters are refined. This embodiment employs the L-M Algorithm (Levenberg-Marquardt Algorithm) to optimize the parameters. The optimization objective function is established as:

min Σ_i Σ_j ‖ p_ij − p̂(A, k1, k2, p1, p2, R_i, T_i, P_wij) ‖²

where the pixel coordinate p_ij is obtained by corner detection, P_wij are the ideal world coordinates, and p̂(·) is their reprojection through the nonlinear camera model.
3) The reconstruction of the dial plane specifically comprises the following steps:
First, the dial plane is solved. A space plane is determined by 3 non-collinear points on it, and the dial plane is calibrated with the selected checkerboard calibration paper, giving the spatial position of the dial plane. Let the equation of the dial plane in the camera coordinate system be: b1 X_C + b2 Y_C + b3 Z_C + 1 = 0. Substituting the three-dimensional coordinates of the corner points into the plane equation gives the overdetermined system:

b1 X_Ci + b2 Y_Ci + b3 Z_Ci + 1 = 0,  i = 1, 2, ..., n

Written in matrix form this is B b = −1, where B is the n × 3 matrix of corner-point camera coordinates and b = (b1, b2, b3)^T. Left-multiplying both sides by B^T gives the normal equations:

B^T B b = −B^T 1

Solving this system yields b1, b2, b3, and hence the plane equation of the calibrated dial plane in the camera coordinate system.
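The overdetermined plane solve is a single least-squares step; a sketch, where NumPy's lstsq plays the role of the normal-equation solution:

```python
import numpy as np

def fit_plane(points_c):
    """Least-squares solution of b1*X + b2*Y + b3*Z + 1 = 0 from the
    overdetermined corner-point system B b = -1."""
    B = np.asarray(points_c, dtype=float)            # n x 3 camera coordinates
    b = np.linalg.lstsq(B, -np.ones(len(B)), rcond=None)[0]
    return b                                         # (b1, b2, b3)
```
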
Second, the spatial positions of the dial features are extracted. Using the internal parameter matrix A of the camera, the equation of the line O_C P joining the optical center O_C to the point P is computed from the coordinates of the dial scale centers of gravity in the pixel coordinate system. A dial feature point E lies on the line O_C P and simultaneously on the calibration plane, so the intersection E can be found from the line equation and the plane equation. First, the line equation O_C P is solved. From the camera imaging model, and accounting for lens distortion, the pixel coordinates are converted back to image physical coordinates:

x_d = (u − u0) dx,  y_d = (v − v0) dy

As introduced above, the dial scale centers of gravity are extracted by the gravity-center method, giving the pixel coordinates of the scale gravity-center points, and the RANSAC least-squares fit of the pointer line gives the pixel coordinates of points on that line. Given pixel coordinates (u, v)^T in the image pixel coordinate system, substitution into the above formula yields the actual physical coordinates (x_d, y_d)^T of the points on the dial. The camera imaging model of this project considers radial and decentering distortion, introducing 4 distortion coefficients in total, so the ideal physical coordinates (x_u, y_u)^T and the actual physical coordinates (x_d, y_d)^T are related by:

x_d = x_u + x_u(k1 r² + k2 r⁴) + p1(3x_u² + y_u²) + 2 p2 x_u y_u
y_d = y_u + y_u(k1 r² + k2 r⁴) + p2(x_u² + 3y_u²) + 2 p1 x_u y_u

with r² = x_u² + y_u². Inverting this relation gives the ideal physical coordinates (x_u, y_u)^T of the image. From the camera imaging model, the line O_C E through the optical center and the image point satisfies, in the camera coordinate system:

X_C / x_u = Y_C / y_u = Z_C / f

This is the line equation of O_C P in the camera coordinate system; intersecting it with the calibration plane b1 X_C + b2 Y_C + b3 Z_C + 1 = 0 yields the three-dimensional camera coordinates of the dial projection point P:

Z_C = −f / (b1 x_u + b2 y_u + b3 f),  X_C = x_u Z_C / f,  Y_C = y_u Z_C / f
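Back-projecting a dial point by intersecting the viewing ray with the calibrated plane can be sketched as:

```python
import numpy as np

def backproject_to_plane(xu, yu, f, b):
    """Intersect the viewing ray Xc/xu = Yc/yu = Zc/f with the dial plane
    b1*Xc + b2*Yc + b3*Zc + 1 = 0, returning the 3-D point P."""
    b1, b2, b3 = b
    Zc = -f / (b1 * xu + b2 * yu + b3 * f)   # depth of the intersection
    return np.array([xu * Zc / f, yu * Zc / f, Zc])
```

Substituting X_C = x_u Z_C / f and Y_C = y_u Z_C / f into the plane equation and solving for Z_C gives exactly the depth expression used here.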
Third, a local world coordinate system is established. To eliminate the spatial-position error of the dial and simplify the calculation, a local world coordinate system O_L X_L Y_L Z_L is established on the calibrated instrument plane, so that the three-dimensional coordinates of a point in the camera coordinate system are converted into two-dimensional coordinates in the O_L X_L Y_L plane of that system. The dial recognition system model is shown in fig. 8: plane π1 is the dial plane, P is a point on the surface plane of the measured instrument, and P′ and P″ denote the ideal and actual projection points on the image physical plane, respectively. Let the plane equation of π1 in the camera coordinate system be: b1 X_C + b2 Y_C + b3 Z_C + 1 = 0. For a point P on the dial plane, its coordinates P_C = (X_C, Y_C, Z_C)^T in O_C X_C Y_C Z_C and P_L = (X_L, Y_L, Z_L)^T in O_L X_L Y_L Z_L are related by:

P_C = R_L P_L + T_L

For the positional relationship between the two coordinate systems, shown in fig. 9, the coordinate system O_L X_L Y_L Z_L can be regarded as O_C X_C Y_C Z_C after 3 transformations: a rotation R1 about the X_C axis, a rotation R2 about the Y_C axis, and a translation T_Z along the Z_C axis. The position transformation from O_L X_L Y_L Z_L to O_C X_C Y_C Z_C is therefore:

P_C = R2(φ) R1(θ) P_L + (0, 0, T_Z)^T
The three unknowns θ, φ and T_Z of the coordinate-system transformation are solved as follows:

i. The values of the two rotation angles θ and φ are obtained by optimization from the external parameter R of the instrument calibration.

ii. For the translation T_Z, the translation relation between the two coordinate systems is:

X_C = X_C1,  Y_C = Y_C1,  Z_C = Z_C1 + T_Z

Substituting into the dial plane equation gives:

b1 X_C1 + b2 Y_C1 + b3 (Z_C1 + T_Z) + 1 = 0

The local coordinate system O_L X_L Y_L Z_L obtained by the transformation satisfies that the O_L X_L Y_L plane lies on the measured surface, i.e. points on the surface plane satisfy Z_L = 0, so the translated origin O_C1 lies on the measured surface plane. Therefore: T_Z = −1/b3.
Fourth, the dial features are solved in the local coordinate system. The points of the line selected by the RANSAC method have coordinates (x_Li, y_Li), i = 1, 2, ..., n in the local coordinate system O_L X_L Y_L Z_L; the scale gravity-center points extracted by the gravity-center method have coordinates (x_Lj, y_Lj), j = 1, 2, ..., n in the local coordinate system.
A least-squares algorithm fits a pointer line equation to the points (x_Li, y_Li) in the local coordinate system, i.e. the line parameters k and b are solved by minimizing the sum of squared residuals.
A least-squares circle-fitting algorithm applied to the scale gravity-center points (x_Lj, y_Lj) in the local coordinate system determines the dial center coordinates (x_L0, y_L0).
Since the rotation plane of the hands is parallel to the dial plane, the rotation center of the pointer in the X_L O_L Y_L plane of the local coordinate system should coincide with the dial center. The position of the pointer line is therefore corrected so that the pointer line equation passes through the scale center (x_L0, y_L0); the corrected pointer line equation is y − y_L0 = k(x − x_L0).
4) identifying the dial reading at the spatial position; the angle method. Reading identification is carried out by the angle method, computed from the angles between the pointer line, the zero-scale and full-scale gravity centers, and the pointer rotation center in the local coordinate system. The angle method is prior art and is not described here in detail.
The correction of dial-reading identification specifically comprises the following steps:
1) correcting the pointer position error. Assume the camera plane is parallel to the dial plane π1. Because the plane π2 in which the hands lie does not coincide with π1, and the spatial position of the dial causes the projection of the dial center on the camera plane not to coincide with the optical center of the lens, the reading-recognition system incurs a position-identification error. Since π1 and π2 are parallel, the actual projection of the pointer line onto the dial, the line B2a1, is parallel to the pointer line O3a2 captured by the camera. Suppose the pointer line fitted from the image pointer position is y = kx + b, and the dial center fitted by the gravity-center method is (x0, y0); forcing the pointer through the scale center and re-solving gives the line equation y − y0 = k(x − x0), so the line intercept correction δ is: δ = y0 − kx0 − b.
2) correcting the pointer angle error. Assume the pointer line of the original image is y = kx + b and that the pointer passes through the rotation center P(x0, y0), i.e. y − y0 = k(x − x0). Since the imaging process is a linear model, the rotation transforms the center coordinates to P′(x0′, y0′); P must be made to coincide with P′ by translation, i.e. by (Δx, Δy) = (x0′ − x0, y0′ − y0), with translation matrix

[[1, 0, Δx], [0, 1, Δy], [0, 0, 1]]

so the coordinates of each point in the image coordinate system are corrected accordingly. Let the pointer rotation angle be α, with slope k = tan α; the slope of the pointer after the transformation is

k′ = k + ε

where ε is the slope correction factor of the pointer line.

Combining δ = y0 − kx0 − b with the slope correction, the linear expression of the pointer after the linear model is: y = (k + ε)x + b + δ.
The detection result is output. Specifically, referring to fig. 10, after image processing a first accurate reading h1 of the instantaneous dial time is obtained; after storage for about 24 hours, the above steps are repeated to obtain a second accurate reading h2 of the instantaneous dial; the difference between the two readings is compared with the standard elapsed time, realizing accurate measurement of the actual daily travel-time deviation of the clock.
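The final travel-time (daily rate) computation from the two readings can be sketched as follows; the function name and the seconds-per-day scaling are illustrative, not from the patent:

```python
def daily_rate(reading1_s, reading2_s, elapsed_standard_s):
    """Daily-rate estimate: compare the clock's elapsed time between two
    dial readings h1, h2 (in seconds) with the standard elapsed time,
    scaled to seconds gained (+) or lost (-) per day."""
    clock_elapsed = reading2_s - reading1_s
    return (clock_elapsed - elapsed_standard_s) * 86400.0 / elapsed_standard_s
```

For example, a clock whose dial advances 86402 s while 86400 s of standard time pass is gaining 2 s per day.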
The control mode of the invention may be manual, by switching the device on and off by hand; the wiring of the electrical components and the supply of power belong to common knowledge in the field, and since the invention is mainly intended to protect mechanical devices, the control mode and wiring arrangement are not explained in detail here.
Alternatively, the control mode is automatic, governed by a controller whose control circuit can be realized by simple programming by a person skilled in the art; the supply of power likewise belongs to common knowledge in the field, and since the invention is mainly intended to protect mechanical devices, the control mode and circuit connections are not explained in detail here.
While there have been shown and described what are at present considered the fundamental principles and essential features of the invention and its advantages, it will be apparent to those skilled in the art that the invention is not limited to the details of the foregoing exemplary embodiments, but is capable of other specific forms without departing from the spirit or essential characteristics thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned.
Furthermore, it should be understood that although the present description refers to embodiments, not every embodiment may contain only a single embodiment, and such description is for clarity only, and those skilled in the art should integrate the description, and the embodiments may be combined as appropriate to form other embodiments understood by those skilled in the art.

Claims (10)

1. The automatic detection method for the clock travel accuracy based on the machine vision is characterized by specifically comprising the following steps of:
image acquisition and image preprocessing;
processing a clock image;
online detection of travel time of the clock;
outputting a detection result;
the clock image processing specifically comprises the following steps:
performing morphological processing on the dial image;
detecting the straight line of a clock pointer;
fitting dial scale marks;
and fitting the rotating center of the pointer.
2. The method for the automated machine vision-based detection of the accuracy of the travel of a timepiece according to claim 1,
the dial plate image morphological processing specifically comprises the following steps:
corroding the dial plate image;
expanding the dial plate image;
opening and closing the dial image;
Dial plate image connected domain mark and region extraction;
and thinning the dial image.
3. The method for the automated machine vision-based detection of the accuracy of the travel of a timepiece according to claim 1,
the clock pointer straight line detection specifically comprises the following steps:
fitting a Hough transform straight line;
and (5) fitting a least square straight line.
4. The method for the automated machine vision-based detection of the accuracy of the travel of a timepiece according to claim 1,
the dial scale mark fitting specifically comprises:
defining moments of the image;
digital features of the image are defined.
5. The method for the automated machine vision-based detection of the accuracy of the travel of a timepiece according to claim 1,
the pointer rotation center fitting specifically comprises:
determining the gravity center position of the dial;
and determining the radius R and the coordinates of the circle center of the dial.
6. The method for the automated machine vision-based detection of the accuracy of the travel of a timepiece according to claim 1,
the online travel time detection method for the clock specifically comprises the following steps:
establishing a camera imaging model;
calibrating a dial plate plane;
reconstructing a dial plane;
identifying the space position dial reading;
and (5) correcting the dial reading identification.
7. The method for the automated machine vision-based detection of the accuracy of the travel of a timepiece according to claim 6,
the dial plate plane calibration method specifically comprises the following steps:
establishing a dial plate plane space model;
calculating dial plate plane parameters;
and outputting the world coordinates of the dial plane.
8. The method for the automated machine vision-based detection of the accuracy of the travel of a timepiece according to claim 6,
the reconstructing of the dial plane specifically comprises the following steps:
solving a dial plane;
extracting a dial characteristic space position;
establishing a local world coordinate system;
and solving the dial features of the local coordinate system.
9. The method for the automated machine vision-based detection of the accuracy of the travel of a timepiece according to claim 6,
and the reading of the dial plate at the identification space position is identified by adopting an angle method or a distance method.
10. The method for the automated machine vision-based detection of the accuracy of the travel of a timepiece according to claim 6,
the correction of dial reading identification specifically comprises the following steps:
correcting the position error of the pointer;
and correcting the angle error of the pointer.
CN202210241879.3A 2022-03-11 2022-03-11 Automatic detection method for clock travel accuracy based on machine vision Pending CN114757880A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210241879.3A CN114757880A (en) 2022-03-11 2022-03-11 Automatic detection method for clock travel accuracy based on machine vision

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210241879.3A CN114757880A (en) 2022-03-11 2022-03-11 Automatic detection method for clock travel accuracy based on machine vision

Publications (1)

Publication Number Publication Date
CN114757880A true CN114757880A (en) 2022-07-15

Family

ID=82328241

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210241879.3A Pending CN114757880A (en) 2022-03-11 2022-03-11 Automatic detection method for clock travel accuracy based on machine vision

Country Status (1)

Country Link
CN (1) CN114757880A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115082666A (en) * 2022-08-23 2022-09-20 山东聊城中泰表业有限公司 Watch time-travelling precision verification method based on image understanding


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination