CN112001379A - Correction algorithm of automobile instrument fixed viewpoint reading instrument based on machine vision - Google Patents
- Publication number
- CN112001379A (application number CN202010425371.XA)
- Authority
- CN
- China
- Prior art keywords
- point
- image
- instrument
- pointer
- coordinate system
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/22—Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01D—MEASURING NOT SPECIALLY ADAPTED FOR A SPECIFIC VARIABLE; ARRANGEMENTS FOR MEASURING TWO OR MORE VARIABLES NOT COVERED IN A SINGLE OTHER SUBCLASS; TARIFF METERING APPARATUS; MEASURING OR TESTING NOT OTHERWISE PROVIDED FOR
- G01D18/00—Testing or calibrating apparatus or arrangements provided for in groups G01D1/00 - G01D15/00
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/80—Geometric correction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/26—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
- G06V10/267—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/28—Quantising the image, e.g. histogram thresholding for discrimination between background and foreground patterns
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/02—Recognising information on displays, dials, clocks
Abstract
The invention discloses a machine-vision-based correction algorithm for reading an automobile instrument from a fixed viewpoint, which comprises the following steps: establish the instrument imaging geometric model; solve for the required focal length f and object distance H; perform binarization segmentation on the instrument image to obtain the graduation lines and the pointer; label the pointer and the graduation lines, calculate the coordinates (x_u, y_u) of the pointer tip point P_u, and from the coordinates of P_u find O_1P_u; calculate the length P_uP_u1 from the distance D between the pointer and the instrument panel; on the line O_1P_u find the point P_u1, which is the corrected pointer tip P_u1(x_u1, y_u1); solve the straight-line equation to obtain the rotation angle of the pointer; and by comparing the pointer rotation angle with the angle of the nearest graduation line, determine the instrument reading. The invention reduces the complexity of the instrument-verification control system and reduces reading errors caused by viewing-angle deviation.
Description
Technical Field
The invention belongs to the technical field of machine vision detection, and relates to a correction algorithm of a fixed viewpoint reading instrument of an automobile instrument based on machine vision.
Background
The automobile instrument displays various vehicle data and feeds back the vehicle's working state, so it plays an important role in safe driving. At present, instrument detection relies on traditional manual observation: a worker visually inspects the alignment between each instrument pointer and its scale and the display state of each indicator lamp to judge whether product quality is acceptable. Manual detection is influenced by subjective factors such as observation angle, observation distance, and eye fatigue, and suffers from low precision, poor reliability, poor repeatability, long detection time, and low efficiency. Therefore, a method is needed that reads the gauge indication without angular vision error and meets the gauge-reading criteria.
Instrument calibration is precision test work, and computer vision can fully automate it for both digital and pointer instruments. Existing automatic gauge-calibration systems, however, require the camera viewpoint to be moved and the coordinate system to be recalibrated every time the pointer deflects in response to an input change. This significantly adds to the complexity of the control system, increases verification time, and mechanical failures may occur over long periods of use.
Disclosure of Invention
The invention aims to provide a machine vision-based correction algorithm for a fixed viewpoint reading instrument of an automobile instrument, which reduces the complexity of a control system for instrument verification and reduces reading errors caused by visual angle deviation.
The invention adopts the technical scheme that a correction algorithm of a fixed viewpoint reading instrument of an automobile instrument based on machine vision specifically comprises the following steps:
step 1, expressing a quantitative relation among a world coordinate system, a camera coordinate system and an image coordinate system by using a pinhole model of a camera, and establishing an instrument imaging geometric model;
step 2, using the radial alignment condition, calibrate the camera with Tsai's two-step method to obtain the required focal length f and object distance H;
step 3, perform binarization segmentation on the instrument image to obtain the graduation lines and the pointer;
step 4, label the pointer and the graduation lines obtained in step 3, calculate the coordinates (x_u, y_u) of the pointer tip point P_u, and from the coordinates of P_u find O_1P_u;
step 5, calculate the length P_uP_u1 according to the distance D between the pointer and the instrument panel;
step 6, using the length P_uP_u1 obtained in step 5, find the point P_u1 on the line O_1P_u; this point is the corrected pointer tip P_u1(x_u1, y_u1); solve the straight-line equation to obtain the rotation angle of the pointer;
step 7, compare the pointer rotation angle obtained in step 6 with the angle of the nearest graduation line to determine the instrument reading.
The present invention is also characterized in that,
the specific process of the step 1 is as follows:
step 1.1, establish the relation between the reference coordinate system and the camera coordinate system from the camera pinhole imaging model, introducing a rotation matrix R and a translation matrix T:
[x, y, z]^T = R · [X_w, Y_w, Z_w]^T + T (1);
where R is a 3×3 rotation matrix, T is a translation matrix, (x, y, z) are coordinates in the camera coordinate system, and (X_w, Y_w, Z_w) are coordinates in the reference coordinate system;
step 1.2, establish the relation between the image coordinate system and the camera coordinate system, specifically:
where (X, Y, Z) are coordinates in the image coordinate system, (X_0, Y_0) are the coordinates of a given point in the image coordinate system, d_x is the size of each pixel along the x-axis, and d_y is the size of each pixel along the y-axis;
step 1.3, let a point P on the dial be P(X_w, Y_w, Z_w) in the reference coordinate system and P(x, y, z) in the camera coordinate system; after imaging by the camera, P corresponds to an ideal image point P_u(x_u, y_u, f) on the image plane; find the relation between the ideal projection point P_u(x_u, y_u, f) and the distorted point P_d(x_d, y_d, f) on the actual imaging plane;
step 1.4, from the relation between the point P_d(x_d, y_d) on the actual imaging plane of the instrument and the point S(u_f, v_f) in computer memory, solve the imaging geometric model of the instrument.
The specific process of step 1.3 is as follows:
step 1.3.1, from P_u(x_u, y_u, f), establish the instrument imaging equations
x_u = f·x / z, y_u = f·y / z (3), (4);
step 1.3.2, determine the relation between the image focal length f and the object distance H:
Z = H + f (5);
where Z is the coordinate of the instrument plane along the optical-axis direction;
step 1.3.3, consider the influence of distortion on the point P_u; the corresponding point on the distorted image is P_d(x_d, y_d, f); the image is input into computer memory by image acquisition; let P_d correspond to S(u_f, v_f, f) in the frame-memory image, and establish the correction matrix;
step 1.3.4, distortion moves the imaging position radially, so the relation between the ideal projection point P_u(x_u, y_u, f) and the distorted point P_d(x_d, y_d, f) is
x_u = x_d(1 + k_1·r²), y_u = y_d(1 + k_1·r²) (7);
where k_1 is the distortion coefficient and r² = x_d² + y_d².
The specific process of step 1.4 is as follows:
Taking the number of pixels as the unit, introduce scale factors d_u and d_v from linear units to pixel units, denoting the distance between the centres of two adjacent pixels in the camera's x and y directions respectively, and let (u_0, v_0) be the computer image coordinates of the centre of the imaging plane; then
x_d = d_u(u_f − u_0), y_d = d_v(v_f − v_0) (8);
combining formulas (1) to (8) yields the correspondence, equation (9), between the P point coordinates (X_w, Y_w, Z_w) and the image point S(u_f, v_f, f) in computer memory, where
r² = d_u²(u_f − u_0)² + d_v²(v_f − v_0)² (10);
equations (9) and (10) are the finally established instrument imaging geometric model.
The specific process of step 2 is as follows:
step 2.1, solve for the coordinates of the point P(x, y, z) in the camera coordinate system and the values of the camera's rotation matrix R and translation matrix T;
specifically: using the spatial coordinates of the template's black dots and the radial alignment condition, establish the following equation:
x_d · y = y_d · x (11);
expanding x and y by combining formula (1) gives equations (12) and (13);
let Z_w = 0, substitute the N actual points (x_d, y_d), and solve equation sets (12) and (13) to obtain the camera's rotation matrix R and translation matrix T and the coordinates of the point P(x, y, z);
step 2.2, substitute the results of step 2.1 into formulas (3), (4), (5) and (7), and solve simultaneously to obtain the distortion coefficient k_1, the focal length f, and the object distance H.
The specific process of the step 3 is as follows:
step 3.1, given an initial threshold T_h = T_h0, search from the beginning and divide the original meter image into two classes C1 and C2;
step 3.2, calculate the intra-class variance and mean of the C1 and C2 images respectively;
where f(x, y) is the acquired image; N_c1 is the probability that a pixel is classified into C1; N_c2 is the probability that a pixel is classified into C2; μ_1 is the mean of the class-C1 image; μ_2 is the mean of the class-C2 image; σ²_1 is the variance of the class-C1 image; σ²_2 is the variance of the class-C2 image;
step 3.3, classify the image: if |f(x, y) − μ_1| ≤ |f(x, y) − μ_2|, then f(x, y) belongs to C1; otherwise f(x, y) belongs to C2;
step 3.4, recalculate the mean and variance of the pixels in C1 and C2 obtained after the reclassification of step 3.3 according to formulas (14) to (17);
step 3.5, if the variance of the current pixel points satisfies the stopping relation, output the calculated threshold T_h(t−1); otherwise, reselect the pixel points and repeat steps 3.4 to 3.5;
step 3.6, classify the image according to the threshold output in step 3.5 to obtain a black-and-white image containing only the graduation lines and the pointer.
The specific process of the step 4 is as follows:
Assume that points with value 0 in the binary image are background and points with value 1 are particles; the algorithm uses eight-neighbour search, as follows:
(1) start the algorithm and set Label = 1;
(2) scan the image from left to right and from top to bottom, searching for a seed point with value 1, and set the seed point's label to Label; if no seed point can be found, the whole labeling algorithm ends;
(3) perform the following operations on pixels around the seed point with the same value:
x-axis direction
(a) scan the image point by point from left to right; if f(x, y) is labeled Label, label the pixels with value 1 among the eight neighbours of f(x, y) as Label;
(b) scan the image point by point from right to left; if f(x, y) is labeled Label, label the pixels with value 1 among the eight neighbours of f(x, y) as Label;
y-axis direction
(c) scan the image point by point from top to bottom; if f(x, y) is labeled Label, label the pixels with value 1 among the eight neighbours of f(x, y) as Label;
(d) scan the image point by point from bottom to top; if f(x, y) is labeled Label, label the pixels with value 1 among the eight neighbours of f(x, y) as Label;
(4) after the scans in the four directions, take out all the particles labeled Label, assign Label = Label + 1, and repeat step (2) until all the particles are labeled;
(5) output the coordinates of the farthest particle point as the coordinates of point P_u;
from the coordinates of point P_u, O_1P_u is obtained as
O_1P_u = √(x_u² + y_u²).
the specific process of step 5 is as follows:
Suppose P_1 is the perpendicular projection of P_2 onto the dial plane, corresponding to the point P_u1 on the image plane; the meter-reading criterion, however, requires that P_1 and P_2 be the same point on the image plane;
from the geometric relations, the triangles O_cPP_1 and OP_uP_u1, as well as O_cP_uO_1 and P_2PP_1, are similar; hence the image point P_u1 can be located and the distance P_uP_u1 found.
The specific implementation process is as follows:
the specific process of step 6 is as follows:
step 6.1, on the line O_1P_u, find the point at distance P_uP_u1 from the point P_u; this point is the corrected pointer tip P_u1(x_u1, y_u1);
The specific process of step 7 is as follows:
step 7.1, draw the corrected position of the instrument pointer on the background of the original image; with the viewpoint fixed, the reconstructed image forms a new image for accurate reading;
step 7.2, compare the pointer rotation angle obtained in step 6 with the angle of the nearest graduation line, and determine the instrument reading.
The beneficial effects of the invention are as follows: the invention provides a machine-vision-based correction algorithm for reading an automobile instrument from a fixed viewpoint; it uses the radial alignment condition and a two-step method to calibrate the camera, determines the pointer tip through a corrected labeling algorithm, reconstructs the image for comparison with the original image, and thereby solves the problem of accurately reading the instrument indication. The method reduces the complexity of the instrument-verification control system, reduces reading errors caused by viewing-angle deviation, and has wide application in reading instrument indications.
Drawings
FIG. 1 is an instrument imaging geometric model diagram established in a correction algorithm of an automobile instrument fixed viewpoint reading instrument based on machine vision;
FIG. 2 is a parameter correction network template established in a correction algorithm of an automobile instrument fixed viewpoint reading instrument based on machine vision;
FIG. 3 is a geometric correction model of instrument reading in the correction algorithm of the automobile instrument fixed viewpoint reading instrument based on machine vision.
Detailed Description
The present invention will be described in detail below with reference to the accompanying drawings and specific embodiments.
The invention relates to a machine vision-based correction algorithm for a fixed viewpoint reading instrument of an automobile instrument, which specifically comprises the following steps:
step 1, expressing a quantitative relation among a world coordinate system, a camera coordinate system and an image coordinate system by using a pinhole model of a camera, and establishing an instrument imaging geometric model;
the specific process of the step 1 is as follows:
step 1.1, establish the relation between the reference coordinate system and the camera coordinate system from the camera pinhole imaging model, introducing a rotation matrix R and a translation matrix T:
[x, y, z]^T = R · [X_w, Y_w, Z_w]^T + T (1);
where R is a 3×3 rotation matrix, T is a translation matrix, (x, y, z) are coordinates in the camera coordinate system, and (X_w, Y_w, Z_w) are coordinates in the reference coordinate system;
step 1.2, establish the relation between the image coordinate system and the camera coordinate system, specifically:
where (X, Y, Z) are coordinates in the image coordinate system, (X_0, Y_0) are the coordinates of a given point in the image coordinate system, d_x is the size of each pixel along the x-axis, and d_y is the size of each pixel along the y-axis;
the instrument imaging geometry model of fig. 1 is built using the relationship between the world coordinate system, the camera coordinate system and the image coordinate system.
The world coordinate system (O_w, X_w, Y_w, Z_w) serves as the reference coordinate system, and the coordinates of its origin O_w are fixed. The rectangular coordinate system formed by the point O_c and the axes X_c, Y_c, Z_c is the camera coordinate system; the point O_c is the optical centre of the camera, and the X_c–Y_c plane is parallel to the CCD imaging plane. The Z_c axis is the optical axis, perpendicular to the image plane of the CCD camera; its intersection point O_1 with the image plane is the origin of the image-plane coordinate system, and O_cO_1 is the focal length f of the camera. The plane of the instrument dial is parallel to the image plane; the intersection of this plane with the optical axis is O', and the distance between the two planes is the object distance H.
Step 1.3, let a point P on the dial be P(X_w, Y_w, Z_w) in the reference coordinate system and P(x, y, z) in the camera coordinate system; after imaging by the camera, P corresponds to an ideal image point P_u(x_u, y_u, f) on the image plane; find the relation between the ideal projection point P_u(x_u, y_u, f) and the distorted point P_d(x_d, y_d, f) on the actual imaging plane.
According to the positions of the camera and the instrument dial, three coordinate systems are established from top to bottom: the world coordinate system, the image coordinate system, and the camera coordinate system. A point P on the dial is P(X_w, Y_w, Z_w) in the reference coordinate system and P(x, y, z) in the camera coordinate system; after imaging by the camera, it corresponds to the ideal image point P_u(x_u, y_u, f) on the image plane.
The specific process of step 1.3 is as follows:
step 1.3.1, from P_u(x_u, y_u, f), establish the instrument imaging equations
x_u = f·x / z, y_u = f·y / z (3), (4);
step 1.3.2, determine the relation between the image focal length f and the object distance H:
Z = H + f (5);
where Z is the coordinate of the instrument plane along the optical-axis direction;
step 1.3.3, consider the influence of distortion on the point P_u; the corresponding point on the distorted image is P_d(x_d, y_d, f); the image is input into computer memory by image acquisition; let P_d correspond to S(u_f, v_f, f) in the frame-memory image, and establish the correction matrix;
step 1.3.4, distortion moves the imaging position radially, so the relation between the ideal projection point P_u(x_u, y_u, f) and the distorted point P_d(x_d, y_d, f) is
x_u = x_d(1 + k_1·r²), y_u = y_d(1 + k_1·r²) (7);
where k_1 is the distortion coefficient and r² = x_d² + y_d².
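The radial-distortion relation of step 1.3.4 can be sketched as a small Python function; the coordinates and the value of k_1 below are illustrative, not calibration results from the patent.

```python
def undistort(xd, yd, k1):
    """Map a distorted image point (xd, yd) to the ideal point (xu, yu)
    via the radial model xu = xd*(1 + k1*r^2), yu = yd*(1 + k1*r^2),
    with r^2 = xd^2 + yd^2 (equation (7))."""
    r2 = xd * xd + yd * yd
    s = 1.0 + k1 * r2
    return xd * s, yd * s

# With k1 = 0 (no distortion) the point is unchanged;
# a positive k1 moves points radially outward.
print(undistort(1.5, -2.0, 0.0))
print(undistort(1.0, 0.0, 0.01))
```

Recovering (x_d, y_d) from (x_u, y_u) would require inverting this polynomial, which calibration pipelines typically do numerically.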
Step 1.4, from the relation between the point P_d(x_d, y_d) on the actual imaging plane of the instrument and the point S(u_f, v_f) in computer memory, solve the imaging geometric model of the instrument.
The specific process of step 1.4 is as follows:
Taking the number of pixels as the unit, introduce scale factors d_u and d_v from linear units to pixel units, denoting the distance between the centres of two adjacent pixels in the camera's x and y directions respectively, and let (u_0, v_0) be the computer image coordinates of the centre of the imaging plane; then
x_d = d_u(u_f − u_0), y_d = d_v(v_f − v_0) (8);
combining formulas (1) to (8) yields the correspondence, equation (9), between the P point coordinates (X_w, Y_w, Z_w) and the image point S(u_f, v_f, f) in computer memory, where
r² = d_u²(u_f − u_0)² + d_v²(v_f − v_0)² (10);
equations (9) and (10) are the finally established instrument imaging geometric model.
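A minimal end-to-end sketch of the imaging chain (1)–(8), under simplifying assumptions: the camera frame coincides with the world frame (R = I, T = 0) and distortion is ignored; the values of f, H, d_u, d_v, u_0 and v_0 are illustrative.

```python
def world_to_pixel(P, f, du, dv, u0, v0):
    """Project a dial point P = (x, y, z) (camera frame assumed equal to
    the world frame) to frame-memory pixel coordinates (uf, vf):
    pinhole step xu = f*x/z, yu = f*y/z (equations (3)-(4)), then the
    linear-unit -> pixel-unit step (equation (8)) with pixel pitch du, dv."""
    x, y, z = P
    xu = f * x / z
    yu = f * y / z
    uf = xu / du + u0
    vf = yu / dv + v0
    return uf, vf

# A point on the optical axis (x = y = 0) maps to the image centre (u0, v0);
# here z = H + f with H = 100 mm and f = 8 mm (equation (5)).
print(world_to_pixel((0.0, 0.0, 108.0), f=8.0, du=0.01, dv=0.01, u0=320.0, v0=240.0))
```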
Step 2, by utilizing a radial balance condition, a two-step method proposed by tsai is adopted for calibrating the camera to obtain a required focal length f and an object distance H;
Obtain the image centre (u_0, v_0) from the parameters of the CCD camera; then, from the corresponding coordinates, solve the camera's intrinsic and extrinsic parameters using prior knowledge of the network template and its imaging data, and thus obtain the required focal length f and object distance H.
In the actual reference template, N black dots uniformly distributed on a white background are selected, as shown in FIG. 2; after projection onto the image plane, the N points obtained through image processing represent the imaging coordinate relationship. The spatial coordinates of the template's black dots are used, together with the radial alignment condition (the line O_1P_d is parallel to O'P, i.e., the cross product of the corresponding vectors is zero);
step 2.1, solve for the coordinates of the point P(x, y, z) in the camera coordinate system and the values of the camera's rotation matrix R and translation matrix T;
the following equation is established:
x_d · y = y_d · x (11);
expanding x and y by combining formula (1) gives equations (12) and (13);
let Z_w = 0, substitute the N actual points (x_d, y_d), and solve equation sets (12) and (13) to obtain the camera's rotation matrix R and translation matrix T and the coordinates of the point P(x, y, z), where N ≥ 6;
step 2.2, substitute the results of step 2.1 into formulas (3), (4), (5) and (7), and solve simultaneously to obtain the distortion coefficient k_1, the focal length f, and the object distance H.
Step 3, perform binarization segmentation on the instrument image to obtain the graduation lines and the pointer;
the specific process of the step 3 is as follows:
step 3.1, given an initial threshold T_h = T_h0, search from the beginning and divide the original meter image into two classes C1 and C2;
step 3.2, calculate the intra-class variance and mean of the C1 and C2 images respectively;
where f(x, y) is the acquired image; N_c1 is the probability that a pixel is classified into C1; N_c2 is the probability that a pixel is classified into C2; μ_1 is the mean of the class-C1 image; μ_2 is the mean of the class-C2 image; σ²_1 is the variance of the class-C1 image; σ²_2 is the variance of the class-C2 image;
in the formulas, N_image is the number of pixels in the image; p_1 is the distribution probability of class-C1 pixels in the image; p_2 is the distribution probability of class-C2 pixels in the image.
step 3.3, classify the image: if |f(x, y) − μ_1| ≤ |f(x, y) − μ_2|, then f(x, y) belongs to C1; otherwise f(x, y) belongs to C2;
step 3.4, recalculate the mean and variance of the pixels in C1 and C2 obtained after the reclassification of step 3.3 according to formulas (14) to (17);
step 3.5, if the variance of the current pixel points satisfies the stopping relation, output the calculated threshold T_h(t−1); otherwise, reselect the pixel points and repeat steps 3.4 to 3.5;
step 3.6, classify the image according to the threshold output in step 3.5 to obtain a black-and-white image containing only the graduation lines and the pointer.
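Steps 3.1–3.6 describe an iterative two-class thresholding. The sketch below follows that scheme but, as a simplification, stops when the threshold itself stabilises rather than evaluating the patent's variance criterion; the pixel values are illustrative.

```python
def iterative_threshold(pixels, t0, eps=0.5):
    """Iteratively split pixels into classes C1 (<= t) and C2 (> t),
    update t to the midpoint of the class means, and stop when t settles."""
    t = float(t0)
    while True:
        c1 = [p for p in pixels if p <= t]
        c2 = [p for p in pixels if p > t]
        if not c1 or not c2:          # degenerate split: keep current t
            return t
        mu1 = sum(c1) / len(c1)       # class means (cf. steps 3.2-3.4)
        mu2 = sum(c2) / len(c2)
        t_new = (mu1 + mu2) / 2.0
        if abs(t_new - t) < eps:      # stand-in for the variance test of 3.5
            return t_new
        t = t_new

# Dark pointer/graduation pixels against a bright dial background.
img = [10, 12, 11, 13, 200, 205, 198, 202]
th = iterative_threshold(img, t0=128)
binary = [1 if p <= th else 0 for p in img]   # 1 = graduation line / pointer
print(round(th, 3), binary)
```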
Step 4, label the pointer and the graduation lines obtained in step 3, calculate the coordinates (x_u, y_u) of the pointer tip point P_u, and from the coordinates of P_u find O_1P_u;
As shown in FIG. 3, the world coordinate system (O_w, X_w, Y_w, Z_w) serves as the reference coordinate system, and the coordinates of its origin O_w are fixed. The rectangular coordinate system formed by the point O_c and the axes X_c, Y_c, Z_c is the camera coordinate system; the point O_c is the optical centre of the camera, and the X_c–Y_c plane is parallel to the CCD imaging plane. The Z_c axis is the optical axis, perpendicular to the image plane of the CCD camera; its intersection point O_1 with the image plane is the origin of the image-plane coordinate system, and O_cO_1 is the focal length f of the camera. The plane of the instrument dial is parallel to the image plane; the intersection of this plane with the optical axis is O', and the distance between the two planes is the object distance H. Let the distance between the pointer and the dial be D; O_1XY is the imaging plane, P_2 is the tip position of the pointer, and P_u is its projection on the image plane.
The specific process of the step 4 is as follows:
Assume that points with value 0 in the binary image are background and points with value 1 are particles; the algorithm uses eight-neighbour search, as follows:
(1) start the algorithm and set Label = 1;
(2) scan the image from left to right and from top to bottom, searching for a seed point with value 1, and set the seed point's label to Label; if no seed point can be found, the whole labeling algorithm ends;
(3) perform the following operations on pixels around the seed point with the same value:
x-axis direction
(a) scan the image point by point from left to right; if f(x, y) is labeled Label, label the pixels with value 1 among the eight neighbours of f(x, y) as Label;
(b) scan the image point by point from right to left; if f(x, y) is labeled Label, label the pixels with value 1 among the eight neighbours of f(x, y) as Label;
y-axis direction
(c) scan the image point by point from top to bottom; if f(x, y) is labeled Label, label the pixels with value 1 among the eight neighbours of f(x, y) as Label;
(d) scan the image point by point from bottom to top; if f(x, y) is labeled Label, label the pixels with value 1 among the eight neighbours of f(x, y) as Label;
(4) after the scans in the four directions, take out all the particles labeled Label, assign Label = Label + 1, and repeat step (2) until all the particles are labeled;
(5) output the coordinates of the farthest particle point as the coordinates of point P_u;
from the coordinates of point P_u, O_1P_u is obtained as
O_1P_u = √(x_u² + y_u²).
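The labeling step above can be sketched compactly. The version below replaces the four directional scans with a queue-based flood fill, which yields the same 8-connected particles, and then reports the particle pixel farthest from the image origin O_1 as P_u; the toy image is illustrative.

```python
from collections import deque
import math

def label_particles(img):
    """8-connected component labeling of a binary image (1 = particle)."""
    h, w = len(img), len(img[0])
    labels = [[0] * w for _ in range(h)]
    label = 0
    for sy in range(h):
        for sx in range(w):
            if img[sy][sx] == 1 and labels[sy][sx] == 0:
                label += 1                      # new seed point
                q = deque([(sy, sx)])
                labels[sy][sx] = label
                while q:                        # flood fill over 8-neighbours
                    y, x = q.popleft()
                    for dy in (-1, 0, 1):
                        for dx in (-1, 0, 1):
                            ny, nx = y + dy, x + dx
                            if 0 <= ny < h and 0 <= nx < w \
                               and img[ny][nx] == 1 and labels[ny][nx] == 0:
                                labels[ny][nx] = label
                                q.append((ny, nx))
    return labels, label

def farthest_particle_point(img, origin):
    """Coordinates of the particle pixel farthest from origin -> Pu."""
    oy, ox = origin
    pts = [(y, x) for y, row in enumerate(img)
           for x, v in enumerate(row) if v == 1]
    return max(pts, key=lambda p: math.hypot(p[0] - oy, p[1] - ox))

img = [[0, 0, 0, 0],
       [0, 1, 1, 0],
       [0, 0, 1, 1],
       [0, 0, 0, 0]]
labels, n = label_particles(img)
print(n, farthest_particle_point(img, (0, 0)))
```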
Step 5, calculate the length P_uP_u1 according to the distance D between the pointer and the instrument panel;
The specific process of step 5 is as follows:
Suppose P_1 is the perpendicular projection of P_2 onto the dial plane, corresponding to the point P_u1 on the image plane (at imaging time P_u1 does not exist); the meter-reading criterion, however, requires that P_1 and P_2 be the same point on the image plane;
from the geometric relations, the triangles O_cPP_1 and OP_uP_u1, as well as O_cP_uO_1 and P_2PP_1, are similar; hence the image point P_u1 can be located and the distance P_uP_u1 found.
The specific implementation process is as follows:
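Reading the geometry of FIG. 3 with the dial plane at Z = H + f and the pointer tip P_2 a distance D in front of it, similar triangles give P_u1 = P_u·(H + f − D)/(H + f) and hence P_uP_u1 = O_1P_u·D/(H + f). A sketch under that interpretation (the numbers are illustrative):

```python
import math

def correct_tip(xu, yu, D, H, f):
    """Slide the imaged tip Pu along the line O1Pu to the corrected tip Pu1
    (steps 5 and 6.1); also return the shift length PuPu1."""
    s = (H + f - D) / (H + f)                  # similar-triangle scale factor
    shift = math.hypot(xu, yu) * D / (H + f)   # |PuPu1| = |O1Pu| * D / (H + f)
    return (xu * s, yu * s), shift

# With D = 0 the pointer lies in the dial plane and no correction is needed.
print(correct_tip(3.0, 4.0, D=0.0, H=100.0, f=8.0))
print(correct_tip(3.0, 4.0, D=10.8, H=100.0, f=8.0))
```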
Step 6, using the length P_uP_u1 obtained in step 5, find the point P_u1 on the line O_1P_u; this point is the corrected pointer tip P_u1(x_u1, y_u1); solve the straight-line equation and obtain the rotation angle of the pointer;
the specific process of step 6 is as follows:
step 6.1, on the line O_1P_u, find the point at distance P_uP_u1 from the point P_u; this point is the corrected pointer tip P_u1(x_u1, y_u1);
Step 7, compare the pointer rotation angle obtained in step 6 with the angle of the nearest graduation line to determine the instrument reading.
The specific process of step 7 is as follows:
step 7.1, draw the corrected position of the instrument pointer on the background of the original image; with the viewpoint fixed, the reconstructed image forms a new image for accurate reading;
step 7.2, compare the pointer rotation angle obtained in step 6 with the angle of the nearest graduation line, and determine the instrument reading.
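Step 7's angle comparison can be sketched as follows. The dial layout (graduation angles and values) is invented for illustration, and linear interpolation between adjacent graduations is an added assumption, not stated in the patent.

```python
import math

def read_gauge(tip, center, graduations):
    """Convert the corrected tip position into a reading by comparing the
    pointer angle with the nearest graduation-line angles.
    graduations: list of (angle_degrees, value) sorted by angle; the pointer
    angle is assumed to lie within the graduation range."""
    ang = math.degrees(math.atan2(tip[1] - center[1], tip[0] - center[0]))
    lo = max((g for g in graduations if g[0] <= ang), key=lambda g: g[0])
    hi = min((g for g in graduations if g[0] >= ang), key=lambda g: g[0])
    if lo[0] == hi[0]:                 # pointer exactly on a graduation line
        return lo[1]
    frac = (ang - lo[0]) / (hi[0] - lo[0])
    return lo[1] + frac * (hi[1] - lo[1])

# Illustrative dial: value 0 at 0 degrees, 60 at 60 degrees, 120 at 120 degrees.
grads = [(0.0, 0.0), (60.0, 60.0), (120.0, 120.0)]
print(read_gauge((1.0, 1.0), (0.0, 0.0), grads))   # pointer at 45 degrees
```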
Claims (10)
1. A correction algorithm for a fixed viewpoint reading instrument of an automobile instrument based on machine vision is characterized in that: the method specifically comprises the following steps:
step 1, expressing a quantitative relation among a world coordinate system, a camera coordinate system and an image coordinate system by using a pinhole model of a camera, and establishing an instrument imaging geometric model;
step 2, utilizing the radial alignment constraint, the two-step method proposed by Tsai is adopted to calibrate the camera and obtain the required focal length f and object distance H;
step 3, performing binarization segmentation on the instrument image to obtain a dividing line and a pointer;
step 4, labeling the pointer and the graduation lines obtained in step 3, calculating the coordinates (xu, yu) of the pointer tip point Pu, and, according to the coordinates of point Pu, finding O1Pu;
step 5, calculating the length PuPu1 according to the distance D between the pointer and the instrument panel;
step 6, according to PuPu1 obtained in step 5, finding point Pu1 on the line O1Pu, namely the corrected pointer tip Pu1(xu1, yu1), solving the straight-line equation and obtaining the rotation angle of the pointer;
and 7, comparing the rotation angle of the pointer obtained in step 6 with the angle of the nearest graduation line, so that the meter reading can be determined.
2. The correction algorithm for the fixed viewpoint reading instrument of the automobile instrument based on the machine vision as claimed in claim 1, is characterized in that: the specific process of the step 1 is as follows:
step 1.1, establishing a relation between a reference coordinate system and a camera coordinate system by a camera pinhole imaging model by introducing a rotation matrix R and a translation matrix T:
wherein R is a 3×3 rotation matrix, T is a translation matrix, (x, y, z) are coordinates in the camera coordinate system, and (xw, yw, zw) are coordinates in the reference coordinate system;
step 1.2, establishing a relation between an image coordinate system and a camera coordinate system, which comprises the following specific steps:
wherein (X, Y, Z) are coordinates in the image coordinate system, (X0, Y0) are the coordinates of a given point in the image coordinate system, dx is the size of each pixel along the x-axis, and dy is the size of each pixel along the y-axis;
step 1.3, setting a point P on the dial as P(Xw, Yw, Zw) in the reference coordinate system and P(x, y, z) in the camera coordinate system; after imaging by the camera, point P corresponds to an ideal image point Pu(xu, yu, f) on the image plane; finding the relationship between the ideal projection point Pu(xu, yu, f) and the distorted point Pd(xd, yd, f) on the actual imaging plane;
step 1.4, according to the relationship between point Pd(xd, yd) on the actual imaging plane of the instrument and point S(uf, vf) in computer memory, solving the imaging geometric model of the instrument.
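The coordinate chain of steps 1.1 to 1.3 (reference system to camera system via R and T, then central projection onto the image plane at Z = f) can be sketched as follows; this is a minimal numpy illustration, and the camera pose and focal length values below are assumed for demonstration, not taken from the patent:

```python
import numpy as np

def project_dial_point(p_w, R, T, f):
    """Steps 1.1-1.3: transform a dial point from the reference (world)
    coordinate system into the camera coordinate system with the rotation
    matrix R and translation matrix T, then project it centrally onto the
    image plane at Z = f to obtain the ideal image point Pu(xu, yu)."""
    x, y, z = R @ np.asarray(p_w, dtype=float) + np.asarray(T, dtype=float)
    return f * x / z, f * y / z

# Illustrative pose: optical axis perpendicular to the dial, dial plane
# 100 length-units in front of the camera (assumed values).
R = np.eye(3)
T = (0.0, 0.0, 100.0)
xu, yu = project_dial_point((10.0, 5.0, 0.0), R, T, f=8.0)
```

With the axis perpendicular to the dial the projection is a pure scaling by f/z, which is why the later parallax correction can work along the radial line O1Pu.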
3. The correction algorithm of the fixed viewpoint reading instrument of the automobile instrument based on the machine vision as claimed in claim 2, characterized in that: the specific process of the step 1.3 is as follows:
step 1.3.1, establishing the instrument imaging equation according to Pu(xu, yu, f):
step 1.3.2, determining the relation between the focal length f and the object distance H;
Z=H+f (5);
wherein Z is the coordinate of the instrument plane in the optical axis direction;
step 1.3.3, considering the influence of distortion on point Pu: the corresponding point on the distorted image is Pd(xd, yd, f), and the image is input into computer memory by image acquisition; setting Pd to correspond to S(uf, vf, f) in the frame-memory image, the correction matrix is established, specifically as follows:
step 1.3.4, since distortion shifts the position of the imaging point radially, the relationship between the ideal projection point Pu(xu, yu, f) and the distorted point Pd(xd, yd, f) is:
wherein k1 is the distortion coefficient, and r^2 = xd^2 + yd^2.
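The single-coefficient radial model of step 1.3.4 can be written out directly. The sketch below assumes the common form xu = xd(1 + k1·r^2), yu = yd(1 + k1·r^2) with r^2 = xd^2 + yd^2, which matches the symbols of the claim; the k1 value used is illustrative:

```python
def undistort_point(xd, yd, k1):
    """Step 1.3.4: relate the distorted point Pd(xd, yd) on the actual
    imaging plane to the ideal projection point Pu(xu, yu), assuming the
    single-coefficient radial form
        xu = xd * (1 + k1 * r^2),  yu = yd * (1 + k1 * r^2),
    with r^2 = xd^2 + yd^2 as in the claim."""
    r2 = xd * xd + yd * yd
    scale = 1.0 + k1 * r2
    return xd * scale, yd * scale

xu, yu = undistort_point(1.0, 2.0, k1=0.01)
```

Because the correction is purely radial, the direction of the point from the image centre is preserved, which is the property the radial alignment constraint of step 2 exploits.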
4. The correction algorithm for the fixed viewpoint reading instrument of the automobile instrument based on the machine vision as claimed in claim 3, characterized in that: the specific process of the step 1.4 is as follows:
taking the number of pixels as the unit, scale factors su and sv from linear units to pixel units are introduced, denoting respectively the distance between the centers of two adjacent pixels in the x and y directions of the camera, and (u0, v0) denotes the computer image coordinates corresponding to the center of the imaging plane;
combining formulas (1) to (8), the correspondence between the coordinates (Xw, Yw, Zw) of point P and the image S(uf, vf) in computer memory can be expressed, wherein
r^2 = su^2 (uf - u0)^2 + sv^2 (vf - v0)^2 (10);
this correspondence is the finally established instrument imaging geometric model.
5. The correction algorithm for the fixed viewpoint reading instrument of the automobile instrument based on the machine vision as claimed in claim 4, is characterized in that: the specific process of the step 2 is as follows:
step 2.1, solving the coordinates of point P(x, y, z) in the camera coordinate system and the values of the rotation matrix R and the translation matrix T of the camera;
specifically, the following equations are established using the spatial coordinates of the black dots of the calibration template and the radial alignment constraint:
expanding x and y by combining formula (1), the following formulas (12) and (13) are obtained:
letting Zw = 0 and substituting the N actual points (xd, yd), the equation sets (12) and (13) are solved to obtain the values of the rotation matrix R and the translation matrix T of the camera and the coordinate values of point P(x, y, z);
step 2.2, substituting the results obtained in step 2.1 into formulas (3), (4), (5) and (7), and solving them simultaneously to obtain the distortion coefficient k1, the focal length f and the object distance H.
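Step 2.1 is linear once Zw = 0: each calibration dot contributes one equation in the five unknowns r1/Ty, r2/Ty, Tx/Ty, r4/Ty, r5/Ty. The sketch below is a simplified reading of the first stage of Tsai's two-step method (function and variable names are illustrative):

```python
import numpy as np

def solve_rac(world_pts, image_pts):
    """First stage of Tsai's two-step calibration (step 2.1): with the
    calibration dots on the plane Zw = 0, the radial alignment constraint
        xd * (r4*xw + r5*yw + Ty) = yd * (r1*xw + r2*yw + Tx)
    is linear in the five ratios (r1/Ty, r2/Ty, Tx/Ty, r4/Ty, r5/Ty).
    world_pts: N dial-plane points (xw, yw); image_pts: N matching
    (possibly radially distorted) image points (xd, yd); N >= 5."""
    A, b = [], []
    for (xw, yw), (xd, yd) in zip(world_pts, image_pts):
        # yd*xw*(r1/Ty) + yd*yw*(r2/Ty) + yd*(Tx/Ty)
        #   - xd*xw*(r4/Ty) - xd*yw*(r5/Ty) = xd
        A.append([yd * xw, yd * yw, yd, -xd * xw, -xd * yw])
        b.append(xd)
    sol, *_ = np.linalg.lstsq(np.asarray(A), np.asarray(b), rcond=None)
    return sol  # [r1/Ty, r2/Ty, Tx/Ty, r4/Ty, r5/Ty]
```

Because radial distortion does not change a point's direction from the image centre, the constraint holds for distorted points too, which is why k1, f and H can be deferred to step 2.2.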
6. The correction algorithm for the fixed viewpoint reading instrument of the automobile instrument based on the machine vision as claimed in claim 5, is characterized in that: the specific process of the step 3 is as follows:
step 3.1, given an initial threshold Th = Th0 as the starting point of the search, dividing the original meter image into two classes C1 and C2;
step 3.2, calculating the intra-class variance and mean of the C1 and C2 images respectively;
wherein f(x, y) is the acquired image; Nc1 is the probability that a pixel is classified into C1; Nc2 is the probability that a pixel is classified into C2; μ1 is the mean of the C1 class; μ2 is the mean of the C2 class; σ1^2 is the variance of the C1 class; σ2^2 is the variance of the C2 class;
step 3.3, classifying the image: if |f(x, y) − μ1| ≤ |f(x, y) − μ2|, then f(x, y) belongs to C1; otherwise f(x, y) belongs to C2;
step 3.4, recalculating the mean value and the variance of the pixels in C1 and C2 obtained after reclassification in the step 3.3 according to formulas (14) to (17);
and 3.5, if the variance values of the current pixel classes satisfy the following relation:
the calculated threshold Th(t−1) is output; otherwise, the pixel points are reselected and steps 3.4 to 3.5 are repeated;
and 3.6, classifying the images according to the threshold output in the step 3.5 to obtain black and white images only with the graduation lines and the pointers.
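Steps 3.1 to 3.5 amount to an iterative two-class threshold search: reassigning each pixel to the class with the nearer mean is the same as thresholding at the midpoint of the two means. A compact sketch (the convergence tolerance, the midpoint update and the bright-class polarity are implementation choices, not specified by the claim):

```python
import numpy as np

def iterative_threshold(img, t0, tol=1e-3):
    """Iterative two-class segmentation of steps 3.1-3.5: split at the
    current threshold, recompute the class means mu1/mu2, reassign each
    pixel to the class with the nearer mean (equivalent to thresholding
    at the midpoint of the means), and stop once the threshold settles."""
    img = np.asarray(img, dtype=float).ravel()
    t = float(t0)
    while True:
        c1, c2 = img[img <= t], img[img > t]
        if c1.size == 0 or c2.size == 0:
            raise ValueError("initial threshold must split the image")
        t_new = 0.5 * (c1.mean() + c2.mean())  # boundary of |f-mu1| <= |f-mu2|
        if abs(t_new - t) < tol:
            return t_new
        t = t_new

def binarize(img, t):
    """Step 3.6: value 1 for the bright class (here assumed to be the
    graduation lines and pointer), 0 for the background."""
    return (np.asarray(img) > t).astype(np.uint8)
```

On a well-separated bimodal dial image the loop converges in a few iterations regardless of the exact Th0, as long as Th0 lies between the two modes.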
7. The correction algorithm for the fixed viewpoint reading instrument of the automobile instrument based on the machine vision as claimed in claim 6, is characterized in that: the specific process of the step 4 is as follows:
assuming that points with value 0 in the binary image are background and points with value 1 are particles, the labeling algorithm adopts an eight-neighbor search, as follows:
(1) starting the algorithm, and making the Label 1;
(2) scanning the image from left to right and from top to bottom, searching for a seed point with the value of 1, setting a seed point Label as a Label, and if the seed point cannot be found, ending the whole labeling algorithm;
(3) the following operations are performed for pixels with the same value around the seed point:
x-axis direction:
(a) scanning the image point by point from left to right; if f(x, y) is labeled Label, the pixels with value 1 among the eight neighbors of f(x, y) are labeled Label;
(b) scanning the image point by point from right to left; if f(x, y) is labeled Label, the pixels with value 1 among the eight neighbors of f(x, y) are labeled Label;
y-axis direction:
(c) scanning the image point by point from top to bottom; if f(x, y) is labeled Label, the pixels with value 1 among the eight neighbors of f(x, y) are labeled Label;
(d) scanning the image point by point from bottom to top; if f(x, y) is labeled Label, the pixels with value 1 among the eight neighbors of f(x, y) are labeled Label;
(4) after scanning in the four directions, all particles labeled Label are taken out; a new value is then assigned to Label and step (2) is repeated until all particles are labeled;
(5) outputting the coordinates of the particle with the farthest distance as the coordinates of point Pu;
according to the coordinates of point Pu, O1Pu can be obtained as:
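The four-direction scan of step 4 can equivalently be implemented as an eight-neighbour flood fill, which grows each particle from its seed in one pass. The sketch below labels the particles and then takes the particle pixel farthest from the dial centre O1 as the tip Pu (function and variable names are illustrative):

```python
import numpy as np
from collections import deque

def label_particles(binary):
    """Eight-neighbour labelling of step 4 via flood fill: 0 = background,
    1 = particle.  Returns a label image (0 background, 1..n particles)."""
    img = np.asarray(binary)
    labels = np.zeros(img.shape, dtype=int)
    h, w = img.shape
    label = 0
    for y0 in range(h):
        for x0 in range(w):
            if img[y0, x0] == 1 and labels[y0, x0] == 0:
                label += 1                      # step (2): new seed found
                q = deque([(y0, x0)])
                labels[y0, x0] = label
                while q:                        # step (3): grow over 8-neighbours
                    y, x = q.popleft()
                    for dy in (-1, 0, 1):
                        for dx in (-1, 0, 1):
                            ny, nx = y + dy, x + dx
                            if 0 <= ny < h and 0 <= nx < w and \
                               img[ny, nx] == 1 and labels[ny, nx] == 0:
                                labels[ny, nx] = label
                                q.append((ny, nx))
    return labels

def pointer_tip(labels, pointer_label, center):
    """Step (5): the particle pixel farthest from the dial centre O1 is
    taken as the pointer tip Pu; returns (x, y)."""
    ys, xs = np.nonzero(labels == pointer_label)
    d2 = (xs - center[0]) ** 2 + (ys - center[1]) ** 2
    i = int(np.argmax(d2))
    return int(xs[i]), int(ys[i])
```

The length O1Pu then follows directly as the Euclidean distance between the returned tip and the centre.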
8. the correction algorithm for the fixed viewpoint reading instrument of the automobile instrument based on the machine vision as claimed in claim 7 is characterized in that: the specific process of the step 5 is as follows:
suppose P1 is the perpendicular projection of P2 on the scale plane, corresponding to point Pu1 on the image plane, but the reading criterion requires that point P1 and point P2 should be the same point on the image plane;
from the geometric relationship, it can be obtained that triangles OcPP1 and OcPuPu1, as well as OcPuO1 and P2PP1, are similar triangles, so point Pu1 of the image can be found and the distance PuPu1 obtained.
The specific implementation process is as follows:
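The similar-triangle relationship of claim 8 can be made concrete under one simplifying assumption (optical axis perpendicular to the dial, scale plane at depth Z along the axis, pointer floating a height D above it): the tip at depth Z − D images farther from the centre than its projection onto the scale plane would, and |PuPu1| = |O1Pu| · D / Z. A sketch under that assumption only; the exact figure of the patent is not reproduced here:

```python
import numpy as np

def correct_tip(pu, o1, D, Z):
    """Parallax correction of steps 5-6 under the stated assumption
    (optical axis perpendicular to the dial): the floating pointer tip
    images at Pu, and its perpendicular projection onto the scale plane
    images at Pu1 on the line O1Pu, with |PuPu1| = |O1Pu| * D / Z.
    pu: image of the tip; o1: image centre; D: pointer height above the
    scale plane; Z: depth of the scale plane along the optical axis."""
    pu, o1 = np.asarray(pu, float), np.asarray(o1, float)
    pu1 = o1 + (pu - o1) * (Z - D) / Z  # corrected tip Pu1 on line O1Pu
    return tuple(pu1)
```

The corrected tip is pulled toward the image centre O1 along the radial line, which is exactly where step 6.1 searches for Pu1.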
9. the correction algorithm for the fixed viewpoint reading instrument of the automobile instrument based on the machine vision as claimed in claim 8, characterized in that: the specific process of the step 6 is as follows:
step 6.1, on the line O1Pu, finding the point at distance PuPu1 from point Pu, which is the corrected pointer tip Pu1(xu1, yu1);
10. The correction algorithm for the fixed viewpoint reading instrument of the automobile instrument based on the machine vision as claimed in claim 9, wherein: the specific process of the step 7 is as follows:
step 7.1, drawing the corrected position of the instrument pointer on the background of the original image, the reconstructed image forming a new image from which an accurate reading can be taken with the viewpoint fixed;
and 7.2, comparing the rotation angle of the pointer obtained in step 6 with the angle of the nearest graduation line, and determining the meter reading.
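Steps 7.1 and 7.2 reduce to converting the corrected tip position into a rotation angle and mapping the angle onto the scale. The sketch below interpolates linearly between adjacent graduation-line angles, which generalizes the nearest-graduation comparison of the claim; the graduation angles and values used are illustrative:

```python
import numpy as np

def read_meter(tip, center, graduations):
    """Step 7: rotation angle of the corrected pointer tip Pu1 about the
    dial centre, mapped to a value via the graduation-line angles.
    graduations: (angle_deg, value) pairs, with angles measured as
    atan2 does; the reading is interpolated between the two nearest
    graduation lines."""
    ang = np.degrees(np.arctan2(tip[1] - center[1], tip[0] - center[0]))
    grads = sorted(graduations)
    angles = [a for a, _ in grads]
    values = [v for _, v in grads]
    return float(np.interp(ang, angles, values))
```

For a dial whose value grows linearly with angle the interpolation is exact; for nonuniform dials, listing every graduation line keeps the mapping piecewise linear between neighbours.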
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010425371.XA CN112001379A (en) | 2020-05-19 | 2020-05-19 | Correction algorithm of automobile instrument fixed viewpoint reading instrument based on machine vision |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112001379A true CN112001379A (en) | 2020-11-27 |
Family
ID=73461484
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010425371.XA Pending CN112001379A (en) | 2020-05-19 | 2020-05-19 | Correction algorithm of automobile instrument fixed viewpoint reading instrument based on machine vision |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112001379A (en) |
Citations (23)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH08201021A (en) * | 1995-01-23 | 1996-08-09 | Mazda Motor Corp | Calibration method |
JP2005032028A (en) * | 2003-07-07 | 2005-02-03 | Ntt Power & Building Facilities Inc | Method for reading indicator value of indicator needle rotating meter, device for reading indicator value of indicator needle rotating meter, and program for reading meter indicator value |
CN102693543A (en) * | 2012-05-21 | 2012-09-26 | 南开大学 | Method for automatically calibrating Pan-Tilt-Zoom in outdoor environments |
CN102799867A (en) * | 2012-07-09 | 2012-11-28 | 哈尔滨工业大学 | Meter pointer angle identification method based on image processing |
CN103714329A (en) * | 2013-12-31 | 2014-04-09 | 长安大学 | Detecting algorithm for identifying meter needle |
CN105096317A (en) * | 2015-07-03 | 2015-11-25 | 吴晓军 | Fully automatic calibration method for high performance camera under complicated background |
CN105740829A (en) * | 2016-02-02 | 2016-07-06 | 暨南大学 | Scanning line processing based automatic reading method for pointer instrument |
CN105740856A (en) * | 2016-01-28 | 2016-07-06 | 宁波理工监测科技股份有限公司 | Method for reading readings of pointer instrument based on machine vision |
CN106570906A (en) * | 2016-11-09 | 2017-04-19 | 东南大学 | Rectangular pattern-based method for detecting distances under camera angle deflection condition |
CN106599897A (en) * | 2016-12-09 | 2017-04-26 | 广州供电局有限公司 | Machine vision-based pointer type meter reading recognition method and device |
CN107014312A (en) * | 2017-04-25 | 2017-08-04 | 西安交通大学 | A kind of integral calibrating method of mirror-vibrating line laser structured light three-dimension measuring system |
CN107085853A (en) * | 2017-05-04 | 2017-08-22 | 中国矿业大学 | Guide rail single eye stereo vision mining area derrick deformation monitoring method |
CN107167169A (en) * | 2017-07-03 | 2017-09-15 | 吉林大学 | Readings of pointer type meters identification measuring method based on NI Vision Builder for Automated Inspection |
CN107167172A (en) * | 2017-04-17 | 2017-09-15 | 江苏大学 | A kind of on-line monitoring method of bus type automobile digital instrument pointer functionality |
CN107463931A (en) * | 2017-07-06 | 2017-12-12 | 国家电网公司 | A kind of real-time pointer instrument reading method and device based on ARM platforms |
CN108009535A (en) * | 2017-11-21 | 2018-05-08 | 武汉中元华电科技股份有限公司 | A kind of simple pointer meter reading method based on machine vision |
CN109558871A (en) * | 2018-10-26 | 2019-04-02 | 中国科学院长春光学精密机械与物理研究所 | A kind of readings of pointer type meters recognition methods and device |
CN110189314A (en) * | 2019-05-28 | 2019-08-30 | 长春大学 | Automobile instrument panel image position method based on machine vision |
CN110414510A (en) * | 2019-07-26 | 2019-11-05 | 华中科技大学 | A kind of readings of pointer type meters bearing calibration |
US20200034987A1 (en) * | 2018-07-25 | 2020-01-30 | Beijing Smarter Eye Technology Co. Ltd. | Method and device for building camera imaging model, and automated driving system for vehicle |
CN110852213A (en) * | 2019-10-30 | 2020-02-28 | 天津大学 | Template matching-based pointer instrument multi-condition automatic reading method |
CN111080711A (en) * | 2019-12-05 | 2020-04-28 | 东南大学 | Method for calibrating microscopic imaging system in approximately parallel state based on magnification |
CN111091076A (en) * | 2019-12-03 | 2020-05-01 | 西北工业大学 | Tunnel limit data measuring method based on stereoscopic vision |
Non-Patent Citations (4)
Title |
---|
LAI, HW等: "A Novel Scale Recognition Method for Pointer Meters Adapted to Different Types and Shapes", 2019 IEEE 15TH INTERNATIONAL CONFERENCE ON AUTOMATION SCIENCE AND ENGINEERING, pages 374 - 379 * |
LI, Q等: "Automatic Reading System based on Automatic Alignment Control for Pointer Meter", IECON 2014 - 40TH ANNUAL CONFERENCE OF THE IEEE INDUSTRIAL ELECTRONICS SOCIETY, pages 3414 - 3418 * |
SHE SHIZHOU; SONG KAI; LIU HUI; TAN SHOUBIAO; ZHANG JI: "A Robust Method for Automatic Reading Recognition of Electric-Power Pointer Instruments", COMPUTER TECHNOLOGY AND DEVELOPMENT, no. 04, pages 200 - 203 *
JIN SHIQUN; ZHANG LI; FAN LIQIN: "Error Correction of a Single-Image 3D Reconstruction Vision System", JOURNAL OF APPLIED SCIENCES, no. 06, pages 81 - 85 *
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112668578A (en) * | 2020-12-31 | 2021-04-16 | 中广核研究院有限公司 | Pointer instrument reading method and device, computer equipment and storage medium |
CN112668578B (en) * | 2020-12-31 | 2023-11-07 | 中广核研究院有限公司 | Pointer type instrument reading method, pointer type instrument reading device, computer equipment and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110555889B (en) | CALTag and point cloud information-based depth camera hand-eye calibration method | |
CN111380502B (en) | Calibration method, position determination method, device, electronic equipment and storage medium | |
US7830374B2 (en) | System and method for integrating dispersed point-clouds of multiple scans of an object | |
CN107014312A (en) | A kind of integral calibrating method of mirror-vibrating line laser structured light three-dimension measuring system | |
CN109961485A (en) | A method of target positioning is carried out based on monocular vision | |
CN109544628A (en) | A kind of the accurate reading identifying system and method for pointer instrument | |
JPH10501945A (en) | Method and apparatus for transforming a coordinate system in an automatic video monitor alignment system | |
CN109272555B (en) | External parameter obtaining and calibrating method for RGB-D camera | |
JP2000241120A (en) | Measuring apparatus | |
CN111047586B (en) | Pixel equivalent measuring method based on machine vision | |
WO2024011764A1 (en) | Calibration parameter determination method and apparatus, hybrid calibration board, device, and medium | |
CN115187612A (en) | Plane area measuring method, device and system based on machine vision | |
CN107067441B (en) | Camera calibration method and device | |
CN115540775A (en) | 3D video extensometer of CCD single-phase machine | |
CN112001379A (en) | Correction algorithm of automobile instrument fixed viewpoint reading instrument based on machine vision | |
CN112308890B (en) | Standard ball-assisted reliable registration method for industrial CT measurement coordinate system | |
CN111738971A (en) | Circuit board stereo scanning detection method based on line laser binocular stereo vision | |
CN109470290B (en) | Automatic calibration method and device for instrument pointer | |
CN115289997B (en) | Binocular camera three-dimensional contour scanner and application method thereof | |
CN115512343A (en) | Method for correcting and recognizing reading of circular pointer instrument | |
CN113610086A (en) | Reading correction method and device for vertical-scale pointer instrument and terminal equipment | |
CN113989513A (en) | Method for recognizing reading of square pointer type instrument | |
CN107945236B (en) | Sub-pixel level nonlinear calibration method suitable for general optical system | |
CN112935562A (en) | Laser precision machining method based on paraxial offline measurement | |
CN112308933A (en) | Method and device for calibrating camera internal reference and computer storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | | |
SE01 | Entry into force of request for substantive examination | | |
AD01 | Patent right deemed abandoned | Effective date of abandoning: 20240927 | |