CN114820787B - Image correction method and system for large-view-field plane vision measurement - Google Patents


Publication number
CN114820787B
CN114820787B (application CN202210428357.4A)
Authority
CN
China
Prior art keywords
image
partition
plane
checkerboard
coordinates
Prior art date
Legal status
Active
Application number
CN202210428357.4A
Other languages
Chinese (zh)
Other versions
CN114820787A
Inventor
张来刚
张云龙
徐立鹏
孙群
郭宏亮
Current Assignee
Liaocheng University
Original Assignee
Liaocheng University
Priority date
Filing date
Publication date
Application filed by Liaocheng University
Priority claimed from CN202210428357.4A
Publication of application CN114820787A
Application granted
Publication of CN114820787B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/70: Determining position or orientation of objects or cameras
    • G06T 7/73: Determining position or orientation of objects or cameras using feature-based methods
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/08: Learning methods
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00: Image enhancement or restoration
    • G06T 5/80: Geometric correction
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/20: Special algorithmic details
    • G06T 2207/20081: Training; Learning
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/20: Special algorithmic details
    • G06T 2207/20084: Artificial neural networks [ANN]


Abstract

The invention discloses an image correction method and system for large-field-of-view plane vision measurement. The method comprises: acquiring a checkerboard image of the plane to be measured, extracting the actual image coordinates of the mark points, partitioning the image according to the distribution of the mark points, constructing ideal image coordinates for each mark point, and building a training database for each partition; training a deep learning network model for each partition with its training database; calculating, with the trained models, the ideal image coordinates of the pixel points in each partition, establishing a remap matrix for each partition, generating an undistorted front view of each partition with its remap matrix, and splicing the front views of the partitions into an undistorted front view of the whole image. By adopting a strategy of partitioned correction followed by splicing and fusion, the invention accurately corrects large-field-of-view, highly distorted images without camera calibration, and improves both image correction precision and correction efficiency.

Description

Image correction method and system for large-view-field plane vision measurement
Technical Field
The invention relates to the technical field of image correction, in particular to an image correction method and system for large-view-field plane vision measurement.
Background
At present, traditional image correction methods first calibrate the camera, i.e. calculate its intrinsic and extrinsic parameters, and then use those parameters to correct the captured image so as to obtain an image with smaller distortion. In this process, the accuracy of the intrinsic and extrinsic parameters computed under a large field of view directly affects the correction result. Moreover, to obtain accurate parameters, camera calibration under a large field of view requires shooting calibration targets at many different positions and orientations, which is time-consuming and labor-intensive and often still fails to yield a good calibration. Research on a method that accurately corrects large-field-of-view, highly distorted images without camera calibration is therefore a technical problem to be solved in this field.
Disclosure of Invention
The invention aims to provide an image correction method and system for large-field-of-view plane vision measurement, which can accurately correct images with large fields of view and high distortion without camera calibration, and improve image correction precision and correction efficiency.
In order to achieve the above object, the present invention provides the following solutions:
an image correction method for large-view-field plane vision measurement comprises the following steps:
obtaining a checkerboard image of a plane to be measured; a checkerboard calibration plate is arranged on the plane to be measured;
Detecting mark points of the checkerboard image, extracting actual image coordinates of the mark points and partitioning the checkerboard image;
setting ideal image coordinates for the mark points, and establishing a training database of each partition according to the actual image coordinates and the ideal image coordinates of the mark points;
Establishing a deep learning network model, and training the deep learning network model for each partition by utilizing a training database of each partition to generate a trained image correction model of each partition;
Calculating ideal image coordinates of all pixel points in each partition by using the image correction model of each partition, and calculating a remap matrix of each partition; the method specifically comprises the following steps: calculating ideal image coordinates of all pixel points in each partition by using the image correction model of each partition; constructing a remap matrix of each partition according to the mapping relation between the actual image coordinates and the ideal image coordinates of all pixel points in each partition;
acquiring a plane image to be detected and partitioning the plane image to be detected;
Generating an undistorted front view of each partition of the plane to be tested by using the remap matrix of each partition;
and splicing the undistorted front views of all the subareas of the plane to be detected to generate an undistorted front view of the plane image to be detected.
Optionally, the acquiring a checkerboard image of the plane to be measured specifically includes:
Fixing a camera with a wide-angle lens above the field of view of the plane to be measured, and supplementing light to the shooting area with an LED light source;
And placing the checkerboard calibration plate on the plane to be tested, and controlling the camera to shoot by using a computer to acquire the checkerboard image.
Optionally, the detecting the marker point of the checkerboard image, extracting the actual image coordinates of the marker point and partitioning the checkerboard image specifically includes:
Detecting mark points of the checkerboard image, and extracting corner points of the checkerboard as the mark points;
Extracting actual image coordinates of the mark points;
partitioning the checkerboard image according to the actual image coordinate distribution of the mark points; and the number of the mark points contained in each partition is greater than or equal to 60.
Optionally, the setting ideal image coordinates for the marker points, and building a training database of each partition according to the actual image coordinates and the ideal image coordinates of the marker points specifically includes:
setting ideal image coordinates for the mark points according to the actual image coordinate distribution of the mark points and the image correction targets;
and forming a training database of each partition according to the actual image coordinates and the ideal image coordinates of all the mark points in each partition.
An image correction system for large field-of-view planar vision measurement, comprising:
the checkerboard image acquisition module is used for acquiring a checkerboard image of a plane to be detected; a checkerboard calibration plate is arranged on the plane to be measured;
The mark point detection and partitioning module is used for detecting mark points of the checkerboard image, extracting actual image coordinates of the mark points and partitioning the checkerboard image;
the training database establishing module is used for setting ideal image coordinates for the mark points and establishing a training database of each partition according to the actual image coordinates and the ideal image coordinates of the mark points;
The image correction model building module is used for building a deep learning network model, training the deep learning network model for each partition by utilizing the training database of each partition, and generating a trained image correction model for each partition;
The remap matrix building module is used for calculating ideal image coordinates of all pixel points in each partition by using the image correction model of each partition and calculating the remap matrix of each partition; the remap matrix establishment module specifically comprises: an ideal image coordinate calculation unit, configured to calculate ideal image coordinates of all pixel points in each partition by using the image correction model of each partition; the remap matrix establishing unit is used for establishing a remap matrix of each partition according to the mapping relation between the actual image coordinates and the ideal image coordinates of all pixel points in each partition;
the plane image acquisition module to be detected is used for acquiring a plane image to be detected and partitioning the plane image to be detected;
The partition undistorted elevation view generation module is used for generating an undistorted elevation view of each partition of the plane to be measured by using the remap matrix of each partition;
the undistorted front view generation module of the plane to be detected is used for splicing the undistorted front views of all the subareas of the plane to be detected to generate an undistorted front view of the plane image to be detected.
Optionally, the checkerboard image acquisition module specifically includes:
the camera setting unit is used for fixing a camera with a wide-angle lens above the field of view of the plane to be measured, and supplementing light to the shooting area with an LED light source;
The checkerboard image acquisition unit is used for placing the checkerboard calibration plate on the plane to be detected, and the computer is used for controlling the camera to shoot and acquire the checkerboard image.
Optionally, the mark point detecting and partitioning module specifically includes:
the mark point detection unit is used for detecting mark points of the checkerboard image and extracting corner points of the checkerboard as the mark points;
An actual image coordinate extracting unit, configured to extract actual image coordinates of the marker points;
The partitioning unit is used for partitioning the checkerboard image according to the actual image coordinate distribution of the mark points; and the number of the mark points contained in each partition is greater than or equal to 60.
Optionally, the training database building module specifically includes:
An ideal image coordinate setting unit for setting ideal image coordinates for the marker points according to actual image coordinate distribution of the marker points and the image correction target;
and the training database establishing unit is used for constructing a training database of each partition according to the actual image coordinates and the ideal image coordinates of all the mark points in each partition.
According to the specific embodiment provided by the invention, the invention discloses the following technical effects:
The invention provides an image correction method and system for large-view-field plane vision measurement, wherein the method comprises the following steps: obtaining a checkerboard image of a plane to be measured; detecting mark points of the checkerboard image, extracting actual image coordinates of the mark points and partitioning the checkerboard image; setting ideal image coordinates for the mark points, and establishing a training database of each partition according to the actual image coordinates and the ideal image coordinates of the mark points; establishing a deep learning network model, and training the deep learning network model for each partition by utilizing a training database of each partition to generate a trained image correction model of each partition; calculating ideal image coordinates of all pixel points in each partition by using the image correction model of each partition, and calculating a remap matrix of each partition; acquiring a plane image to be detected and partitioning the plane image to be detected; generating an undistorted front view of each partition of the plane to be tested by using the remap matrix of each partition; and splicing the undistorted front views of all the subareas of the plane to be detected to generate an undistorted front view of the plane image to be detected. 
The method corrects large-field-of-view, highly distorted images with a strategy of partitioned correction followed by splicing and fusion. The partitioning strategy reduces the complexity of the deep learning network model and improves model training efficiency. The trained image correction models are used to generate the remap matrices for image correction, and correction is completed with a multithreading technique and bilinear interpolation, improving image correction efficiency. Correcting images with a deep learning model also avoids calculating the camera's intrinsic and extrinsic parameters and the lens distortion coefficients. The image correction method can therefore accurately correct large-field-of-view, highly distorted images without camera calibration, and improves both image correction precision and correction efficiency.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions of the prior art, the drawings that are needed in the embodiments will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of an image correction method for large-view-field plane vision measurement provided by the invention;
FIG. 2 is a schematic diagram of a checkerboard image provided by the present invention;
FIG. 3 is a schematic diagram of checkerboard marker point detection provided by the present invention;
FIG. 4 is a schematic diagram of a checkerboard image partition provided by the present invention;
FIG. 5 is a schematic diagram of a deep learning network model according to the present invention;
FIG. 6 is a schematic diagram of the distribution of actual coordinates, ideal coordinates and corrected coordinates of a first partition marker point of a checkerboard image provided by the invention;
FIG. 7 is a schematic diagram of a corrected checkerboard image provided by the present invention;
FIG. 8 is a graph illustrating the quantitative evaluation parameters of the calibration results provided by the present invention;
FIG. 9 is a diagram showing a result of calculating the levelness of a line marker point according to the present invention;
FIG. 10 is a schematic diagram of the calculation result of the verticality of the column mark points according to the present invention;
FIG. 11 is a schematic diagram of a calculation result of distribution uniformity of horizontal adjacent marker points according to the present invention;
Fig. 12 is a schematic diagram of a calculation result of distribution uniformity of vertically adjacent marker points according to the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The invention aims to provide an image correction method and an image correction system for large-view-field plane vision measurement, which can finish accurate correction of images with large view fields and high distortion on the premise of no camera calibration, and improve image correction precision and correction efficiency.
In order that the above-recited objects, features and advantages of the present invention will become more readily apparent, a more particular description of the invention will be rendered by reference to the appended drawings and appended detailed description.
In order to complete image correction in monocular-camera large-field-of-view plane measurement more conveniently, rapidly and accurately, and to provide an effective guarantee for subsequent measurement work, the invention establishes a deep learning network model based on monocular vision together with a partitioned correction strategy, completes image correction without calculating the intrinsic and extrinsic parameters of the camera, and also provides an evaluation method for the image correction results.
Fig. 1 is a flowchart of an image correction method for large-view-field plane vision measurement. As shown in fig. 1, the image correction method for large-view-field plane vision measurement of the present invention includes:
Step 101: and obtaining a checkerboard image of the plane to be measured.
Specifically, a camera fitted with a wide-angle lens is fixed above the field of view of the plane to be measured, and an LED light source supplements light in the shooting area to reduce the influence of ambient light. The specially made checkerboard calibration plate 1 is placed horizontally on the plane 2 to be measured as the calibration target, and a computer controls the camera to capture an image of the plane 2 to be measured. During shooting, camera parameters such as exposure time and gain are adjusted in real time according to the average gray value and sharpness of the image, until an image suitable for mark point detection is obtained, i.e. the checkerboard image of the plane 2 to be measured shown in fig. 2.
Step 102: detecting the mark points of the checkerboard image, extracting the actual image coordinates of the mark points, and partitioning the checkerboard image.
Preprocessing the acquired image, extracting the image coordinates of the marker points of the checkerboard calibration plate, and partitioning the marker points of the image according to the size of the field of view, the number of the marker points and the distribution condition of the marker points.
Thus, the step 102 specifically includes:
step 2.1: and detecting the mark points of the checkerboard image, and extracting the corner points of the checkerboard as the mark points.
Fig. 3 is a schematic diagram of checkerboard mark point detection provided by the present invention. Specifically, referring to fig. 3, the checkerboard image is binarized by a local-mean adaptive thresholding method, and the binary image is then dilated so that the black blocks shrink and separate from one another, yielding a reduced quadrilateral for each black block. These quadrilaterals are detected under constraint conditions such as aspect ratio, perimeter and area. Taking each quadrilateral as a unit, the two facing vertices of two diagonally adjacent quadrilaterals are paired, and the midpoint of the line connecting the two facing vertices is taken as a corner point 3. In effect, the four vertices of each black-block quadrilateral correspond to the corner points.
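The midpoint rule described above can be sketched in a few lines of pure Python. This is an illustrative helper, not the patent's code; the vertex values are made up to show the geometry:

```python
# Sketch of the corner-extraction step: after dilation shrinks the black
# blocks into separated quadrilaterals, each checkerboard corner is taken
# as the midpoint of the segment joining the two facing vertices of two
# diagonally adjacent blocks.

def corner_from_adjacent_quads(vertex_a, vertex_b):
    """Midpoint of the line connecting two facing vertices."""
    return ((vertex_a[0] + vertex_b[0]) / 2.0,
            (vertex_a[1] + vertex_b[1]) / 2.0)

# Two diagonally adjacent (shrunken) black blocks whose facing vertices
# straddle the true corner near (100, 100):
v_upper_left_block = (98.0, 97.5)     # bottom-right vertex of one block
v_lower_right_block = (102.0, 102.5)  # top-left vertex of the other

corner = corner_from_adjacent_quads(v_upper_left_block, v_lower_right_block)
```

Averaging the two facing vertices cancels the symmetric shrinkage introduced by dilation, which is why the midpoint recovers the original corner position.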
Step 2.2: and extracting the actual image coordinates of the mark points.
Step 2.3: partitioning the checkerboard image according to the actual image coordinate distribution of the mark points, wherein the number of the mark points contained in each partition is greater than or equal to 60.
Specifically, in order to improve the accuracy of distortion correction, reduce the complexity of the deep learning network model and improve model training efficiency, the whole checkerboard image is partitioned according to the size of the field of view, the number of mark points and the actual image coordinate distribution of the mark points, such that each partition contains no fewer than 60 mark points. Fig. 4 is a schematic diagram of the checkerboard image partition provided by the present invention; 24 partitions are divided in fig. 4.
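A minimal sketch of such a partitioning rule, assuming a simple uniform grid split (the patent partitions adaptively by field size and point distribution; the image size, grid shape and point layout below are invented for illustration):

```python
# Split detected marker points into an n_rows x n_cols grid of partitions
# and check that every partition holds at least 60 points.

def partition_markers(points, n_rows, n_cols, width, height):
    parts = {(r, c): [] for r in range(n_rows) for c in range(n_cols)}
    for (x, y) in points:
        c = min(int(x / (width / n_cols)), n_cols - 1)
        r = min(int(y / (height / n_rows)), n_rows - 1)
        parts[(r, c)].append((x, y))
    return parts

# A synthetic 96 x 64 lattice of markers over a 1920 x 1080 image,
# split into 4 x 6 = 24 partitions (as in Fig. 4):
pts = [(x * 20.0 + 10, y * 16.0 + 8) for x in range(96) for y in range(64)]
parts = partition_markers(pts, 4, 6, 1920, 1080)
all_big_enough = all(len(v) >= 60 for v in parts.values())
```

With a dense lattice every partition easily clears the 60-point threshold; in practice the partition boundaries would be chosen from the detected point distribution rather than fixed fractions of the image.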
Step 103: and setting ideal image coordinates for the mark points, and establishing a training database of each partition according to the actual image coordinates and the ideal image coordinates of the mark points.
The step 103 specifically includes:
step 3.1: and setting ideal image coordinates for the mark points according to the actual image coordinate distribution of the mark points and the image correction targets.
Specifically, the purpose of image correction for large-field-of-view plane vision measurement is to obtain a distortion-free front view of the plane to be measured. In such a view, the row mark points and column mark points of the checkerboard are orthogonally distributed, and the lateral and longitudinal distances between adjacent mark points are equal. Ideal image coordinates can therefore be set for each mark point detected in step 102 according to the mark point distribution and the image correction target.
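The ideal coordinates described above form a regular orthogonal lattice, which can be generated directly. The function name, spacing and origin below are illustrative assumptions, not values from the patent:

```python
# Ideal image coordinates: an orthogonal grid with equal spacing in both
# directions, anchored at a chosen origin near the detected points.

def ideal_grid(n_rows, n_cols, spacing, origin=(0.0, 0.0)):
    ox, oy = origin
    return {(r, c): (ox + c * spacing, oy + r * spacing)
            for r in range(n_rows) for c in range(n_cols)}

ideal = ideal_grid(3, 4, 25.0, origin=(100.0, 200.0))
# In the ideal grid, all markers of one row share a y coordinate:
row0_ys = {ideal[(0, c)][1] for c in range(4)}
```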
Step 3.2: and forming a training database of each partition according to the actual image coordinates and the ideal image coordinates of all the mark points in each partition.
Step 104: and establishing a deep learning network model, and training the deep learning network model for each partition by utilizing the training database of each partition to generate a trained image correction model of each partition.
The step 104 specifically includes:
step 4.1: and establishing a deep learning network model.
Specifically, fig. 5 is a schematic diagram of the deep learning network model provided by the present invention. As shown in fig. 5, the invention builds the deep learning network model with a DNN deep neural network comprising an input layer, 2 hidden layers and an output layer. The input layer and the output layer each contain 2 neurons, representing respectively the actual image coordinate (col, row) of a mark point and the corresponding ideal image coordinate (col', row'), wherein row and row' denote row coordinates and col and col' denote column coordinates. That is, the input of the model is the actual image coordinate of a mark point (one neuron for the row coordinate, the other for the column coordinate), and the output is the corresponding ideal image coordinate (one neuron for the ideal row coordinate, the other for the ideal column coordinate).
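A forward pass through such a 2-in, 2-hidden-layer, 2-out network can be sketched in pure Python. The hidden-layer widths, the tanh activation and the toy identity weights are all assumptions for illustration; the patent fixes only the layer count and the 2-neuron input and output:

```python
import math

def dense(x, W, b, act=None):
    """One fully connected layer: out_j = act(sum_i W[j][i]*x[i] + b[j])."""
    out = [sum(w * xi for w, xi in zip(row, x)) + bj
           for row, bj in zip(W, b)]
    return [act(o) for o in out] if act else out

def forward(col_row, params):
    h = col_row
    for W, b in params[:-1]:
        h = dense(h, W, b, act=math.tanh)  # hidden layers
    W, b = params[-1]
    return dense(h, W, b)                  # linear output: (col', row')

# Toy identity weights: two 2-wide hidden layers, then a linear output.
I2 = [[1.0, 0.0], [0.0, 1.0]]
params = [(I2, [0.0, 0.0]), (I2, [0.0, 0.0]), (I2, [0.0, 0.0])]
out = forward([0.2, 0.4], params)  # normalized (col, row) in, (col', row') out
```

With identity weights the output is just tanh applied twice to each input, which is enough to show the data flow; real weights come from the training step below.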
The row coordinates and the column coordinates in each partition of the image are normalized by formula (1) to obtain the normalized actual image coordinate (x_1, y_1) and ideal image coordinate (x_2, y_2):

z'_i = (z_i − min(Z)) / (max(Z) − min(Z))    (1)

wherein z_i is the element to be normalized; min(Z) is the minimum value of the elements in the vector Z; max(Z) is the maximum value of the elements in the vector Z; and z'_i is the normalized value.
When normalizing the row and column coordinates of the image, the row coordinates of all mark points in each partition are taken as one vector and normalized by formula (1), and the column coordinates of all mark points are taken as another vector and normalized by formula (1). The normalized actual image coordinate (x_1, y_1) in each partition and the corresponding ideal image coordinate (x_2, y_2) are then taken as a training sample, forming the training data of each partition.
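The min-max normalization is straightforward to implement; a minimal sketch with invented sample row coordinates:

```python
def min_max_normalize(z):
    """z'_i = (z_i - min(Z)) / (max(Z) - min(Z)), mapping Z into [0, 1]."""
    lo, hi = min(z), max(z)
    return [(zi - lo) / (hi - lo) for zi in z]

# Row coordinates of marker points in one partition (illustrative values):
rows = [120.0, 220.0, 320.0, 420.0]
normed = min_max_normalize(rows)
```

Normalizing per partition keeps the network inputs in a small, well-conditioned range regardless of where the partition sits in the full image.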
Step 4.2: training a deep learning network model for each partition using the training data for each partition.
Training is performed using the actual image coordinates and the ideal image coordinates of the mark points, and the model parameters of each partition are calculated. The goal of training is to reduce the difference between the predicted values and the sample label values; this difference is measured by the mean squared Euclidean distance, and the loss function is defined as formula (2). The DNN deep learning network model of each partition is trained through stages such as model parameter initialization, learning rate adjustment and weight parameter updating, and the accuracy and validity of the model are verified with test points, yielding the trained image correction model of each partition. The loss function is:

J(w, b) = (1/m) Σ_{i=1..m} [ (x̂_i − x_i2)² + (ŷ_i − y_i2)² ]    (2)

wherein J(w, b) is the loss function; m is the number of mark points in each partition; (x̂_i, ŷ_i) is the prediction of the network for the i-th mark point, computed from its actual image coordinate; x_i1 and y_i1 are the row and column coordinates of the actual image coordinate of the i-th mark point; and x_i2 and y_i2 are the row and column coordinates of the ideal image coordinate of the i-th mark point.
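The mean-squared-Euclidean-distance loss described in the text can be sketched as follows (a hedged reading: the patent's exact scaling factor may differ, and the predicted coordinates here stand in for the network output):

```python
def loss(pred, ideal):
    """Mean squared Euclidean distance between predicted and ideal coords."""
    m = len(pred)
    return sum((xp - xi) ** 2 + (yp - yi) ** 2
               for (xp, yp), (xi, yi) in zip(pred, ideal)) / m

# Two marker points: the first predicted perfectly, the second off by 0.4
# in one coordinate (normalized units, illustrative values):
pred = [(0.1, 0.2), (0.5, 0.5)]
ideal = [(0.1, 0.2), (0.5, 0.9)]
j = loss(pred, ideal)
```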
Step 105: and calculating ideal image coordinates of all pixel points in each partition by using the image correction model of each partition, and calculating a remap matrix of each partition.
Specifically, the image correction model of each partition is used to calculate the ideal image coordinates of all pixel points in the partition, and then the remap matrix H_i2i of each partition is constructed as the mapping matrix for image correction, according to the mapping relation between the actual image coordinates and the ideal image coordinates of all pixel points in the partition.
To improve image correction efficiency, the image correction models of the partitions are not used when correcting subsequent images to be measured. Instead, the remap matrix H_i2i is used to calculate the corrected coordinates of the pixel points (called correction coordinates), and the image is undistorted by a multithreading technique combined with bilinear interpolation, yielding a standard front view of the plane to be measured. This provides a guarantee for high-precision, large-field-of-view plane measurement, reduces the amount of calculation, increases the calculation speed, and further improves image correction efficiency.
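The bilinear-interpolation step used when applying the remap can be sketched as follows. Since the corrected pixel generally maps to a non-integer source position, its value is blended from the four surrounding source pixels (the image values below are invented; a real implementation would also clamp coordinates at the image border):

```python
# Bilinear sampling of a grayscale image (list of rows) at a fractional
# position (x, y): blend the four neighbouring pixels by their distances.

def bilinear_sample(img, x, y):
    x0, y0 = int(x), int(y)
    dx, dy = x - x0, y - y0
    p00 = img[y0][x0]          # top-left neighbour
    p01 = img[y0][x0 + 1]      # top-right
    p10 = img[y0 + 1][x0]      # bottom-left
    p11 = img[y0 + 1][x0 + 1]  # bottom-right
    top = p00 * (1 - dx) + p01 * dx
    bot = p10 * (1 - dx) + p11 * dx
    return top * (1 - dy) + bot * dy

img = [[0.0, 100.0],
       [100.0, 200.0]]
v = bilinear_sample(img, 0.5, 0.5)  # centre of the 2x2 block
```

Because each output pixel is sampled independently, rows (or partitions) of the output can be distributed across threads, which is the multithreading opportunity the text refers to.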
In order to verify the validity of the remap matrix H_i2i, the invention uses the remap matrix H_i2i to calculate the corrected coordinates of all pixels of the checkerboard image for verification.
Firstly, the corrected coordinates of the pixel points in each partition of the checkerboard image are calculated with the remap matrix H_i2i:

P' = H_i2i(P)    (3)

wherein P is the coordinate vector before pixel correction, i.e. the vector formed by the original (actual) coordinates of the pixel point; and P' is the corresponding corrected coordinate vector, i.e. the vector formed by the corrected coordinates of the pixel point.
Fig. 6 is a schematic diagram of the distribution of the actual, ideal and corrected coordinates of the mark points of the first partition of the checkerboard image, in which 4 denotes the actual coordinates (also called original coordinates) of a mark point, 5 denotes the ideal coordinates (i.e. ideal image coordinates), and 6 denotes the corrected coordinates. As shown in fig. 6, the corrected coordinates of the checkerboard image mark points calculated with the remap matrix H_i2i coincide closely with the ideal coordinates.
And then, performing distortion correction on the checkerboard image by adopting a multithreading technology and a bilinear interpolation method, so that a standard front view of the checkerboard image can be obtained, as shown in fig. 7.
The invention also provides an image correction evaluation method based on the column mark point verticality, the row mark point levelness and the adjacent mark point distribution uniformity, and the accuracy and the effectiveness of the image correction method for large-view-field plane vision measurement are verified. The evaluation method is an effective quantitative evaluation method for plane measurement image correction.
Specifically, the mark points of the corrected checkerboard image shown in fig. 7 are extracted, and the verticality of the column mark points, the levelness of the row mark points and the distribution uniformity of the adjacent mark points are calculated, wherein each parameter is illustrated in fig. 8.
The column marker point perpendicularity VM is calculated as:

VM_j = sqrt( (1/n) · Σ_{i=1}^{n} ( x_{ij} − AVG(X_j) )² ),  j = 1, …, m

wherein n is the number of marker points in each column; m is the number of marker points in each row; x_{ij} is the x coordinate of the ith marker point in the jth column; AVG(X_j) is the mean x coordinate of all marker points in the jth column.
The row marker point levelness HM is calculated as:

HM_j = sqrt( (1/m) · Σ_{i=1}^{m} ( y_{ij} − AVG(Y_j) )² ),  j = 1, …, n

wherein y_{ij} is the y coordinate of the ith corner point in the jth row; AVG(Y_j) is the mean y coordinate of all corner points in the jth row.
The distribution uniformity UHM of horizontally adjacent marker points is calculated as:

UHM_j = sqrt( (1/(m−1)) · Σ_{k=2}^{m} ( d_{jk} − AVG(D_j) )² ),  d_{jk} = x_{jk} − x_{j,k−1},  j = 1, …, n

wherein x_{jk} is the x coordinate of the kth marker point in the jth row; x_{j,k−1} is the x coordinate of the (k−1)th marker point in the jth row; AVG(D_j) is the mean spacing of horizontally adjacent marker points in the jth row.
The distribution uniformity UVM of vertically adjacent marker points is calculated as:

UVM_j = sqrt( (1/(n−1)) · Σ_{k=2}^{n} ( e_{jk} − AVG(E_j) )² ),  e_{jk} = y_{jk} − y_{j,k−1},  j = 1, …, m

wherein y_{jk} is the y coordinate of the kth corner point in the jth column; y_{j,k−1} is the y coordinate of the (k−1)th corner point in the jth column; AVG(E_j) is the mean spacing of vertically adjacent corner points in the jth column.
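Under the assumption that the deviation measure in the four evaluation formulas is an RMS deviation (the patent renders the formulas as images, so the exact form is inferred), the four evaluation vectors can be computed together. Names and the (rows, cols, 2) array layout are illustrative.

```python
import numpy as np

def correction_metrics(pts):
    """pts: (rows, cols, 2) array of marker coordinates (x, y).
    Returns per-column perpendicularity VM, per-row levelness HM,
    and the spacing-uniformity vectors UHM (per row) / UVM (per column),
    all as RMS deviations (an assumed deviation measure)."""
    x, y = pts[..., 0], pts[..., 1]
    # VM_j: RMS deviation of x within column j; HM_j: RMS deviation of y within row j
    VM = np.sqrt(np.mean((x - x.mean(axis=0)) ** 2, axis=0))
    HM = np.sqrt(np.mean((y - y.mean(axis=1, keepdims=True)) ** 2, axis=1))
    dx = np.diff(x, axis=1)   # horizontal spacings of adjacent points in each row
    dy = np.diff(y, axis=0)   # vertical spacings of adjacent points in each column
    UHM = np.sqrt(np.mean((dx - dx.mean(axis=1, keepdims=True)) ** 2, axis=1))
    UVM = np.sqrt(np.mean((dy - dy.mean(axis=0)) ** 2, axis=0))
    return VM, HM, UHM, UVM
```

For a perfectly corrected grid all four vectors are zero; any residual distortion shows up as positive entries.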
The smaller the element values in the VM, HM, UHM and UVM vectors, the better the perpendicularity, levelness and uniformity, and hence the better the image correction effect.
The image correction evaluation method thus verifies the accuracy of the remap matrix H_i2i. Subsequently, when an image to be measured is corrected, the corrected pixel coordinates can be calculated directly with the remap matrix H_i2i, without the per-partition image correction models, and distortion correction with multithreading and bilinear interpolation yields the standard front view of the plane to be measured.
Step 106: and obtaining a plane image to be detected and partitioning the plane image to be detected.
Specifically, a camera with a wide-angle lens is fixed above a field of view of a plane to be measured, the shooting area is supplemented with light by using an LED light source, so that the influence of ambient light on shooting is reduced, and a computer is used for controlling the camera to acquire an image of the plane to be measured. And partitioning the plane image to be detected according to the partitioning rule of the checkerboard image shot under the view field.
Step 107: and generating an undistorted front view of each partition of the plane to be measured by using the remap matrix of each partition.
Using the remap matrix H_i2i of each partition, the corrected coordinates of the pixels in each partition of the image to be measured are calculated by formula (3), and distortion correction of the partition image by bilinear interpolation yields an undistorted front view of the partition.
Firstly, the corrected coordinates of the pixels in each partition of the image to be measured are calculated with the remap matrix H_i2i according to formula (3), P_c = H_i2i · P_o, wherein P_o is the coordinate vector before pixel correction, i.e. the vector formed by the original (actual) pixel coordinates, and P_c is the corresponding corrected coordinate vector.
This image correction method based on a deep learning network model avoids calculating the camera's intrinsic and extrinsic parameters and lens distortion coefficients. It combines traditional marker point extraction for image correction with an AI deep learning approach, requires no convolutional feature extraction, and ensures image correction accuracy and efficiency to the greatest extent.
In order to realize large-field-of-view plane measurement, the invention also provides a transformation matrix between the corrected front-view image coordinates and the physical coordinates of the plane.
Specifically, the marker point coordinates of the standard front view of the image to be measured are extracted, the physical coordinates of the plane to be measured are constructed, and a remap matrix H_i2w is calculated. The transformation between marker point coordinates and physical coordinates is then performed based on H_i2w:

P_w = H_i2w · P_i

wherein P_i = (u, v, 1)^T is an image coordinate vector; P_w is the corresponding physical coordinate vector.
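A common way to obtain such an image-to-physical transformation is a planar homography estimated by direct linear transformation (DLT). The patent does not specify the estimation method, so the following is a hedged NumPy sketch with illustrative function names.

```python
import numpy as np

def fit_homography(src, dst):
    """Estimate a 3x3 homography mapping src -> dst (each Nx2, N >= 4)
    using the standard DLT least-squares formulation."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    # the solution is the right singular vector of A with smallest singular value
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def to_physical(H_i2w, pts):
    """Map Nx2 image coordinates to plane physical coordinates."""
    p = np.c_[pts, np.ones(len(pts))] @ H_i2w.T
    return p[:, :2] / p[:, 2:3]   # homogeneous normalization
```

In practice the correspondences would come from the corrected marker point coordinates and the known checkerboard geometry (e.g. 50 mm spacing).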
Step 108: and splicing the undistorted front views of all the subareas of the plane to be detected to generate an undistorted front view of the plane image to be detected.
The method fixes a camera with a wide-angle lens above the field of view of the plane to be measured, places a calibration target on the plane, and controls the camera by computer to acquire an image; preprocesses the image, extracts the image coordinates of the calibration target's marker points, and establishes ideal image coordinates for each marker point according to the actual marker distribution; partitions the marker points according to the field-of-view size and the number and distribution of marker points; establishes a deep learning network model, trains it with the actual and ideal image coordinates of the marker points, and calculates the model parameters of each region; and completes image correction with the trained models using multithreading and bilinear interpolation. The correction result is evaluated based on column marker point perpendicularity, row marker point levelness and adjacent marker point distribution uniformity. To improve image correction efficiency, the trained network models are used to generate a mapping matrix for image correction. The invention thus accurately corrects large-field-of-view, highly distorted images for plane vision measurement without relying on the camera's intrinsic or extrinsic parameters.
One specific embodiment of the image correction method for large field-of-view planar vision measurement of the present invention is provided below.
One specific implementation process of the image correction method of the invention comprises the following steps:
1. Acquiring the image of the plane to be measured: a custom checkerboard calibration plate is placed on the plane to be measured. The physical size of the plane is 3000 mm × 2000 mm, each checkerboard square is 50 mm × 50 mm, and there are 63 marker points per row and 39 per column. A camera with a wide-angle lens (resolution 3664 × 2748) is mounted above the plane; its real-time acquisition mode is started, shooting parameters such as exposure time and gain are adjusted in real time according to the image's average gray value and sharpness, and a high-quality checkerboard image is captured, as shown in fig. 2.
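The real-time adjustment of exposure from the average gray value can be sketched as a simple proportional control loop. The `capture` callback, the gains, and the simulated sensor below are illustrative assumptions, not the patent's algorithm.

```python
def tune_exposure(capture, target_gray=128.0, exposure=10.0, gain=0.1, steps=20):
    """Iteratively nudge the exposure time until the frame's mean gray level
    approaches `target_gray`. `capture(exposure)` returns the mean gray value
    of a frame taken with that exposure; gain and step count are illustrative."""
    for _ in range(steps):
        mean_gray = capture(exposure)
        error = target_gray - mean_gray
        if abs(error) < 1.0:              # close enough to the target brightness
            break
        exposure = max(0.1, exposure + gain * error)
    return exposure

def simulate(exposure):
    """Toy sensor model: mean gray grows linearly with exposure, clipped at 255."""
    return min(255.0, 4.0 * exposure)
```

A real implementation would also weigh a sharpness score (e.g. image gradient energy) when choosing gain, as the embodiment mentions.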
2. Marker point detection and partitioning: the checkerboard image is binarized with a local-average adaptive thresholding method, and image dilation separates the connections between the black quadrilaterals, yielding shrunken black quadrilaterals. Each quadrilateral is detected under constraints on aspect ratio, perimeter, area and the like; for each pair of diagonally adjacent quadrilaterals, the midpoint of the line connecting their two facing vertices is taken as a corner point, and these corner points serve as the marker points, as shown in fig. 3. According to the marker point coordinate distribution, the whole image is divided into 6 × 4 = 24 regions, as shown in fig. 4; each region contains no fewer than 90 marker points.
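The first stage of this pipeline, local-average adaptive binarization, can be sketched with an integral image in pure NumPy. The window size and offset are illustrative; the subsequent dilation and quadrilateral detection steps are omitted.

```python
import numpy as np

def adaptive_binarize(img, win=15, offset=5):
    """Binarize `img` against the local mean over a (2*win+1)^2 window,
    computed in O(1) per pixel with an integral image.
    A pixel is foreground (1) if it exceeds (local mean - offset)."""
    h, w = img.shape
    pad = np.pad(img.astype(float), ((1, 0), (1, 0)))
    ii = pad.cumsum(0).cumsum(1)            # ii[y, x] = sum of img[:y, :x]
    ys, xs = np.mgrid[0:h, 0:w]
    y0, y1 = np.clip(ys - win, 0, h), np.clip(ys + win + 1, 0, h)
    x0, x1 = np.clip(xs - win, 0, w), np.clip(xs + win + 1, 0, w)
    area = (y1 - y0) * (x1 - x0)            # window shrinks near the border
    local_sum = ii[y1, x1] - ii[y0, x1] - ii[y1, x0] + ii[y0, x0]
    return (img > local_sum / area - offset).astype(np.uint8)
```

Local thresholding of this kind is what makes the binarization robust to the uneven illumination typical of a large field of view.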
3. Generating the training database: 63 × 39 = 2457 marker points were detected in the previous step. The checkerboard's row and column corner points are orthogonally distributed, with equal horizontal and vertical spacing between adjacent corners, so ideal image coordinates are assigned to each marker point according to the corner distribution and the image correction target. The ideal image coordinates of the upper-left marker point are set to (127, 329); with that point as origin, horizontal right is the positive X direction and vertical down the positive Y direction, and ideal image coordinates are assigned to every marker point with a step of 50 pixels. A training database is then constructed from the actual and ideal image coordinates of each partition's marker points, giving 24 partition training databases in total.
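The ideal-grid construction and an even partitioning scheme can be sketched as follows. The split into 6 × 4 partitions by index ranges is an assumed scheme; it reproduces the stated minimum of 90 points per partition.

```python
import numpy as np

def ideal_grid(rows=39, cols=63, origin=(127, 329), step=50):
    """Ideal image coordinates: a rigid grid anchored at the upper-left
    marker point, X to the right, Y downward, `step` pixels apart."""
    xs = origin[0] + step * np.arange(cols)
    ys = origin[1] + step * np.arange(rows)
    gx, gy = np.meshgrid(xs, ys)            # each of shape (rows, cols)
    return np.stack([gx, gy], axis=-1)

def partition_ids(rows=39, cols=63, nrow=4, ncol=6):
    """Assign each marker point to one of nrow*ncol partitions by splitting
    row and column indices as evenly as possible (an assumed scheme)."""
    row_groups = np.array_split(np.arange(rows), nrow)
    col_groups = np.array_split(np.arange(cols), ncol)
    ids = np.empty((rows, cols), dtype=int)
    for i, rs in enumerate(row_groups):
        for j, cs in enumerate(col_groups):
            ids[np.ix_(rs, cs)] = i * ncol + j
    return ids
```

Pairing each marker's actual coordinates with the corresponding entry of `ideal_grid`, grouped by `partition_ids`, yields the 24 per-partition training databases.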
4. Constructing and training the deep learning network models: a DNN (deep neural network) model is built for each partition, 24 models in total, and each partition's training database is used to train the corresponding model, yielding 24 trained image correction models.
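The patent does not disclose the DNN architecture. Purely as an illustrative stand-in, a tiny NumPy multilayer perceptron can be trained by full-batch gradient descent to map (normalized) actual coordinates to ideal coordinates for one partition:

```python
import numpy as np

class TinyCorrector:
    """Minimal 2-16-2 MLP (tanh hidden layer) fitting actual -> ideal
    coordinates for one partition; an illustrative stand-in, not the
    patent's model. Inputs are assumed normalized to roughly [-1, 1]."""
    def __init__(self, hidden=16, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0, 0.5, (2, hidden)); self.b1 = np.zeros(hidden)
        self.W2 = rng.normal(0, 0.5, (hidden, 2)); self.b2 = np.zeros(2)

    def forward(self, X):
        self.h = np.tanh(X @ self.W1 + self.b1)
        return self.h @ self.W2 + self.b2

    def fit(self, X, Y, lr=0.05, epochs=2000):
        for _ in range(epochs):
            err = self.forward(X) - Y                 # dL/dP for 0.5*MSE
            gW2 = self.h.T @ err / len(X); gb2 = err.mean(axis=0)
            dh = (err @ self.W2.T) * (1 - self.h ** 2)  # backprop through tanh
            gW1 = X.T @ dh / len(X); gb1 = dh.mean(axis=0)
            self.W2 -= lr * gW2; self.b2 -= lr * gb2
            self.W1 -= lr * gW1; self.b1 -= lr * gb1
        return float(np.mean((self.forward(X) - Y) ** 2))
```

Once trained, evaluating the model on every pixel of the partition gives the dense mapping from which the remap matrix of the next step is built.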
5. Correcting the image: the coordinate mapping of every pixel in each partition before and after correction is computed from the partition's trained image correction model, and the remap matrix H_i2i is constructed. Distortion correction by bilinear interpolation then gives a front view of each partition of the checkerboard image.
The undistorted front views of the partitions are stitched to obtain an undistorted front view of the entire checkerboard image, as shown in fig. 7.
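The stitching step amounts to copying each partition's corrected tile to its offset in the full front view. A minimal sketch follows; the tile/offset bookkeeping is an assumed scheme.

```python
import numpy as np

def stitch(tiles):
    """Reassemble a full image from per-partition tiles.
    `tiles` maps partition id -> (tile, (row0, col0)), where (row0, col0)
    is the tile's top-left offset in the full front view."""
    h = max(r + t.shape[0] for t, (r, c) in tiles.values())
    w = max(c + t.shape[1] for t, (r, c) in tiles.values())
    first_tile = tiles[next(iter(tiles))][0]
    canvas = np.zeros((h, w), dtype=first_tile.dtype)
    for t, (r, c) in tiles.values():
        canvas[r:r + t.shape[0], c:c + t.shape[1]] = t
    return canvas
```

Because every partition is corrected to the same ideal grid, adjacent tiles share consistent coordinates and no blending at the seams is needed in this idealized sketch.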
6. Evaluation of image correction results: the marker point coordinates of the undistorted front view of the checkerboard image are extracted, and the column marker point perpendicularity, row marker point levelness and adjacent marker point distribution uniformity are calculated to evaluate the corrected image and verify the accuracy and effectiveness of the image correction method for large-field-of-view plane vision measurement. Fig. 9 shows the calculated row marker point levelness (horizontal degree of line corners). The levelness HM is a 39 × 1 vector; in fig. 9 the horizontal axis Row Index is the marker point row number and the vertical axis HM is the row levelness, giving the levelness values of all 39 rows. Fig. 10 shows the calculated column marker point perpendicularity (vertical degree of column corners). The perpendicularity VM is a 63 × 1 vector; in fig. 10 the abscissa Column Index is the marker point column number and the ordinate VM is the column perpendicularity, giving the perpendicularity values of all 63 columns. Fig. 11 shows the distribution uniformity of horizontally adjacent marker points. UHM is a 39 × 1 vector; in fig. 11 the abscissa Row Index is the marker point row number and the ordinate UHM is the horizontal adjacent-point uniformity, giving the uniformity values of all 39 rows. Fig. 12 shows the distribution uniformity of vertically adjacent marker points. UVM is a 63 × 1 vector; in fig. 12 the abscissa Column Index is the marker point column number and the ordinate UVM is the vertical adjacent-point uniformity, giving the uniformity values of all 63 columns. With VM < 0.045, HM < 0.5, UHM < 0.12 and UVM < 0.11, the effectiveness of the image correction method for large-field-of-view plane vision measurement is fully verified.
Based on the method provided by the invention, the invention also provides an image correction system for large-field-of-view plane vision measurement, which comprises:
The checkerboard image acquisition module is used for acquiring a checkerboard image of a plane to be detected; and a checkerboard calibration plate is arranged on the plane to be measured.
And the marking point detection and partitioning module is used for carrying out marking point detection on the checkerboard image, extracting the actual image coordinates of the marking points and partitioning the checkerboard image.
And the training database establishing module is used for setting ideal image coordinates for the mark points and establishing a training database of each partition according to the actual image coordinates and the ideal image coordinates of the mark points.
The image correction model building module is used for building a deep learning network model, training the deep learning network model for each partition by utilizing the training database of each partition, and generating a trained image correction model for each partition.
And the remap matrix building module is used for calculating ideal image coordinates of all pixel points in each partition by using the image correction model of each partition and calculating the remap matrix of each partition.
The plane image acquisition module to be detected is used for acquiring the plane image to be detected and partitioning the plane image to be detected.
And the partition undistorted elevation view generation module is used for generating an undistorted elevation view of each partition of the plane to be measured by using the remap matrix of each partition.
The undistorted front view generation module of the plane to be detected is used for splicing the undistorted front views of all the subareas of the plane to be detected to generate an undistorted front view of the plane image to be detected.
The checkerboard image acquisition module specifically comprises:
the camera setting unit is used for fixing a camera with a wide-angle lens above the plane view field to be detected, and supplementing light to a shooting area by utilizing an LED light source;
The checkerboard image acquisition unit is used for placing the checkerboard calibration plate on the plane to be detected, and the computer is used for controlling the camera to shoot and acquire the checkerboard image.
The mark point detection and partitioning module specifically comprises:
the mark point detection unit is used for detecting mark points of the checkerboard image and extracting corner points of the checkerboard as the mark points;
An actual image coordinate extracting unit, configured to extract actual image coordinates of the marker points;
The partitioning unit is used for partitioning the checkerboard image according to the actual image coordinate distribution of the mark points; and the number of the mark points contained in each partition is greater than or equal to 60.
The training database building module specifically comprises:
An ideal image coordinate setting unit for setting ideal image coordinates for the marker points according to actual image coordinate distribution of the marker points and the image correction target;
and the training database establishing unit is used for constructing a training database of each partition according to the actual image coordinates and the ideal image coordinates of all the mark points in each partition.
The remap matrix establishment module specifically comprises:
An ideal image coordinate calculation unit, configured to calculate ideal image coordinates of all pixel points in each partition by using the image correction model of each partition;
And the remap matrix establishing unit is used for establishing a remap matrix of each partition according to the mapping relation between the actual image coordinates and the ideal image coordinates of all the pixel points in each partition.
In the present specification, each embodiment is described in a progressive manner, with each embodiment focusing on its differences from the others; identical and similar parts of the embodiments may be referred to one another. Since the system disclosed in the embodiments corresponds to the method disclosed in the embodiments, its description is relatively brief, and the relevant points refer to the description of the method.
The principles and embodiments of the present invention have been described herein with reference to specific examples; the description is intended only to assist in understanding the method of the present invention and its core ideas. Likewise, modifications to the specific embodiments and the application scope made by those of ordinary skill in the art in light of the ideas of the present invention fall within its scope. In view of the foregoing, this description should not be construed as limiting the invention.

Claims (8)

1. An image correction method for large-view-field plane vision measurement is characterized by comprising the following steps:
obtaining a checkerboard image of a plane to be measured; a checkerboard calibration plate is arranged on the plane to be measured;
Detecting mark points of the checkerboard image, extracting actual image coordinates of the mark points and partitioning the checkerboard image;
setting ideal image coordinates for the mark points, and establishing a training database of each partition according to the actual image coordinates and the ideal image coordinates of the mark points;
Establishing a deep learning network model, and training the deep learning network model for each partition by utilizing a training database of each partition to generate a trained image correction model of each partition;
Calculating ideal image coordinates of all pixel points in each partition by using the image correction model of each partition, and calculating a remap matrix of each partition; the method specifically comprises the following steps: calculating ideal image coordinates of all pixel points in each partition by using the image correction model of each partition; constructing a remap matrix of each partition according to the mapping relation between the actual image coordinates and the ideal image coordinates of all pixel points in each partition;
acquiring a plane image to be detected and partitioning the plane image to be detected;
Generating an undistorted front view of each partition of the plane to be tested by using the remap matrix of each partition;
and splicing the undistorted front views of all the subareas of the plane to be detected to generate an undistorted front view of the plane image to be detected.
2. The method for correcting an image according to claim 1, wherein the step of acquiring a checkerboard image of a plane to be measured comprises:
Fixing a camera with a wide-angle lens above the plane view field to be detected, and supplementing light to a shooting area by utilizing an LED light source;
And placing the checkerboard calibration plate on the plane to be tested, and controlling the camera to shoot by using a computer to acquire the checkerboard image.
3. The image correction method according to claim 1, wherein the performing the marker point detection on the checkerboard image, extracting the actual image coordinates of the marker point, and partitioning the checkerboard image, specifically includes:
Detecting mark points of the checkerboard image, and extracting corner points of the checkerboard as the mark points;
Extracting actual image coordinates of the mark points;
partitioning the checkerboard image according to the actual image coordinate distribution of the mark points; and the number of the mark points contained in each partition is greater than or equal to 60.
4. The image correction method according to claim 1, wherein the setting of ideal image coordinates for the marker point creates a training database for each partition based on the actual image coordinates and ideal image coordinates of the marker point, specifically comprising:
setting ideal image coordinates for the mark points according to the actual image coordinate distribution of the mark points and the image correction targets;
and forming a training database of each partition according to the actual image coordinates and the ideal image coordinates of all the mark points in each partition.
5. An image correction system for large field-of-view planar vision measurement, comprising:
the checkerboard image acquisition module is used for acquiring a checkerboard image of a plane to be detected; a checkerboard calibration plate is arranged on the plane to be measured;
The mark point detection and partitioning module is used for detecting mark points of the checkerboard image, extracting actual image coordinates of the mark points and partitioning the checkerboard image;
the training database establishing module is used for setting ideal image coordinates for the mark points and establishing a training database of each partition according to the actual image coordinates and the ideal image coordinates of the mark points;
The image correction model building module is used for building a deep learning network model, training the deep learning network model for each partition by utilizing the training database of each partition, and generating a trained image correction model for each partition;
The remap matrix building module is used for calculating ideal image coordinates of all pixel points in each partition by using the image correction model of each partition and calculating the remap matrix of each partition; the remap matrix establishment module specifically comprises: an ideal image coordinate calculation unit, configured to calculate ideal image coordinates of all pixel points in each partition by using the image correction model of each partition; the remap matrix establishing unit is used for establishing a remap matrix of each partition according to the mapping relation between the actual image coordinates and the ideal image coordinates of all pixel points in each partition;
the plane image acquisition module to be detected is used for acquiring a plane image to be detected and partitioning the plane image to be detected;
The partition undistorted elevation view generation module is used for generating an undistorted elevation view of each partition of the plane to be measured by using the remap matrix of each partition;
the undistorted front view generation module of the plane to be detected is used for splicing the undistorted front views of all the subareas of the plane to be detected to generate an undistorted front view of the plane image to be detected.
6. The image correction system as claimed in claim 5, wherein said checkerboard image acquisition module comprises:
the camera setting unit is used for fixing a camera with a wide-angle lens above the plane view field to be detected, and supplementing light to a shooting area by utilizing an LED light source;
The checkerboard image acquisition unit is used for placing the checkerboard calibration plate on the plane to be detected, and the computer is used for controlling the camera to shoot and acquire the checkerboard image.
7. The image correction system as claimed in claim 5, wherein said marker point detection and partitioning module comprises:
the mark point detection unit is used for detecting mark points of the checkerboard image and extracting corner points of the checkerboard as the mark points;
An actual image coordinate extracting unit, configured to extract actual image coordinates of the marker points;
The partitioning unit is used for partitioning the checkerboard image according to the actual image coordinate distribution of the mark points; and the number of the mark points contained in each partition is greater than or equal to 60.
8. The image correction system of claim 5, wherein said training database creation module specifically comprises:
An ideal image coordinate setting unit for setting ideal image coordinates for the marker points according to actual image coordinate distribution of the marker points and the image correction target;
and the training database establishing unit is used for constructing a training database of each partition according to the actual image coordinates and the ideal image coordinates of all the mark points in each partition.
CN202210428357.4A 2022-04-22 2022-04-22 Image correction method and system for large-view-field plane vision measurement Active CN114820787B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210428357.4A CN114820787B (en) 2022-04-22 2022-04-22 Image correction method and system for large-view-field plane vision measurement


Publications (2)

Publication Number Publication Date
CN114820787A CN114820787A (en) 2022-07-29
CN114820787B true CN114820787B (en) 2024-05-28

Family

ID=82504907

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210428357.4A Active CN114820787B (en) 2022-04-22 2022-04-22 Image correction method and system for large-view-field plane vision measurement

Country Status (1)

Country Link
CN (1) CN114820787B (en)

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103942796A (en) * 2014-04-23 2014-07-23 清华大学 High-precision projector and camera calibration system and method
EP3200148A4 (en) * 2014-10-31 2017-08-02 Huawei Technologies Co., Ltd. Image processing method and device
CN107657643A (en) * 2017-08-28 2018-02-02 浙江工业大学 A kind of parallax calculation method based on space plane constraint
CN108198219A (en) * 2017-11-21 2018-06-22 合肥工业大学 Error compensation method for camera calibration parameters for photogrammetry
CN108760767A (en) * 2018-05-31 2018-11-06 电子科技大学 Large-size LCD Screen defect inspection method based on machine vision
CN108876749A (en) * 2018-07-02 2018-11-23 南京汇川工业视觉技术开发有限公司 A kind of lens distortion calibration method of robust
CN108986172A (en) * 2018-07-25 2018-12-11 西北工业大学 A kind of single-view linear camera scaling method towards small depth of field system
CN110660034A (en) * 2019-10-08 2020-01-07 北京迈格威科技有限公司 Image correction method and device and electronic equipment
CN111667536A (en) * 2019-03-09 2020-09-15 华东交通大学 Parameter calibration method based on zoom camera depth estimation
WO2021115071A1 (en) * 2019-12-12 2021-06-17 中国科学院深圳先进技术研究院 Three-dimensional reconstruction method and apparatus for monocular endoscope image, and terminal device
CN113358061A (en) * 2021-05-31 2021-09-07 东南大学 Single stripe three-dimensional point cloud measuring method for end-to-end calibration of deep learning network
CN113850195A (en) * 2021-09-27 2021-12-28 杭州东信北邮信息技术有限公司 AI intelligent object identification method based on 3D vision

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110276734B (en) * 2019-06-24 2021-03-23 Oppo广东移动通信有限公司 Image distortion correction method and device


Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
Laigang Zhang; Yibin Li; Yongjun Zhao; Qun Sun; Ying Zhao. High Precision Monocular Plane Measurement for Large Field of View. 2018 IEEE 8th Annual International Conference on CYBER Technology in Automation, Control, and Intelligent Systems (CYBER). 2019, full text. *
W. Turner; LTD. The Document Architecture for the Cornell Digital Library. IETF RFC 1691. 1994, full text. *
Wang Dongwei. Design of a binocular stereo vision image acquisition and processing system based on FPGA. Aviation Precision Manufacturing Technology. 2018-02-15, full text. *
Zhang Laigang; Wei Zhonghui; He Xin; Sun Qun. Application of multi-constraint fusion algorithms in multi-camera measurement systems. Chinese Journal of Liquid Crystals and Displays. 2013, full text. *
Niu Miaomiao; Sun Can; Xiong Haihan. Monocular camera calibration with a polynomial distortion model based on the Tsai method. Technology and Enterprise. 2015, full text. *

Also Published As

Publication number Publication date
CN114820787A (en) 2022-07-29

Similar Documents

Publication Publication Date Title
CN111640157B (en) Checkerboard corner detection method based on neural network and application thereof
CN106558080B (en) Monocular camera external parameter online calibration method
CN109598762B (en) High-precision binocular camera calibration method
CN107507235B (en) Registration method of color image and depth image acquired based on RGB-D equipment
CN104484648B (en) Robot variable visual angle obstacle detection method based on outline identification
CN110031829B (en) Target accurate distance measurement method based on monocular vision
CN109523595B (en) Visual measurement method for linear angular spacing of building engineering
CN108288294A (en) A kind of outer ginseng scaling method of a 3D phases group of planes
CN109029299B (en) Dual-camera measuring device and method for butt joint corner of cabin pin hole
CN103278138B (en) Method for measuring three-dimensional position and posture of thin component with complex structure
CN110443879B (en) Perspective error compensation method based on neural network
CN105654476B (en) Binocular calibration method based on Chaos particle swarm optimization algorithm
CN111210468A (en) Image depth information acquisition method and device
CN105389808A (en) Camera self-calibration method based on two vanishing points
CN102169573A (en) Real-time distortion correction method and system of lens with high precision and wide field of view
CN105631844A (en) Image camera calibration method
CN108492282B (en) Three-dimensional gluing detection based on line structured light and multitask cascade convolution neural network
CN114998448B (en) Multi-constraint binocular fisheye camera calibration and space point positioning method
CN113393439A (en) Forging defect detection method based on deep learning
CN110097516B (en) Method, system and medium for correcting distortion of image on inner hole wall surface
CN106595702A (en) Astronomical-calibration-based spatial registration method for multiple sensors
CN115201883B (en) Moving target video positioning and speed measuring system and method
CN109974618A (en) The overall calibration method of multisensor vision measurement system
CN107492080A (en) Exempt from calibration easily monocular lens image radial distortion antidote
CN116152068A (en) Splicing method for solar panel images

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant